
Tech News - Data

Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices

Amrata Joshi
13 Dec 2018
3 min read
Yesterday, Facebook announced its contribution to MLPerf, a benchmark suite that provides guidelines for measuring AI training and inference speed. Facebook also announced that it is open-sourcing Mask R-CNN2Go, its leading-edge computer vision model optimized for embedded and mobile devices.

Why is Facebook supporting MLPerf?

MLPerf helps build a common set of industry-wide benchmarks to measure the system-level performance of machine learning frameworks, cloud platforms, and hardware accelerators. The benchmark suite covers application use cases such as object detection, image classification, and speech-to-text translation. By developing industry-standard ML models and benchmarks, researchers and engineers get a chance to evaluate and demonstrate the impact of their work.

How will this collaboration be beneficial?

The Facebook team and the MLPerf Edge Inference working group have come together to provide benchmark references, trained with open source data sets, for the Edge Inference category. For the image classification use case, they will provide an implementation of the state-of-the-art ShuffleNet model, and for the pose estimation use case, an implementation of the Mask R-CNN2Go model. Representative benchmarks for edge inference use cases will be defined to characterize the performance bottlenecks of on-device inference execution.

Mask R-CNN2Go

The human detection and segmentation model is based on Mask R-CNN, a simple, flexible, and general framework for object detection and segmentation. It detects objects in an image while predicting key points and generating a segmentation mask for each object. To run Mask R-CNN models in real time on mobile devices, researchers and engineers from the Camera, AML, and Facebook AI Research (FAIR) teams came together and built a lightweight framework, Mask R-CNN2Go. Check out the video here.

Mask R-CNN2Go forms the basis of on-device ML use cases such as person segmentation, object detection, classification, and body pose estimation, enabling accurate, real-time inference. It is designed and optimized for mobile devices and is used to create entertaining experiences on them, such as hand tracking in the “Control the Rain” augmented reality (AR) effect in Facebook Camera. The Mask R-CNN2Go model consists of the following components:

- Trunk model: multiple convolutional layers that generate deep feature representations of the input image.
- Region proposal network (RPN): proposes candidate objects at predefined scales and aspect ratios.
- Detection head: a set of pooling, convolution, and fully connected layers; it refines the bounding box coordinates and groups neighboring boxes with non-max suppression.
- Key point head: predicts a mask for each predefined key point on the body.

Currently, Mask R-CNN2Go runs on Caffe2 but might soon run on PyTorch 1.0, as that machine learning framework is adding capabilities to give developers a seamless path from research to production. Read more about this news in a post by Facebook.

Related reads:
- Facebook retires its open source contribution to Nuclide, Atom IDE, and other associated repos
- Australia’s ACCC publishes a preliminary report recommending Google Facebook be regulated and monitored for discriminatory and anti-competitive behavior
- Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
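The non-max suppression step in the detection head can be made concrete with a short sketch. This is not Facebook's implementation, just a minimal pure-Python version of greedy NMS over `[x1, y1, x2, y2]` boxes with per-box scores:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: keep the highest-scoring box,
    then drop any remaining box that overlaps it by more than
    iou_threshold; repeat with the next best survivor."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

For example, two heavily overlapping detections collapse to the higher-scoring one: `nms([[0,0,10,10],[1,1,11,11],[20,20,30,30]], [0.9,0.8,0.7])` keeps indices `[0, 2]`.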


Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information

Sugandha Lahoti
13 Dec 2018
3 min read
In a fatal accident on March 19, Uber’s prototype self-driving car struck and killed a pedestrian in Arizona. The incident raised alarms about safety problems in self-driving tech, and Uber was widely criticized. In a shocking revelation made yesterday, The Information reported that days before the fatal accident, an Uber manager had tried to warn the company’s top executives about the danger. Robbie Miller, a manager in the testing-operations group, sent an email on March 13 to Eric Meyhofer, the head of Uber’s autonomous vehicle unit, Jon Thomason, VP of software, and five other executives and lawyers. He warned them about the dangers of the software powering the company’s prototype robo-taxis, and that the human backup drivers in the vehicles weren’t properly trained to do their jobs, The Information reports.

What did Miller’s email say?

In his email, Miller pointed to an incident in November 2017, when an Uber car had nearly caused a crash. He prepared a report and urged the Uber team to investigate but was ignored; he was told that “incidents like that happen all of the time.” “A car was damaged nearly every other day in February,” Miller said. “We shouldn’t be hitting things every 15,000 miles.” Miller was part of Uber’s self-driving truck project, which he described as having relatively good safety procedures. The other projects focused on cars, and Miller argued that their safety procedures were extremely inadequate. In his report, Miller suggested several ways to improve safety:

- Put two people in every vehicle: the driver focuses on the road while the other person monitors the driving software and logs misbehavior.
- Drastically scale back the testing program. “I suspect an 85% reduction in fleet size wouldn’t slow development,” he wrote.
- Take strict action against the fleet in the case of a car crash.
- Give everyone involved in the self-driving car project, from developers to safety drivers, the authority to ground the fleet if they see a safety problem.
- Give more personnel access to Uber’s incident reporting database.

People on the internet expressed their disdain over Uber’s safety neglect and sided with Miller.

https://twitter.com/dhh/status/1072972633308688384
https://twitter.com/amir/status/1072508806935076864
https://twitter.com/sudo_lindenberg/status/1072669780899958789

Responding to The Information’s report, Uber said that “the entire team is focused on safely and responsibly returning to the road in self-driving mode.” The company intends to eventually resume on-the-road self-driving testing, but it will do so “only when these improvements have been implemented and we have received authorization from the Pennsylvania Department of Transportation.” This story first appeared on The Information.

Related reads:
- Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way
- Uber fined by British ICO and Dutch DPA for nearly $1.2m over a data breach from 2016
- Uber’s new family of AI algorithms sets records on Pitfall and solves the entire game of Montezuma’s Revenge

NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs

Prasad Ramesh
13 Dec 2018
2 min read
The NYU and AWS Shanghai teams have introduced a new library called Deep Graph Library (DGL), a Python package that simplifies deep learning on graphs on top of existing deep learning frameworks. DGL serves as an interface between existing tensor libraries and data expressed as graphs. It makes it easy to implement graph neural networks such as Graph Convolutional Networks and TreeLSTM while maintaining high computational efficiency. According to the results they report, the speedup on some models is as high as 10x, with better accuracy in some cases. Check out the results on GitHub. Their website states: “We are keen to bring graphs closer to deep learning researchers. We want to make it easy to implement graph neural networks model family. We also want to make the combination of graph based modules and tensor based modules (PyTorch or MXNet) as smooth as possible.”

As of now, DGL supports PyTorch v1.0, and its autobatching is up to 4x faster than DyNet. DGL is tested on Ubuntu 16.04, macOS X, and Windows 10, and should work on newer versions of these OSes. Python 3.5 or later is required; Python 3.4 or older is untested, and support for Python 2 is in the works. Installing it is the same as for any other Python package.

With pip:

```shell
pip install dgl
```

And with conda:

```shell
conda install -c dglteam dgl
```

https://twitter.com/aussetg/status/1072897828677144582

DGL is currently in the beta stage, licensed under Apache 2.0, and has a Twitter page. You can check out DGL at their website.

Related reads:
- UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
- OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
- Deep Learning Indaba presents the state of Natural Language Processing in 2018
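To make “deep learning on graphs” concrete, here is a toy illustration of what a single graph-convolution step computes: each node replaces its feature vector with the mean of its own and its neighbours' vectors. This is plain Python for illustration, not DGL's actual API:

```python
def gcn_step(adj, feats):
    """One simplified graph-convolution step: each node's new feature
    vector is the mean of its own and its neighbours' current vectors.

    adj   -- dict mapping node -> list of neighbour nodes
    feats -- dict mapping node -> feature vector (list of floats)
    """
    new_feats = {}
    for node, neighbours in adj.items():
        group = [node] + list(neighbours)
        dim = len(feats[node])
        new_feats[node] = [
            sum(feats[n][d] for n in group) / len(group) for d in range(dim)
        ]
    return new_feats

# A 3-node path graph 0 - 1 - 2 with one scalar feature per node.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [0.0], 1: [3.0], 2: [6.0]}
```

Running `gcn_step(adj, feats)` smooths the features along edges: node 0 becomes (0+3)/2 = 1.5, node 1 becomes (3+0+6)/3 = 3.0, node 2 becomes (6+3)/2 = 4.5. Stacking such steps, with learned weights and nonlinearities in between, is the essence of models like GCN that DGL implements efficiently.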

Instaclustr releases three open source projects for Apache Cassandra database users

Sugandha Lahoti
13 Dec 2018
2 min read
Yesterday, Instaclustr released three open source projects easing Cassandra-Kubernetes integration and LDAP/Kerberos authentication: a Cassandra operator for Kubernetes, an LDAP authenticator plugin, and a Kerberos authenticator plugin.

Cassandra on Kubernetes

The Cassandra operator functions as a Cassandra-as-a-Service on Kubernetes, making it easier for developers to combine these technologies. It provides a consistent environment and set of operations that are reproducible across production clusters and development, staging, and QA environments. The operator is now ready to use in development environments through GitHub; enterprise support for it will start next year.

LDAP authenticator

The LDAP authenticator plugin lets developers benefit from secure LDAP authentication without needing to write their own solutions, and lets them transition to the authenticator with zero downtime. It is freely available on GitHub, along with setup and usage instructions.

Kerberos authenticator

The Kerberos authenticator makes Kerberos’ secure authentication and true single sign-on capabilities available to developers using Apache Cassandra. This project also includes a Kerberos authenticator plugin for the Cassandra Java driver.

“With these open source projects, we’ve set out to empower any developer who wishes to pair Cassandra with Kubernetes, or take advantage of LDAP or Kerberos authentication within their Cassandra deployments. We invite anyone interested to join our community of contributors, and suggest or offer improvements to these open source projects,” said Ben Bromhead, CTO.

Related reads:
- ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features
- cstar: Spotify’s Cassandra orchestration tool is now open source!
- Twitter adopts Apache Kafka as their Pub/Sub System
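For context, Cassandra selects its authenticator through the `authenticator` setting in cassandra.yaml (the default is `AllowAllAuthenticator`, with `PasswordAuthenticator` as the built-in alternative). A plugin such as Instaclustr's is wired in the same way; the class name below is purely illustrative, and the actual fully-qualified name is documented with the plugin:

```yaml
# cassandra.yaml -- swap the default authenticator for the LDAP plugin.
# "LDAPAuthenticator" here is a placeholder; use the class name the
# plugin's setup instructions specify.
authenticator: LDAPAuthenticator
```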


Google kills another product: Fusion tables

Prasad Ramesh
12 Dec 2018
3 min read
Yesterday, Google announced on a support page that it will ‘turn down’ Google Fusion Tables. Fusion Tables users got an email yesterday saying that the service will be retired on December 3, 2019. With Fusion Tables, users can plot data on a Google Map without any coding, which made it popular among journalists, scientists, and other non-technical groups that use data visualization regularly. Google encouraged users to switch to other products, like its BigQuery cloud data warehouse, its Google Data Studio business intelligence tool, or simply Google Sheets. The company says it is also working to make other mapping tools, currently used internally, available. In its blog post, Google mentions the following alternatives to Fusion Tables:

- Google BigQuery
- Google Cloud SQL
- Google Sheets
- Google Data Studio
- Other tools that will be available in the coming months

The email from Google reads: “Google Fusion Tables was launched almost nine years ago as a research project in Google Labs, later evolving into an experimental product. For a long time, it was one of the few free tools for easily visualizing large datasets, especially on a map. Since then, Google has developed several alternatives, providing deeper experiences in more specialized domains.”

Any maps using a Fusion Tables Layer in the Maps JavaScript API v3.37 will be met with errors from August 2019. You can download your data in various formats, CSV, KML, and KML Network Link, before Fusion Tables ‘turns down’.

A comment from Hacker News reads: “I stopped teaching FT because several years ago because it seemed clear, in an implicit way, that it wasn't getting the traction. I hardly ever heard anyone inside or outside of Google talk/tweet/etc about it, in the same way people do for Sheets or BigQuery. I missed the easy data-to-interactive-map workflow for teaching, but for production work, FT was just too clunky (and merge far too limited compared to a SQL join) to justify using as a data store.”

https://twitter.com/Vince_Dixon_/status/1072556568665866241
https://twitter.com/jackserle/status/1072800316289179648

It seems Fusion Tables never got enough traction, a story we have seen play out with other recently retired Google products like G+ and Allo. For more details, visit the Google support page.

Related reads:
- Google to discontinue Allo; plans to power ‘Messages’ with Rich Communication Services (RCS) Chat
- Google+ affected by another bug, 52M users compromised, shut down within 90 days
- Google hints shutting down Google News over EU’s implementation of Article 11 or the “link tax”


As Pichai defends Google’s “integrity” ahead of today’s Congress hearing, over 60 NGOs ask him to defend human rights by dropping DragonFly

Natasha Mathur
11 Dec 2018
3 min read
Google CEO Sundar Pichai is set to testify before the House Judiciary Committee today, and has submitted a written testimony to the House Committee ahead of the hearing. Pichai points out in the testimony that there is no “political bias” within the company: “I lead this company without political bias and work to ensure that our products continue to operate that way. To do otherwise would go against our core principles and our business interests.” He also talks about data security, emphasizing that protecting the privacy and security of their users has always been an “essential mission” for the organization, and describes how Google has consistently put in an enormous amount of work over the past years to bring “choice, transparency, and control” to its users. Pichai also highlights how users look to Google for accurate and trusted information, and how Google works very hard to maintain the “integrity” of its products in order to live up to those standards. The testimony further talks about Google’s contribution to the US economy and military, pointing out that despite Google’s expansion and growth into new markets, it will always have “American roots”.

Now, although the hearing, titled “Transparency & Accountability: Examining Google and its Data Collection, Use, and Filtering Practices”, will focus on the potential bias and need for transparency within Google, its infamous Project Dragonfly will also almost certainly be discussed. Google has faced continued criticism for its censored Chinese search engine, which was revealed earlier this year in a bombshell report by the Intercept. Yesterday, more than 60 NGOs, as well as individuals including Edward Snowden, signed an open letter protesting against Google’s Project Dragonfly and its other plans for China.

“We are disappointed that Google in its letter of 26th October failed to address the serious concerns of human rights groups over Project Dragonfly”, reads the letter addressed to Pichai. It says that Google’s response, along with other details about Project Dragonfly, only intensifies the fear that Google may compromise its commitments to human rights in order to gain access to the Chinese search market. The letter also sheds light on new details leaked to the media, suggesting that if Google launches Project Dragonfly it would accelerate “repressive state censorship, surveillance, and other violations” affecting almost a billion people in China. It adds that although Google has stated that it is “not close” to launching a search product in China and that it will consult key stakeholders before doing so, media reports say otherwise: reports based on an internal Google memo suggest the project was in a ‘pretty advanced state’ and that the company had invested extensive resources in its development. “We welcome that Google has confirmed the company ‘takes seriously’ its responsibility to respect human rights. However, the company has so far failed to explain how it reconciles that responsibility with the company’s decision to design a product purpose-built to undermine the rights to freedom of expression and privacy”, reads the letter.

Related reads:
- Google bypassed its own security and privacy teams for Project Dragonfly reveals Intercept
- Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?

Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs

Bhagyashree R
11 Dec 2018
2 min read
Google researchers have built a tool called JAX, a domain-specific tracing JIT compiler that generates high-performance accelerator code from pure Python and NumPy machine learning programs. It combines Autograd and XLA for high-performance machine learning research; at its core, it is an extensible system for transforming numerical functions.

Autograd lets JAX automatically differentiate native Python and NumPy code. It can handle a large subset of Python features such as loops, branches, recursion, and closures. It supports both reverse-mode (backpropagation) and forward-mode differentiation, and the two can be composed arbitrarily in any order. XLA, or Accelerated Linear Algebra, is a linear algebra compiler used for optimizing TensorFlow computations. JAX uses XLA to run NumPy programs on GPUs and TPUs: the library calls are compiled and executed just-in-time. JAX also lets you compile your own Python functions just-in-time into XLA-optimized kernels using a one-function API, jit.

How does JAX work?

The basic function of JAX is to specialize and translate high-level Python and NumPy functions into a representation that can be transformed and then lifted back into a Python function. It traces Python functions by monitoring all the basic operations applied to the input, then records these operations and the data flow between them in a directed acyclic graph (DAG). To trace functions, it wraps primitive operations; when they are called, they add themselves to a list of operations performed, along with their inputs and outputs. To keep track of the data flow between these primitive operations, the values being tracked are wrapped in Tracer class instances. The team is working towards expanding this project with support for cloud TPU, multi-GPU, and multi-TPU. In the future, it will come with full NumPy coverage, some SciPy coverage, and more.
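The tracing idea described above can be sketched in a few lines of plain Python. This is a toy illustration of the mechanism, not JAX's actual Tracer class: wrapped values record every primitive applied to them, yielding a list of operations (a linearized DAG) that a compiler could then transform:

```python
class Tracer:
    """Toy tracer: wraps a value and records each primitive applied to it."""
    def __init__(self, value, tape):
        self.value = value
        self.tape = tape  # shared list of (op, input, output) records

    def _apply(self, op_name, fn, other):
        rhs = other.value if isinstance(other, Tracer) else other
        out = fn(self.value, rhs)
        self.tape.append((op_name, self.value, out))
        return Tracer(out, self.tape)

    def __add__(self, other):
        return self._apply("add", lambda a, b: a + b, other)

    def __mul__(self, other):
        return self._apply("mul", lambda a, b: a * b, other)

def trace(fn, x):
    """Run fn on a traced input; return its result and the recorded op names."""
    tape = []
    out = fn(Tracer(x, tape))
    return out.value, [op for op, _, _ in tape]
```

Tracing `lambda x: x * 2 + 1` on the input 3 evaluates to 7 while recording the op sequence `["mul", "add"]`, the sort of record a real tracing JIT would hand to a compiler like XLA.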
As this is still a research project, bugs are to be expected, and it is not recommended for use in production. To read more in detail and contribute to this project, head over to GitHub.

Related reads:
- Google AdaNet, a TensorFlow-based AutoML framework
- Graph Nets – DeepMind’s library for graph networks in Tensorflow and Sonnet
- Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google


GitHub introduces Content Attachments API (beta)

Amrata Joshi
11 Dec 2018
3 min read
Yesterday, the team at GitHub released the Content Attachments API (beta), which allows GitHub Apps to provide more information for URLs that link to registered domains.

https://twitter.com/github/status/1072205045469405184

Why the Content Attachments API?

Users share a lot of links on GitHub: nearly one-third of comments on pull requests and issues include a link. Each link has important content behind it, and following each link navigates the user away from the current context and breaks their focus. This is time-consuming and at times affects productivity. With the Content Attachments API, the content behind each URL can be embedded directly in the conversation on GitHub.

New GitHub Apps using the Content Attachments API

RunKit: RunKit is an interactive Node environment that makes it easy for users to file reproducible and runnable bug reports for Node.js projects. RunKit notebooks package the environment within a container that can be shared via a URL to give project maintainers access. With the Content Attachments API, a RunKit link can now show the contents of the entire notebook and its output.

LeanBoard: LeanBoard, a whiteboard with sticky notes, helps remote teams collaborate in real time. A snapshot of the board can be pulled into the related GitHub issue, and with the Content Attachments API it is now possible to drop a link in an issue or pull request to preserve the conversation. The screenshots in content attachments are also updated automatically, every five minutes, as the board changes.

CloudApp: CloudApp features screen recording, video messaging, screenshot annotation, and GIF creation. With CloudApp and the Content Attachments API, users can paste a URL to render a GIF or screenshot in an issue.

Lucidchart: With Lucidchart, users can create and collaborate on architecture diagrams, flowcharts, mockups, user flows, and other visuals in real time. With the Content Attachments API, users can now add these visuals to a GitHub issue, and the diagrams update automatically as the system is updated.

https://twitter.com/lucidchart/status/1072187150282575872

It will be interesting to see whether GitHub also implements frame previews in issues or pull requests. Users are still curious about this release and are asking whether the Content Attachments API supports iframes or just markdown.

https://twitter.com/pomber/status/1072242840938385410

To get started with the Content Attachments API, check out GitHub’s blog.

Related reads:
- Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
- GitHub acquires Spectrum, a community-centric conversational platform
- GitHub Octoverse: The top programming languages of 2018
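As a rough sketch of how an app responds to a shared link: the app receives a content-reference webhook for a URL on its registered domain and replies by POSTing a markdown attachment for that reference. The endpoint shape below reflects the beta as we understand it and may change; `build_attachment_request` is our own illustrative helper, not part of any SDK:

```python
def build_attachment_request(content_reference_id, title, body):
    """Build the REST call a GitHub App would make to attach rendered
    content (markdown) to a link shared in an issue or pull request."""
    return {
        "method": "POST",
        "path": f"/content_references/{content_reference_id}/attachments",
        "json": {"title": title, "body": body},  # body is rendered as markdown
    }

# e.g. a RunKit-style app attaching notebook output for reference id 17:
req = build_attachment_request(17, "RunKit notebook", "**Output:** 42")
```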


Microsoft calls on governments to regulate Facial recognition tech now, before it is too late

Melisha Dsouza
10 Dec 2018
5 min read
Last week, Microsoft President Brad Smith published a blog post requesting that governments regulate the rapidly evolving facial recognition technology. Alongside all the merits the technology offers, Smith states that it has the potential to be abused. He urges that 2019 be the year that governments focus on regulating the tech, because “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.” The post highlights how Microsoft and other tech companies will need to start creating safeguards to address facial recognition technology and its potential for abuse. With the support of governments and the tech sector, Microsoft believes facial recognition technology can create positive societal benefits. Considering that major tech giants like Amazon and Google have faced backlash for providing their facial recognition technology to governments, citizens need assurance that this technology will only have positive societal impacts. Smith lists three important problems in this area that need to be addressed with government assistance:

- Certain uses of facial recognition technology increase the risk of decisions, and outcomes more generally, that are biased and, in some cases, violate laws prohibiting discrimination.
- The widespread use of this technology can lead to new intrusions into people’s privacy.
- The use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.

How can legislation help?

#1 Issue: address bias and discrimination

Microsoft claims that it and other tech companies have been actively working to identify and reduce these errors while improving the accuracy and quality of facial recognition tools and services. Laws are needed in this area because “market forces will work well only if potential customers are well-informed and able to test facial recognition technology for accuracy and risks of unfair bias, including biases that arise in the context of specific applications and environments.” Smith suggests that:

- Legislation should mandate that tech companies offering facial recognition services provide easy-to-understand documentation explaining the capabilities and limitations of the technology.
- Providers of commercial facial recognition services should enable third parties engaged in independent testing to conduct and publish reasonable tests of their services for accuracy and unfair bias.
- Entities that deploy facial recognition should undertake a meaningful human review of facial recognition results before making final decisions in what the law deems “consequential use cases” that affect consumers.
- Entities that deploy facial recognition services should recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers.

#2 Issue: intrusion into people’s privacy

Microsoft believes people deserve to know when this technology is being used, so they can ask questions and exercise some choice in the matter if they wish. This transparency is important for building public knowledge of and confidence in the technology. To implement this, Smith suggests:

- Legislation should require entities that use facial recognition to identify consumers to notify them that these services are being used.
- The law should specify that consumers consent to the use of facial recognition services when they enter premises, or proceed to use online services, that carry this type of clear notice.

#3 Issue: use of facial recognition technology by a government can encroach on democratic freedoms

Here, Smith elaborates on how facial recognition technology could put fundamental freedoms at risk. Governments can combine this technology with surveillance cameras and massive computing power and storage in the cloud to enable continuous surveillance of specific individuals, following anyone anywhere at any time, or even all the time. To prevent an encroachment on democratic freedoms, legislation should permit law enforcement agencies to use facial recognition for ongoing surveillance of specified individuals in public spaces only when a court order has been obtained, or where there is an emergency involving imminent danger or risk of death or serious physical injury to a person. Microsoft itself has brought four lawsuits against the U.S. government since 2013 to protect people’s privacy rights.

Smith mentions that Microsoft intends to let six principles guide the company’s use of facial recognition going forward: fairness, transparency, accountability, nondiscrimination, notice and consent, and lawful surveillance. He adds that Microsoft will formalize these principles in further documents, with an eye toward implementing them before the end of March 2019. Head over to Smith’s full blog post to see his arguments and reasoning on facial recognition technology.

Related reads:
- DC Airport nabs first imposter using its newly deployed facial recognition security system
- Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report
- Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles


Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference

Prasad Ramesh
10 Dec 2018
3 min read
Researchers from Rochester Institute of Technology published a paper which describes a method to maintain speed and accuracy when dealing with high-dimensional data sets. What is the paper about? This paper titled Sparse Covariance Modeling in High Dimensions with Gaussian Processes studies the statistical relationships among components of high-dimensional observations. The researchers propose to model the changing covariances of observation elements as sparse multivariate stochastic processes. Particularly their novel covariance modeling method used reduces dimensionality. It does so by relating the observation vectors to a subspace with lower dimensions. The changing correlations are characterized by jointly modeling the latent factors and factor loadings as collections of basis functions. They vary with the covariates as Gaussian processes. The basis sparsity is encoded by automatic relevance determination (ARD) through the coefficients to account for inherent redundancy. The experiments conducted across various domains using this method show superior performances to the best current methods. What modeling methods are used? In many AI applications, there are complex relationships among different components of high-dimensional data sets. These relationships can change across non-random covariates, say, an experimental condition. Two examples listed in the paper which were also used in the experiments to test the method are as follows: In a computational gene regulatory network (GRN) interface, the topological structures of GRNs are context dependent. The interactions of gene activities will be different in different conditions like temperature, pH etc,. In a data set displaying crime occurrences, correlations are seen in spatially disjoint spaces but the spatial correlations occur over a period of time. In such cases, the modeling methods used typically combine heterogeneous data taken from different experimental conditions or sometimes in a single data set. 
The researchers propose a novel covariance modeling method that allows cov(y|x) = Σ(x) to change flexibly with x. One of the authors, Rui Li, stated: “This research is motivated by the increasing prevalence of high-dimensional data sets and the computational capacity to analyze and model their volatility and co-volatility varying over some covariates. The study proposed a methodology to scale to high dimensional observations by reducing the dimensions while preserving the latent information; it allows sharing information in the latent basis across covariates.”

The method outperformed competing approaches across different experiments. It is robust to the choice of hyperparameters and produces a lower root mean square error (RMSE).

This paper was presented at NeurIPS 2018; you can read it here.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
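Since the paper's quantitative comparison rests on RMSE, here is a minimal pure-Python reference for the metric (the function name is ours, not from the paper):

```python
import math

def rmse(predictions, targets):
    """Root mean square error between two equal-length sequences."""
    if len(predictions) != len(targets):
        raise ValueError("sequences must have the same length")
    squared_errors = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Lower RMSE indicates covariance estimates closer to the ground truth.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(rmse([2.0, 2.0], [0.0, 0.0]))            # 2.0
```

In the paper's setting the sequences would be flattened entries of the estimated and true covariance matrices rather than the toy values shown here.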
Natasha Mathur
10 Dec 2018
4 min read

PyTorch 1.0 is here with JIT, C++ API, and new distributed packages

It was just two months ago that Facebook announced the release of PyTorch 1.0 RC1. Facebook is now out with the stable release of PyTorch 1.0. The latest release, announced last week at the NeurIPS conference, brings new features such as JIT, a brand new distributed package, and Torch Hub, along with breaking changes, bug fixes and other improvements.

PyTorch is an open source, Python-based deep learning framework. “It accelerates the workflow involved in taking AI from research prototyping to production deployment, and makes it easier and more accessible to get started”, reads the announcement page. Let’s now have a look at what’s new in PyTorch 1.0.

New Features

JIT

JIT is a set of compiler tools that bridges the gap between research in PyTorch and production. JIT enables the creation of models that can run without any dependency on the Python interpreter. PyTorch 1.0 offers two ways to make your existing code compatible with the JIT: tracing with torch.jit.trace or annotating with torch.jit.script. Once the models have been annotated, Torch Script code can be optimized and serialized for later use in the new C++ API, which doesn't depend on Python.

Brand new distributed package

In PyTorch 1.0, the new torch.distributed package and torch.nn.parallel.DistributedDataParallel are backed by a brand new, re-designed distributed library. Major highlights of the new library are as follows:

The new torch.distributed is performance driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
There are significant DistributedDataParallel performance improvements for hosts with slower networks, such as Ethernet-based hosts.
It adds async support for all distributed collective operations in the torch.distributed package.

C++ frontend [API unstable]

The C++ frontend is a complete C++ interface to the PyTorch backend.
It follows the API and architecture of the established Python frontend and is meant to enable research in high performance, low latency and bare metal C++ applications. It also offers equivalents to torch.nn, torch.optim, torch.data and other components of the Python frontend. The PyTorch team has released the C++ frontend marked as “API unstable” as part of PyTorch 1.0: although it is ready to use for research applications, it still needs to be stabilized over future releases.

Torch Hub

Torch Hub is a pre-trained model repository designed to facilitate research reproducibility. Torch Hub supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository with the help of a hubconf.py file. Once published, users can load the pre-trained models with the torch.hub.load API.

Breaking Changes

Indexing a 0-dimensional tensor now raises an error instead of a warning.
torch.legacy has been removed.
torch.masked_copy_ has been removed; use torch.masked_scatter_ instead.
torch.distributed: the TCP backend has been removed. It is recommended to use the Gloo and MPI backends for CPU collectives and the NCCL backend for GPU collectives.
The torch.tensor function with a Tensor argument can now return a detached Tensor (i.e. a Tensor where grad_fn is None) in PyTorch 1.0.
torch.nn.functional.multilabel_soft_margin_loss now returns Tensors of shape (N,) instead of (N, C), to match the behaviour of torch.nn.MultiMarginLoss; it is also more numerically stable.
Support for C extensions has been removed in PyTorch 1.0.
torch.utils.trainer has been deprecated.

Bug fixes

torch.multiprocessing has been fixed and now correctly handles CUDA tensors, requires_grad settings, and hooks.
A memory leak during packing in tuples has been fixed.
“RuntimeError: storages that don't support slicing” when loading models saved with PyTorch 0.3 has been fixed.
The incorrectly calculated output sizes of torch.nn.Conv modules with stride and dilation have been fixed.
torch.dist has been fixed for infinity, zero and minus infinity norms.
torch.nn.InstanceNorm1d has been fixed and now correctly accepts 2-dimensional inputs.
torch.nn.Module.load_state_dict showed an incorrect error message, which has been fixed.
A broadcasting bug in torch.distributions.studentT.StudentT has been fixed.

Other Changes

“Advanced indexing” performance has been considerably improved on both CPU and GPU.
torch.nn.PReLU speed has been improved on both CPU and GPU.
Printing large tensors has become faster.
N-dimensional empty tensors have been added in PyTorch 1.0, allowing tensors with 0 elements to have an arbitrary number of dimensions. They also support indexing and other torch operations.

For more information, check out the official release notes.

Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
What is PyTorch and how does it work?
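The torch.jit.script annotation path described above can be sketched in a few lines (a minimal illustration assuming PyTorch 1.0 or later is installed; clipped_sum is our own toy function, not an example from the release notes):

```python
import torch

@torch.jit.script
def clipped_sum(x):
    # Compiled to Torch Script by the JIT; the compiled graph can run
    # without depending on the Python interpreter.
    return torch.clamp(x, min=0.0).sum()

x = torch.tensor([-1.0, 2.0, 3.0])
print(clipped_sum(x))  # tensor(5.)
```

The scripted function behaves like its eager counterpart here; the difference is that its graph can be serialized and later loaded from the C++ API.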

Sugandha Lahoti
07 Dec 2018
3 min read

Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games

Google’s DeepMind introduced AlphaZero last year as a reinforcement learning program that masters three different board games, chess, shogi, and Go, and beats world-champion programs in each case. Yesterday, they announced that a full evaluation of AlphaZero has been published in the journal Science, confirming and updating the preliminary results.

The research paper describes how DeepMind’s AlphaZero learns each game from scratch, without any human intervention and with no inbuilt domain knowledge beyond the basic rules of the game. Unlike traditional game-playing programs, DeepMind’s AlphaZero uses deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm. The program’s first plays are completely random. Over time, the system uses RL algorithms to learn from wins, losses and draws to adjust the parameters of the neural network. The amount of training varies, taking approximately 9 hours for chess, 12 hours for shogi, and 13 days for Go. For searching, it uses Monte-Carlo Tree Search (MCTS) to select the most promising moves.

Testing and Evaluation

DeepMind’s AlphaZero was tested against the best engines for chess (Stockfish), shogi (Elmo), and Go (AlphaGo Zero). All matches were played with three hours per game, plus an additional 15 seconds for each move. AlphaZero was able to beat all its opponents in each evaluation. Per DeepMind’s blog:

In chess, DeepMind’s AlphaZero defeated the 2016 TCEC (Season 9) world champion Stockfish, winning 155 games and losing just six out of 1,000. To verify the robustness of AlphaZero, it also played a series of matches that started from common human openings. In each opening, AlphaZero defeated Stockfish.
It also played a match starting from the set of opening positions used in the 2016 TCEC world championship, along with a series of additional matches against the most recent development version of Stockfish and a variant of Stockfish that uses a strong opening book. AlphaZero won all of these matches.
In shogi, AlphaZero defeated the 2017 CSA world champion version of Elmo, winning 91.2% of games.
In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.

AlphaZero’s ability to master three different complex games is important progress towards building a single AI system that can solve a wide range of real-world problems and generalize to new situations. People on the internet are also highly excited about this new achievement.

https://twitter.com/DanielKingChess/status/1070755986636488704
https://twitter.com/demishassabis/status/1070786070806192129
https://twitter.com/TrevorABranch/status/1070765877669187584
https://twitter.com/LeonWatson/status/1070777729015013376
https://twitter.com/Kasparov63/status/1070775097970094082

Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare.
Google makes major inroads into healthcare tech by absorbing DeepMind Health.
AlphaZero: The genesis of machine intuition
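As a rough way to read those win rates, a score rate can be converted into an approximate Elo rating gap with the standard logistic model. This is our back-of-the-envelope sketch, not a figure from the paper, and it ignores draws and color balance:

```python
import math

def elo_gap(score_rate):
    """Approximate Elo advantage implied by an expected score rate (0 < s < 1)."""
    return -400.0 * math.log10(1.0 / score_rate - 1.0)

# AlphaZero vs. Elmo: a 91.2% score suggests a gap of roughly 400 points.
print(round(elo_gap(0.912)))  # 406
# AlphaZero vs. AlphaGo Zero: 61% suggests a gap of roughly 78 points.
print(round(elo_gap(0.61)))   # 78
```

The same formula in reverse is how chess rating systems predict expected scores from a rating difference.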

Prasad Ramesh
06 Dec 2018
4 min read

Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?

Facebook’s artificial intelligence research group, FAIR, just turned five. In a blog post published yesterday, Facebook executives discussed the accomplishments FAIR has made over the last five years, and where it might be heading in the future.

The team was formed with the aim of advancing state-of-the-art AI via open research. FAIR has grown since its inception and now has labs in the USA and Europe. The team has worked broadly with the open-source community, and some of its papers have received awards. A significant part of FAIR research centers on the keys to reasoning, prediction, planning, and unsupervised learning. These areas of investigation, in turn, require a better theoretical understanding of various fields related to artificial intelligence. FAIR believes that long-term research explorations are necessary to unlock the full potential of artificial intelligence.

Important milestones achieved by the FAIR team

Memory networks

FAIR developed a new class of machine learning models that can overcome a limitation of neural networks: long-term memory. These new models can remember previous interactions to answer general knowledge questions, while keeping previous statements of a conversation in context.

Self-supervised learning and generative models

FAIR was fascinated by a new unsupervised learning method, Generative Adversarial Networks (GANs), proposed in 2014 by researchers from MILA at Université de Montréal. From 2015 on, FAIR published a series of papers that showcased the practicality of GANs. FAIR researchers and Facebook engineers have used adversarial training methods for a variety of applications, including long-term video prediction and creating graphic designs for fashion pieces.

Scalable text classification

In 2016, FAIR built fastText, a framework for rapid text classification and learning word representations.
In a 2017 paper, FAIR proposed a model that assigns vectors to “subword units” (sequences of 3 or 4 characters) rather than to whole words. This allows the system to create representations for words that were not present in the training data, resulting in a model that can classify billions of words while handling words it was never trained on. fastText is now available in 157 languages.

Translation research

FAIR developed a CNN-based neural machine translation architecture and published a paper on it in 2017. “Multi-hop” CNNs are easier to train on limited data sets and can also better understand misspelled or abbreviated words; they are designed to mimic the way humans translate sentences, taking multiple glimpses at the sentence they are trying to translate. The result was a 9x increase in speed over RNNs while maintaining comparable accuracy.

AI tools

In 2015, the FAIR community open-sourced Torch deep learning modules to speed up training of larger neural nets. Torchnet was released in 2016 to build effective and reusable learning systems. Further, they released Caffe2, a modular deep learning framework for mobile computing. After that, they collaborated with Microsoft and Amazon to launch ONNX, a common representation for neural networks that makes it simple to move models between frameworks.

A new benchmark for computer vision

In 2017, FAIR researchers won the International Conference on Computer Vision Best Paper award for Mask R-CNN, which combines object detection with semantic segmentation. The paper stated: “Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners.”

Faster training and bigger data sets for image recognition

Facebook’s Applied Machine Learning (AML) team discussed how they trained image recognition networks on large sets of public images with hashtags. The biggest dataset included 3.5 billion images and 17,000 hashtags.
This was a breakthrough made possible by FAIR’s research on training speed: FAIR was able to train on ImageNet an order of magnitude faster than the previous best.

According to FAIR, “Our ultimate goal was to understand intelligence, to discover its fundamental principles, and to make machines significantly more intelligent.” The group continues to expand its research into areas such as developing machines that can acquire models of the real world with self-supervised learning, training machines to reason and to plan, and conceiving complex sequences of actions. This is also why the community is working on robotics, visual reasoning, and dialogue systems.

Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset as part of their FastMRI project
The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
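To make the subword idea mentioned earlier concrete, here is a minimal sketch of fastText-style character n-gram extraction (the angle-bracket boundary markers follow the paper's convention; the function name is ours, not from the fastText library):

```python
def subword_ngrams(word, n_min=3, n_max=4):
    """Character n-grams of a word, with < and > marking word boundaries."""
    wrapped = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    return grams

# Even an unseen word decomposes into subwords that may be shared
# with words observed during training.
print(subword_ngrams("where"))
# ['<wh', 'whe', 'her', 'ere', 're>', '<whe', 'wher', 'here', 'ere>']
```

A word's vector is then built from the vectors of its n-grams, which is why representations exist even for out-of-vocabulary words.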
Melisha Dsouza
06 Dec 2018
7 min read

British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents.”
-Damian Collins, member of Parliament and committee chair

It's been a year full of controversies for Facebook. Just a month after the New York Times published a report on the tech giant’s questionable leadership morals, a UK parliamentary committee published 250 pages of Facebook internal documents, including e-mails sent between CEO Mark Zuckerberg and other senior executives. The documents, published yesterday, throw light on how Mark Zuckerberg and other executives tried to monetize their valuable user data, allowed apps to use Facebook to grow their networks as long as it increased usage of Facebook, placed strict limits on possible competitor access, and much more.

The files were seized by UK authorities just over a week ago to assist the investigation of the Cambridge Analytica scandal. Damian Collins, the chair of the select committee on culture, media, and sport, invoked Parliament’s summoning rights to force Ted Kramer, founder of the US software firm Six4Three, to release the documents. Kramer has been involved in a legal battle with Facebook since 2015 over developer access to user data. Kramer was addressed by a security representative at his hotel and given a two-hour deadline to give the papers up. When Kramer failed to do so, he was escorted to Parliament, where he handed the documents over. The documents were believed to contain details on Facebook’s data and privacy controls that led to the Cambridge Analytica scandal, including e-mails between senior executives such as CEO Mark Zuckerberg.

“I believe there is considerable public interest in releasing these documents.
They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market.”
-Damian Collins

Some highlights from the documents

The alarmingly casual tone of Mark Zuckerberg's reply, “Yup, go for it,” to an engineer's suggestion that the better way to deal with the possible competitive threat of Twitter’s Vine would be to cut off Vine’s access to Facebook data.
Facebook found ways to access users’ call history in order to make “People You May Know” suggestions and tweak news-feed rankings. This was done without alerting users to the decision.
One of the documents points out that Zuckerberg personally reviewed a list of apps from strategic competitors that were not allowed to use Facebook’s advertising services, or services for applications, “without Mark level sign-off.”
Facebook used Onavo (an Israeli analytics company) to monitor customers’ usage of mobile apps, again without their knowledge. The analytics showed the company how many people had downloaded apps and how often they used them. This information was used to judge whether a company was a potential acquisition or a threat. Facebook’s “Industry update” presentation based on Onavo data, dated 26 April 2013, shows the market reach of popular media apps.
Mark Zuckerberg encouraged “full reciprocity” between Facebook and app developers. The email said, “you share all your data on users with us, and we’ll share all of ours with you”.
Facebook “whitelisted” certain companies (it is unknown on what basis this list was made), including Airbnb, Netflix, and Badoo, which had full access to users’ friends’ data after the platform changes in 2014-15.
The documents also highlight that, in a 2012 email, Zuckerberg suggested making Facebook login and posting content on the platform free, while charging “a lot of money” to read user data from the network. According to Facebook, that proposal was never implemented. However, executives also seemed concerned that enabling Facebook logins and data access for potentially competing platforms could ultimately affect user activity on Facebook itself. One of Zuckerberg’s 2012 emails stated, “Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it. However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network.”

Mark Zuckerberg’s reply to the leaked emails

In a Facebook post on Wednesday, Mark Zuckerberg responded to these publicly released documents. In a way, his post seems, once again, to deflect readers' attention from the matter at hand and focus on explanations that do not address the concerns arising from the document leak. He claims that the company limited access to data to “prevent abusive apps” starting in 2014. This was done to stop sketchy apps, like the quiz app that sold data to Cambridge Analytica, from operating on Facebook’s platform. He further added that limited data extensions were given to particular developers, and that whitelists of developers allowed to use certain features are commonly used in beta testing. “In some situations, when necessary, we allowed developers to access a list of the users’ friends,” according to Facebook.

In a later statement emailed to Fast Company, the company said that some of the documents, which were originally turned over in a California lawsuit, could be misleading and don’t necessarily reflect actual company practices.
“As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context,” a spokesperson wrote. “We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.”

These constant controversies have put the workforce at Facebook ill at ease. In an anonymous interview with Buzzfeed, two former employees said the “spate of negative reports has cast a shadow over the company in recent weeks”. The report also describes a “tense and, at times, hostile atmosphere inside the company”, one in which both senior employees and even staunch loyalists are contemplating their futures.

According to Bloomberg, Kramer was ordered by a judge on Friday to surrender his laptop to a forensic expert after admitting he turned over the documents to the British lawmakers in violation of a U.S. court order. Facebook wants the laptop evaluated to determine what happened in the U.K., to what extent the court order was breached, and how much of its confidential information has been divulged to the committee.

On a side note, Facebook Inc.'s board of directors backed Sheryl Sandberg, saying it was "entirely appropriate" for her, as COO, to ask whether George Soros had shorted the company's stock after he called the social-media giant a "menace." The board's letter was sent by Facebook's general counsel Colin Stretch to Patrick Gaspard, president of Mr. Soros's Open Society Foundations, earlier on Wednesday. The letter stated that "To be clear, Ms. Sandberg's question was entirely appropriate given her role as COO.
"When a well-known and outspoken investor attacks your company publicly, it is fair and appropriate to do this level of diligence."

Head over to the UK parliament committee’s official post to read the full 250-page Facebook documents.

Facebook’s outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more
Ex-Facebook manager says Facebook has a “black people problem” and suggests ways to improve
Outage plagues Facebook, Instagram and Whatsapp ahead of Black Friday Sale, throwing users and businesses into panic

Natasha Mathur
06 Dec 2018
4 min read

Google TVCs write an open letter to Google's CEO; demands for equal benefits and treatment

Google contractors (often referred to as Google’s “shadow workforce”) published an open letter on Medium yesterday, addressed to Google CEO Sundar Pichai, demanding that he address their calls for better conditions and equal benefits for contractors, who make up more than half of the company’s total staff. Contractors (vendors, temps, TVCs) are workers employed by outside agencies to do all types of jobs within Google (coders, managers, marketers, janitors, waiters, etc.).

https://twitter.com/GoogleWalkout/status/1070327480601509888

It was just last month that 20,000 Google employees, along with TVCs, temps, vendors, and contractors, walked out to protest Google’s handling of sexual harassment and discrimination in the workplace. As part of the walkout, Google employees made five demands urging Google to bring about structural changes in the workplace. One of those demands was a “commitment to ending pay and opportunity inequity” for all levels of the organization, including contract workers and sub-contract workers. However, Google hasn’t addressed any of the issues surrounding the TVCs so far.

“As TVCs who took equal part in the walkout, your silence has been deafening. Google routinely denies TVCs access to information that is relevant to our jobs and our lives,” reads the letter.

An example mentioned in the letter is the tragic shooting at YouTube headquarters in April this year, when Google sent security-related updates to its employees in real time, “leaving TVCs defenseless in the line of fire”. Moreover, TVCs were not even invited to the post-shooting town hall meeting the following day. Similarly, TVCs were excluded from the town hall meeting conducted six days after the walkout.

“The exclusion of TVCs from important communications and fair treatment is part of a system of institutional racism, sexism, and discrimination.
TVCs are disproportionately people from marginalized groups who are treated as less deserving of compensation, opportunities, workplace protections, and respect”, reads the letter.

The letter also points out that contractors wear different colored badges from full-time employees, are paid low wages despite doing the same work as full-time employees, and are offered minimal benefits compared to full-time employees. “Google has the power — and the money — to ensure that we are treated equitably, with respect and dignity. However, it is clear that we will continue to be mistreated and ignored if we stay silent. We need transparency, accountability, and structural change to ensure equity for all Google workers,” reads the letter.

Contractors have now reiterated the demands of the walkout:

An end to pay and opportunity inequity for TVCs. This includes better pay and the same benefits for contractors as for full-time employees, such as high-quality healthcare, paid vacations, paid sick days, holiday pay, family leave, and bonuses. There should also be a consistent and transparent conversion process to full-time employment, along with the adoption of a single badge color for all workers.
Access to company-wide information on the same terms as full-time employees. This includes access to town hall discussions; communications about safety, discrimination, and sexual misconduct; access to internal forums like Google Groups; and career growth, classes, and counseling opportunities similar to those offered to full-time employees.
Public response to the letter has been largely positive, with people supporting the contractors for speaking out:

https://twitter.com/andytliu/status/1070504767674245121
https://twitter.com/mer__edith/status/1070345492406644737
https://twitter.com/ireneista/status/1070375529650372608
https://twitter.com/techworkersco/status/1070337882714365952
https://twitter.com/spoonboy42/status/1070479331196059648

Google hasn’t responded to the demands yet, and for now we can only wait and see whether and when they will be addressed.

Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Google bypassed its own security and privacy teams for Project Dragonfly reveals Intercept
Google employees join hands with Amnesty International urging Google to drop Project Dragonfly