
Tech News - Data

Researchers show that randomly initialized gradient descent can achieve zero training loss in deep learning

Bhagyashree R
13 Nov 2018
2 min read
Yesterday, researchers from Carnegie Mellon University, the University of Southern California, Peking University, and the Massachusetts Institute of Technology published a paper on a central optimization problem in deep learning. The study proves that randomly initialized gradient descent can achieve zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). The key idea is to show that the Gram matrix is increasingly stable under overparameterization, so that every step of gradient descent decreases the loss at a geometric rate.

What is this study based on?

The study builds on two ideas from previous work on gradient descent for two-layer neural networks:

The researchers analyzed the dynamics of the predictions, whose convergence is determined by the least eigenvalue of the Gram matrix induced by the neural network architecture. To lower bound the least eigenvalue, it is sufficient to bound the distance of each weight matrix from its initialization.

The second base concept is the observation by Li and Liang that if the neural network is overparameterized, every weight matrix stays close to its initialization.

What are the key observations made in this study?

The study focuses on the least squares loss and assumes the activation is Lipschitz and smooth. Suppose there are n data points and the neural network has H layers of width m. The study sets out to prove the following:

Fully-connected feedforward network: If m = Ω(poly(n) · 2^O(H)), then randomly initialized gradient descent converges to zero training loss at a linear rate.

ResNet architecture: If m = Ω(poly(n, H)), then randomly initialized gradient descent converges to zero training loss at a linear rate. Compared with the first result, the dependence on the number of layers improves exponentially for ResNet. This theory demonstrates the advantage of using residual connections.

Convolutional ResNet: The same technique is used to analyze the convolutional ResNet. If m = poly(n, p, H), where p is the number of patches, then randomly initialized gradient descent achieves zero training loss.
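To make the "linear rate" claim concrete: in the two-layer analysis this work builds on, the guarantee takes roughly the following form, where u(k) denotes the network's predictions after k gradient steps, y the training labels, η the step size, and λ₀ > 0 the least eigenvalue of the Gram matrix (a paraphrase of the result, not a quotation from the paper):

    \| y - u(k) \|_2^2 \;\le\; \Big( 1 - \frac{\eta \lambda_0}{2} \Big)^{k} \, \| y - u(0) \|_2^2

Overparameterization keeps the Gram matrix, and hence λ₀, stable throughout training, which is what makes this geometric decrease go through.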
To learn more, you can read the full paper: Gradient Descent Finds Global Minima of Deep Neural Networks.

Related reading:
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Top 5 Deep Learning Architectures

UK researchers have developed a new PyTorch framework for preserving privacy in deep learning

Prasad Ramesh
13 Nov 2018
3 min read
UK professors and researchers have developed the first general framework for safeguarding privacy in deep learning, built on top of PyTorch. They report their findings in the paper "A generic framework for privacy preserving deep learning."

Using constructs that preserve privacy

The paper introduces a transparent framework for preserving privacy while using deep learning in PyTorch. The framework puts a premium on data ownership and secure data processing, and introduces a value representation based on chains of commands and tensors. The resulting abstraction allows the implementation of complex privacy-preserving constructs such as federated learning, secure multiparty computation, and differential privacy. The Boston Housing and Pima Indian Diabetes datasets are used in the paper to show early results. Except for differential privacy, the privacy features do not affect prediction accuracy. The current implementation introduces a significant overhead, which is to be addressed at a later development stage.

Deep learning operations in untrusted environments

Secure Multiparty Computation (SMPC) is a popular approach for performing operations in untrusted environments without disclosing data. In machine learning, SMPC can protect the model weights while allowing multiple worker nodes to participate in training with their own datasets; this is known as federated learning (FL). These securely trained models are still vulnerable to reverse engineering attacks, a vulnerability addressed by differentially private (DP) methods. The standardized PyTorch framework contains:

A chain structure, in which transforming a tensor or sending it to another worker can be represented as a chain of operations (a toy sketch of this idea follows at the end of this piece).

Virtual Workers, a concept introduced to bridge from a virtual to a real federated learning context. Virtual Workers reside on the same machine and do not communicate over the network.

Results and conclusion

A reasonably small overhead is observed when Web Socket workers are used in place of Virtual Workers, owing to the low network latency when communication takes place between different local tabs. The same performance overhead is observed when using the Pima Indian Diabetes dataset. The design in the paper relies on chains of tensors exchanged between local and remote workers. Reducing training time is one issue still to be addressed; another is securing MPC against malicious attempts to corrupt the data or the model.

For more details, read the research paper.
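To make the chain-and-pointer abstraction concrete, here is a minimal toy sketch (our illustration, not the authors' code): a Virtual Worker holds data in its own memory, and the caller only ever manipulates pointers, so operations accumulate as a chain executed where the data lives. All class and function names here are illustrative.

    import itertools

    _ids = itertools.count()

    class VirtualWorker:
        """A simulated remote machine holding tensors in its own memory."""
        def __init__(self, wid):
            self.id = wid
            self.store = {}

    class PointerTensor:
        """A local stand-in that records where the real data lives."""
        def __init__(self, worker, key):
            self.worker = worker
            self.key = key

        def __add__(self, other):
            # The operation runs on the worker that owns the data; only a
            # new pointer travels back, never the raw values.
            assert self.worker is other.worker, "toy sketch: same worker only"
            a = self.worker.store[self.key]
            b = self.worker.store[other.key]
            key = f"t{next(_ids)}"
            self.worker.store[key] = [x + y for x, y in zip(a, b)]
            return PointerTensor(self.worker, key)

        def get(self):
            # Explicitly retrieve the data, ending the chain.
            return self.worker.store.pop(self.key)

    def send(data, worker):
        key = f"t{next(_ids)}"
        worker.store[key] = list(data)
        return PointerTensor(worker, key)

    alice = VirtualWorker("alice")
    x = send([1, 2, 3], alice)   # x is just a pointer; the data stays with alice
    y = x + x                    # executed on alice's side
    print(y.get())               # [2, 4, 6]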
Related reading:
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?

Mozilla shares how AV1, the new open source royalty-free video codec, works

Bhagyashree R
12 Nov 2018
5 min read
Last month, Nathan Egge, a Senior Research Engineer at Mozilla, explained the technical details behind AV1 in depth at the Mile High Video Workshop in Denver. AV1 is a new open source, royalty-free video codec that promises to help companies and individuals transmit high-quality video over the internet efficiently.

AV1 is developed by the Alliance for Open Media (AOMedia), an association of firms from the semiconductor industry, video-on-demand providers, and web browser developers, founded in 2015. Mozilla joined AOMedia as a founding member. AV1 was created for a broad set of industry use cases such as video on demand/streaming, video conferencing, screen sharing, video game streaming, and broadcast. It is widely supported and adopted, and delivers at least 30% better compression than current-generation video codecs. The alliance hit a key milestone with the release of the AV1 1.0.0 specification in June this year, and the codec has seen increasing interest from various companies; for instance, YouTube launched its AV1 Beta Playlist in September.

[Diagram: the stages of a video codec pipeline. Source: YouTube]

We will cover the tools and algorithms used in some of these stages. Here are some of the technical details from Egge's talk:

AV1 Profiles

Profiles specify the bit depth and subsampling formats supported. AV1 has three profiles, Main, High, and Professional, which differ in bit depth and chroma subsampling:

                        Main               High                       Professional
    Bit depth           8-bit and 10-bit   8-bit and 10-bit           8-bit, 10-bit, and 12-bit
    Chroma subsampling  4:0:0, 4:2:0       4:0:0, 4:2:0, and 4:4:4    4:0:0, 4:2:0, 4:2:2, and 4:4:4

High-level syntax

VP9 has a concept of superframes, which let you consolidate multiple coded frames into a single chunk but become complicated at a certain point. AV1 instead comes with a high-level syntax that includes sequence headers, frame headers, tile groups, and tiles. A sequence header starts a video stream, frame headers sit at the beginning of a frame, a tile group is an independent group of tiles, and tiles can be decoded independently.

[Diagram: AV1 high-level syntax structure. Source: YouTube]

Multi-symbol entropy coder

Unlike VP9, which uses a tree-based boolean non-adaptive binary arithmetic encoder to encode all syntax elements, AV1 uses a symbol-to-symbol adaptive multi-symbol arithmetic coder. Each syntax element is a member of a specific alphabet of N elements, and a context is a set of N probabilities together with a count to facilitate fast early adaptation.

Transform types

In addition to the DCT and ADST transform types, AV1 introduces two extended transform types: flipped ADST and the identity transform. The identity transform lets you efficiently code residual blocks containing edges and lines. In total, AV1 offers sixteen horizontal and vertical transform type combinations.

Intra prediction modes

Along with the 8 main directional modes from VP9, up to 56 more directions are added, though not all of them are available at smaller block sizes. The following are some of the prediction modes introduced in AV1:

Smooth H + V modes allow you to smoothly interpolate between the values in the left column and the last value in the above row.

Palette mode is introduced to the intra coder as a general extra coding tool. It is especially useful for artificial videos like screen capture and games, where blocks can be approximated by a small number of unique colors. The palette predictor for each plane of a block consists of a color palette with 2 to 8 colors, plus color indices for all pixels in the block.

Chroma from Luma (CfL) is a chroma-only intra predictor that models chroma pixels as a linear function of coincident reconstructed luma pixels. First, the reconstructed luma pixels are subsampled to the chroma resolution, and then the DC component is removed to form the AC contribution. To approximate the chroma AC component from this AC contribution, instead of requiring the decoder to infer the scaling parameters, CfL determines the parameters from the original chroma pixels and signals them in the bitstream. This reduces decoder complexity and yields more precise predictions. The DC prediction is computed using the intra DC mode, which is sufficient for most chroma content and has mature, fast implementations. (A toy sketch of this computation appears at the end of this piece.)

[Diagram: the Chroma from Luma prediction pipeline. Source: YouTube]

Constrained Directional Enhancement Filter (CDEF)

CDEF is a detail-preserving deringing filter designed to be applied after deblocking. It works by estimating edge directions and then applying a non-separable, non-linear, low-pass directional filter of size 5×5 with 12 non-zero weights. To avoid extra signaling, the decoder uses a normative fast search algorithm to compute, per 8×8 block, the direction that minimizes the quadratic error from a perfect directional pattern.

Film Grain Synthesis

In AV1, film grain synthesis is a normative post-processing step applied outside the encoding/decoding loop. Film grain is abundant in TV and movie content and needs to be preserved during encoding, but its random nature makes it difficult to compress with traditional coding tools. In film grain synthesis, the grain is removed from the content before compression, its parameters are estimated and sent in the AV1 bitstream, and the grain is then synthesized from the received parameters and added back to the reconstructed video. For grainy content, this significantly reduces the bitrate needed to reconstruct the grain at sufficient quality.

You can watch "Into the Depths: The Technical Details behind AV1" by Nathan Egge on YouTube: https://www.youtube.com/watch?v=On9VOnIBSEs&t=463s
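As a rough illustration of the CfL computation described above, here is a short Python sketch (our illustration, not the codec's actual implementation; it assumes 4:2:0 subsampling with even block dimensions, and the function name is ours):

    import numpy as np

    def cfl_predict(recon_luma, alpha, chroma_dc):
        # Subsample reconstructed luma to chroma resolution (4:2:0: average 2x2).
        h, w = recon_luma.shape
        sub = recon_luma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # Remove the DC component to form the AC contribution.
        ac = sub - sub.mean()
        # Scale by the alpha signaled in the bitstream and add the chroma DC
        # predicted by the regular intra DC mode.
        return alpha * ac + chroma_dc

    block = np.arange(64, dtype=float).reshape(8, 8)
    print(cfl_predict(block, alpha=0.5, chroma_dc=128.0))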
Related reading:
Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg
YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

The DEA and ICE reportedly plan to turn streetlights to covert surveillance cameras, says Quartz report

Bhagyashree R
12 Nov 2018
3 min read
According to federal contracting documents, the US Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE) are placing an undisclosed number of covert surveillance cameras inside streetlights, Quartz reported on Saturday.

The Federal Procurement Data System shows that the DEA has paid roughly $22,000 to a company named Cowboy Streetlight Concealments LLC for "video recording and reproducing equipment" since June this year. The recent acquisitions from Cowboy Streetlight Concealments have been funded by ICE offices in Dallas, Houston, and San Antonio. In addition to streetlights, these surveillance cameras are also placed inside traffic barrels.

Christie Crawford, who owns Cowboy Streetlight Concealments with her husband, told Quartz that she can't reveal the details of the company's federal contracts: "We do streetlight concealments and camera enclosures. Basically, there are businesses out there that will build concealments for the government and that's what we do. They specify what's best for them, and we make it. And that's about all I can probably say."

But she did add: "I can tell you this—things are always being watched. It doesn't matter if you're driving down the street or visiting a friend, if government or law enforcement has a reason to set up surveillance, there's great technology out there to do it."

Last week, the DEA issued a solicitation for "concealments made to house network PTZ camera, cellular modem, cellular compression device". The solicitation shows that the agency intends to award the contract to Obsidian Integration LLC, an Oregon company with a sizable number of federal law enforcement customers.

Chad Marlow, a senior advocacy and policy counsel for the American Civil Liberties Union, told Quartz that local law enforcement has previously proposed putting cameras in streetlights as part of "smart" LED streetlight systems: "It basically has the ability to turn every streetlight into a surveillance device, which is very Orwellian to say the least. In most jurisdictions, the local police or department of public works are authorized to make these decisions unilaterally and in secret. There's no public debate or oversight."

To read the full report, head over to the Quartz website.

Related reading:
Australia's Facial recognition and identity system can have "chilling effect on freedoms of political discussion, the right to protest and the right to dissent": The Guardian report
Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Four IBM facial recognition patents in 2018, we found intriguing

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence

Prasad Ramesh
12 Nov 2018
3 min read
In a paper published last week, a member of the US Air Force lays out a model for artificial general intelligence (AGI). The paper, A Model for General Intelligence, is authored by Paul Yaworsky of the Information Directorate of the US Air Force Research Laboratory. There have been many past efforts to model intelligence in machines, but with little progress toward real cognitive intelligence of the kind humans have.

What is artificial general intelligence?

Currently, the way AI systems work is not completely understood, and AI systems are good at narrow tasks but not at complex cognitive problems. Artificial general intelligence aims to cover the gap between lower-level and higher-level work in AI: to make sense of the abstract, general nature of intelligence. Three basic aspects of artificial intelligence need to be understood to bridge this gap:

Realize the general order and nature of intelligence at a high level.
Understand what these realizations mean for the overall intelligence process.
Describe these realizations as clearly as possible.

The paper proposes a hierarchical model to help capture and exploit the order within intelligence. The underlying order contains patterns of signals that become organized, stored, and then activated in space and time.

The hierarchical model

The paper portrays intelligence as an orderly, organized process arranged in a simple hierarchy.

[Diagram: the hierarchical model. Source: A Model for General Intelligence]

The real world has order and organization. The human brain understands this and forms an internal model based on that understanding. This model enables learning, which in turn gives way to decision making, movement, and communication. The flow of input signals and learning within the model is bottom-up, in contrast to the top-down flow of learned signal representations. The paper argues that external order and organization can be modeled internally in the brain as various hierarchies; those discussed are temporal, spatial, and general.

Impact and concerns

As computers continue to improve and cooperation between humans and computers increases, people themselves will become more productive at information processing. A point the paper stresses is that computers work for humans. Yaworsky also addresses fears about AI taking over the world, attributing them to the sketchy predictions made about intelligence today. Good scientific predictions are difficult in general, and predictions about intelligence are almost impossible to get right, because our understanding of intelligence itself is not yet good enough to support accurate predictions. Do you buy this explanation, or fear the US Air Force working on killer drones that may one day go rampant like in Terminator 2?!

Either way, the conclusion is that intelligence involves multiple levels of abstraction. Human intelligence operates at high processing levels: abstract, general, and so on. The majority of current work in AI is at lower levels of abstraction, and there is a long way to go before current AI becomes real AI. The high-level hierarchical model for artificial intelligence explored in the paper addresses this gap.

For more details, you can read the research paper.

Related reading:
The ethical dilemmas developers working on Artificial Intelligence products must consider
Technical and hidden debts in machine learning – Google engineers give their perspective
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Alibaba’s Singles Day sale hit record $30 billion in 24 hours

Amrata Joshi
12 Nov 2018
3 min read
Yesterday, Alibaba's Singles Day sale racked up $30 billion in the span of just 24 hours. Alibaba recorded about $1 billion in the first 85 seconds, and $10 billion in sales in the first hour past midnight.

Singles Day, also known as Double 11, is the world's biggest online sales event, outpacing the US shopping holidays Black Friday and Cyber Monday. The event takes its idea from Singles Day, an informal holiday celebrated on November 11 in China by people who are not in relationships. In 2009, Alibaba started offering Singles Day discounts and has since turned the day into a 24-hour bonanza of online shopping in China. Alibaba's Southeast Asia subsidiary Lazada also offers Singles Day discounts in Singapore, Malaysia, Indonesia, Thailand, and Vietnam.

This year, the top three brands in the early sale were Apple, Dyson, and Xiaomi, and in the first hour the top countries selling to China were Japan, the US, and South Korea. The numbers offered a glimpse of consumer sentiment amid the tensions between the US and China. Daniel Zhang, Alibaba's CEO, told reporters in Shanghai, "We can feel that merchants are fully embracing the internet and helping with consumption upgrade."

The China-US trade war is likely to hamper the Alibaba growth story

China's economy is feeling the effects of its worsening trade war with the United States. According to BBC News, the US has imposed three rounds of tariffs on Chinese products this year, totaling $250 billion worth of goods. If the conflict escalates and the US follows through on further import taxes, all of China's exports to the US would be subject to duties. China's quarterly growth this year was its weakest, at just 6.5%, already signaling a downturn. CNN reported in September 2018 that Jack Ma, the founder and executive chairman of Alibaba, said the trade war is "going to last long, it's going to be a mess," lasting not 20 months or 20 days but "maybe 20 years."

Despite the record haul, annual sales growth fell from 39 percent to 27 percent in a year, the smallest in the event's 10-year history. It will be interesting to see Alibaba's next move.

To see highlights from Singles Day, check out the official video, and to know more about this news, check CNN's official website.

Related reading:
Alibaba launches an AI chip company named 'Ping-Tou-Ge' to boost China's semiconductor industry
OpenSky is now a part of the Alibaba family
Alibaba introduces AI copywriter

Following Google, Facebook changes its forced arbitration policy for sexual harassment claims

Natasha Mathur
12 Nov 2018
3 min read
Last Thursday, Google changed its policy of forced arbitration for sexual harassment claims. A day later, Facebook announced that it is also changing its forced arbitration policy, which required employees to settle sexual harassment claims in private, as per the Wall Street Journal. This means that employees can now take their sexual harassment complaints to a court of law. Following Google's footsteps, Facebook has made its arbitration policy optional.

Anthony Harrison, Facebook's corporate media relations director, confirmed the change: "Today, we are publishing our updated Workplace Relationships policy and amending our arbitration agreements to make arbitration a choice rather than a requirement in sexual harassment claims. Sexual harassment is something that we take very seriously, and there is no place for it at Facebook," said Harrison.

Facebook also announced an updated "Relationships at work" policy. Under the updated policy, anyone who starts a relationship with someone in their management chain must disclose it to HR. Additionally, anyone at director level or above who gets into a relationship with someone at the company must report it to HR.

Google decided to modify its policy after 20,000 Google employees, along with temps, vendors, and contractors, walked out earlier this month to protest the discrimination, racism, and sexual harassment at Google's workplace. Google made the arbitration process optional for individual sexual harassment and sexual assault claims. "Google has never required confidentiality in the arbitration process and it still may be the best path for a number of reasons (e.g. personal privacy), but, we recognize that the choice should be up to you," mentioned Sundar Pichai, Google CEO, on the announcement page.

Facebook is only the latest tech company to change its forced arbitration policy. Uber made arbitration optional back in May, to bring "transparency, integrity, and accountability" to its handling of sexual harassment, and Microsoft was one of the first major organizations to completely eliminate forced arbitration clauses for sexual harassment, last December.

It seems the Google Walkout not only pushed Google to take a stand against sexual assault but also inspired other companies to take the right steps on sensitive issues.

Related reading:
Facebook's big music foray: New soundtracking feature for stories and its experiments with video music and live streaming karaoke
Facebook is at it again. This time with Candidate Info where politicians can pitch on camera

Free Software Foundation updates their licensing materials, adds Commons Clause and Fraunhofer FDK AAC license

Sugandha Lahoti
12 Nov 2018
3 min read
Last week, the Free Software Foundation (FSF) updated its licensing materials, adding two licenses to its license list: the Commons Clause and the Fraunhofer FDK AAC license. The FSF also updated its article on license compatibility and relicensing, and added a new entry to the frequently asked questions about the GNU licenses.

Commons Clause

The Commons Clause has been added to the FSF's list of non-free licenses. The clause is added on top of an existing free license to prevent using the work commercially, rendering the work non-free. Because the Commons Clause is non-free, the FSF recommends forking software that uses it: if a previously existing project under a free license adds the Commons Clause, users should work to fork the program and continue using it under the free license. If it isn't worth forking, users should simply avoid the package.

The move sparked controversy. Critics argue that the Commons Clause piggybacks on top of existing free software licenses and could mislead users into thinking that software using it is free software when it is, in fact, proprietary by the FSF's definitions. Others, however, find the combination of a free software license plus the Commons Clause compelling. One Hacker News user pointed out, "I'm willing to grant to the user every right offered by free software licenses with the exception of rights to commercial use. If that means my software has to be labeled as proprietary by the FSF, so be it, but at the same time I'd prefer not to mislead users into thinking my software is being offered under a vanilla free software license."

Another said, "I don't know there is any controversy as such. The FSF is doing its job and reminding everyone that freedom includes the freedom to make money. If your software is licensed under something that includes the Commons Clause then it isn't free software, because users are not free to do what they want with it."

The Fraunhofer FDK AAC license

The FSF has also added the Fraunhofer FDK AAC license to its list. This is a free license, incompatible with any version of the GNU General Public License (GNU GPL), but it comes with a word of caution: while Fraunhofer provides a copyright license, it explicitly declines to grant any patent license and in fact directs users to contact them to obtain one. Users should act with caution in determining whether they feel comfortable using works under this license.

Other changes

The FSF has added a new section to its article on License Compatibility and Relicensing addressing combinations of code. Announced in September, the section helps simplify the picture when dealing with a project that combines code under multiple compatible licenses. The FSF has also added a new FAQ entry explaining what the GNU GPL says about translating code into another programming language.

Read more about the news on the FSF blog.

Related reading:
Is the 'commons clause' a threat to open source?
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
Lerna relicenses to ban major tech giants like Amazon, Microsoft, Palantir from using its software as a protest against ICE

OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners

Prasad Ramesh
09 Nov 2018
3 min read
OpenAI released Spinning Up yesterday, an educational resource for anyone who wants to become a skilled deep learning practitioner. Spinning Up includes many examples in reinforcement learning, documentation, and tutorials.

The inspiration to build Spinning Up comes from the OpenAI Scholars and Fellows initiatives, where OpenAI observed that it is possible for people with little to no machine learning experience to rapidly become practitioners given the right guidance and resources. Spinning Up in Deep RL is also integrated into the curriculum for OpenAI's 2019 cohorts of Scholars and Fellows.

A quick overview of Spinning Up course content

A short introduction to reinforcement learning: what it is, the terminology used, the different types of algorithms, and the basic theory needed to develop an understanding.

An essay laying out what it takes to grow into a reinforcement learning research role, covering background, practical learning, and developing a project.

A list of important research papers organized by topic.

A well-documented code repository of short, standalone implementations of various algorithms, including Vanilla Policy Gradient (VPG), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC).

And finally, a few exercises to solve so you can start applying what you have learned.

Support plan for Spinning Up

Fast-paced support period: for the first three weeks after release, OpenAI will work quickly on bug fixes, installation issues, and resolving errors in the docs, streamlining the user experience so that it is as easy as possible to self-study with Spinning Up.

A major review in April 2019: around April next year, OpenAI will perform a serious review of the state of the package based on feedback received from the community, and then announce any plans for future modification.

Public release of internal development: as changes are made to Spinning Up in Deep RL for the OpenAI Scholars and Fellows programs, they will also be pushed to the public repository so that they are immediately available to everyone.

In Spinning Up, running deep reinforcement learning algorithms is as easy as:

    python -m spinup.run ppo --env CartPole-v1 --exp_name hello_world

For more details on Spinning Up, visit the OpenAI Blog.
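If you would like a feel for what is under the hood before installing anything, here is a minimal vanilla policy gradient (REINFORCE) sketch on the same CartPole task. This is our own illustration in the spirit of Spinning Up's VPG, not code from the spinup repository; it assumes PyTorch and the classic gym API of the time (env.step returning four values), and the hyperparameters are arbitrary.

    import gym
    import numpy as np
    import torch
    import torch.nn as nn

    env = gym.make("CartPole-v1")
    obs_dim = env.observation_space.shape[0]
    n_acts = env.action_space.n

    # A tiny categorical policy network.
    policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_acts))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    for epoch in range(20):
        obs_buf, act_buf, ret_buf = [], [], []
        obs, ep_rews = env.reset(), []
        while len(ret_buf) < 2000:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            act = torch.distributions.Categorical(logits=logits).sample().item()
            obs_buf.append(obs)
            act_buf.append(act)
            obs, rew, done, _ = env.step(act)
            ep_rews.append(rew)
            if done:
                # Weight every step of the episode by its total return.
                ret_buf += [sum(ep_rews)] * len(ep_rews)
                obs, ep_rews = env.reset(), []
        # Drop the trailing incomplete episode so the buffers stay aligned.
        obs_buf, act_buf = obs_buf[:len(ret_buf)], act_buf[:len(ret_buf)]
        logits = policy(torch.as_tensor(np.array(obs_buf), dtype=torch.float32))
        logp = torch.distributions.Categorical(logits=logits).log_prob(
            torch.as_tensor(act_buf))
        # Policy gradient loss: maximize return-weighted log-probabilities.
        loss = -(logp * torch.as_tensor(ret_buf, dtype=torch.float32)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()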
Related reading:
This AI generated animation can dress like humans using deep reinforcement learning
Curious Minded Machine: Honda teams up with MIT and other universities to create an AI that wants to learn
MIT plans to invest $1 billion in a new College of computing that will serve as an interdisciplinary hub for computer science, AI, data science

Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report

Bhagyashree R
09 Nov 2018
5 min read
On Wednesday, The Guardian reported that civil rights groups and experts are warning that near real-time matching of citizens' facial images risks a profound chilling effect on protest and dissent. The facial recognition system in question can rapidly match pictures of people captured on CCTV with photos stored in government records in order to detect criminals and identity theft.

What is this facial recognition and identity system?

In October last year, the Australian government agreed to establish a National Facial Biometric Matching Capability and signed an Intergovernmental Agreement on Identity Matching Services. The system is intended to make it easier for security and law enforcement agencies to identify suspects or victims of terrorism or other criminal activity, and to combat identity crime. Under the agreement, agencies in all jurisdictions may use the new face matching service to access passport, visa, citizenship, and driver license images. The system consists of two parts:

Face Verification Service (FVS): a one-to-one, image-based verification service that matches a person's photo against an image in a government record to help verify their identity.

Face Identification Service (FIS): unlike the FVS, a one-to-many, image-based identification service that matches a photo of an unknown person against multiple government records to help identify the person.

What are some concerns the system poses?

Since its introduction, the system has raised major concerns among academics, privacy experts, and civil rights groups, because it records and processes citizens' sensitive biometric information regardless of whether they have committed or are suspected of an offense. In a submission to the Parliamentary Joint Committee on Intelligence and Security, Professor Liz Campbell of Monash University points out that "the capability" breaches privacy rights: it allows the collection, storage, and sharing of personal details of people who are not even suspected of an offense.

According to Campbell, the system is also prone to errors: "Research into identity matching technology indicates that ethnic minorities and women are misidentified at higher rates than the rest of the population."

On investigating the FBI's facial recognition system, the US House Committee on Oversight and Government Reform likewise found inaccuracies: "Facial recognition technology has accuracy deficiencies, misidentifying female and African American individuals at a higher rate. Human verification is often insufficient as a backup and can allow for racial bias."

These inaccuracies often stem from the underlying algorithms, which are better at identifying people who look like their creators; in the British and Australian context, that means they are good at identifying white men. Beyond accuracy, there are also concerns about the level of access given to private corporations and about the legislation's loose wording, which could allow it to be used for purposes other than combating criminal activity.

Lesley Lynch, deputy president of the NSW Council for Civil Liberties, believes these systems will have an ill effect on free political discussion: "It's hard to believe that it won't lead to pressure, in the not too distant future, for this capability to be used in many contexts, and for many reasons. This brings with it a real threat to anonymity. But the more concerning dimension is the attendant chilling effect on freedoms of political discussion, the right to protest and the right to dissent. We think these potential implications should be of concern to us all."

What are the supporters saying?

Despite these concerns, New South Wales is in favor of the capability: it is legislating to allow state driver's licenses to be shared with the commonwealth and investing $52.6m over four years to facilitate the rollout. Samantha Gavel, NSW's privacy commissioner, said the system has been designed with "robust" privacy safeguards, developed in consultation with state and federal privacy commissioners, and she expressed confidence in the protections limiting access by private corporations: "I understand that entities will only have access to the system through participation agreements and that there are some significant restraints on private sector access to the system."

David Elliott, NSW Minister for Counter-Terrorism, said the system will help prevent identity theft and that there will be limits on its use. Elliott told the state parliament: "People will not be charged for jaywalking just because their facial biometric information has been matched by law enforcement agencies. The Government will make sure that members of the public who have a driver license are well and truly advised that this information and capability will be introduced as part of this legislation. I am an avid libertarian when it comes to freedom from government interference and [concerns] have been forecasted and addressed in this legislation."

To read the full story, head over to The Guardian's official website.

Related reading:
Google's new facial recognition patent uses your social network to identify you!
Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]

Google introduces AI Hub, Kubeflow pipeline and Cloud TPU to make Artificial Intelligence more accessible to businesses

Melisha Dsouza
09 Nov 2018
4 min read
Google is taking yet another step to make its artificial intelligence technology accessible across a range of industries. Yesterday, in a blog post, Rajen Sheth, Google's Director of Product Management for Cloud AI, introduced a host of tools to "put AI in reach of all businesses". He noted that even though the company has more than 15,000 paying customers using its AI services, that is not enough; the upgrades aim to make AI simpler, more useful, and faster, to drive adoption among businesses. Here are the tools released by Google:

#1 The AI Hub to make AI simpler!

Released in alpha, the AI Hub is a "one-stop destination for plug-and-play ML content", including pipelines, Jupyter notebooks, TensorFlow modules, and more. The AI Hub is meant to counter the scarcity of ML knowledge in the workforce, which makes it hard for organizations to build comprehensive ML resources in-house. It aims to make high-quality ML resources developed by Google Cloud AI, Google Research, and other teams across Google publicly available to all businesses. The Hub will also provide a private, secure space where enterprises can upload and share ML resources within their own organizations. This helps businesses reuse pipelines and deploy them to production in GCP, or on hybrid infrastructures using the Kubeflow Pipelines system, in just a few steps. For the beta release, Google plans to expand the types of assets made available through the AI Hub, including public contributions from third-party organizations and partners.

#2 Kubeflow Pipelines, API updates for video to make AI useful

Kubeflow Pipelines enable organizations to build and package ML resources so that they are as useful as possible to the broadest range of internal users. This new component of Kubeflow packages ML code much like building an app, so that it is reusable by other users across an organization. It enables teams to:

Compose, deploy, and manage reusable end-to-end machine learning workflows.

Run rapid and reliable experiments, so users can try many ML techniques to identify what works best for their application.

Kubeflow Pipelines also help users take advantage of Google's TensorFlow Extended (TFX) open source libraries to address production ML issues such as model analysis, data validation, training-serving skew, data drift, and more. (A sketch of what composing a pipeline looks like follows at the end of this piece.)

Google has also released three features in the Cloud Video API (in beta) that address common challenges for businesses working extensively with video:

Text detection can determine where and when text appears in a video, making videos more readily searchable. It supports more than 50 languages.

Object Tracking can identify more than 500 classes of objects in a video.

Speech Transcription for Video can transcribe audio, making it possible to easily create captions and subtitles, as well as increasing the searchability of video contents.

#3 Cloud TPU updates to make AI faster

Google's Tensor Processing Units (TPUs) are custom ASIC chips designed for machine learning workloads; they dramatically accelerate ML tasks and are easily accessed through the cloud. Since July, Google has been adding features to Cloud TPU to make compute-intensive machine learning faster and more accessible to businesses worldwide.

Responding to these upgrades, Kaustubh Das, vice president of data center product management at Cisco, stated, "Cisco is also delighted to see the emergence of Kubeflow Pipeline that promises a radical simplification of ML workflows which are critical for mainstream adoption. We look forward to bringing the benefits of this technology alongside our world class AI/ML product portfolio to our customers." NVIDIA and Intel added to this line of thought as well.

Head over to Google's official blog for full coverage of this announcement.
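To give a sense of the Kubeflow Pipelines programming model mentioned above, here is a hedged sketch using the kfp SDK's pipeline decorator and container steps, roughly as the SDK looked around launch; the project, image names, and paths are hypothetical, and the exact API may differ by SDK version.

    import kfp.dsl as dsl

    @dsl.pipeline(
        name="train-and-validate",
        description="Illustrative two-step ML workflow",
    )
    def train_pipeline(data_path: str = "gs://my-bucket/data"):  # hypothetical bucket
        preprocess = dsl.ContainerOp(
            name="preprocess",
            image="gcr.io/my-project/preprocess:latest",  # hypothetical image
            arguments=["--input", data_path],
            file_outputs={"features": "/out/features.txt"},
        )
        # The dependency on `preprocess` is inferred from the output reference,
        # so the steps run in order without explicit wiring.
        dsl.ContainerOp(
            name="train",
            image="gcr.io/my-project/train:latest",  # hypothetical image
            arguments=["--features", preprocess.outputs["features"]],
        )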
Related reading:
#GoogleWalkout demanded a 'truly equitable culture for everyone'; Pichai shares a "comprehensive" plan for employees to safely report sexual harassment
Google open sources BERT, an NLP pre-training technique
Google AdaNet, a TensorFlow-based AutoML framework

Apache Spark 2.4.0 released

Amrata Joshi
09 Nov 2018
2 min read
Last week, Apache Spark released its latest version, Apache Spark 2.4.0, the fifth release in the 2.x line. The release brings Barrier Execution Mode for better integration with deep learning frameworks, along with 30+ built-in and higher-order functions for dealing with complex data types. It works with Scala 2.12 and improves the Kubernetes (K8s) integration. This release also focuses on usability, stability, and polish, resolving around 1100 tickets.

What's new in Apache Spark 2.4.0?

Built-in Avro data source
Image data source
Flexible streaming sinks
Elimination of the 2GB block size limitation during transfer
Pandas UDF improvements

Major changes

Apache Spark 2.4.0 supports Barrier Execution Mode in the scheduler, for better integration with deep learning frameworks. You can now build Spark with Scala 2.12 and write Spark applications in Scala 2.12.

Apache Spark 2.4.0 supports the Spark-Avro package with logical type support, for better performance and usability.

Some users are SQL experts but not as familiar with Scala, Python, or R; for them, this version of Spark adds support for Pivot in SQL.

Apache Spark 2.4.0 adds a Structured Streaming ForeachWriter for Python. This lets users write ForeachWriter code in Python, using the partitionId and the version/batchId/epochId to conditionally process rows.

This release also introduces a Spark data source for the image format, so users can load images through the Spark source reader interface.

Bug fixes

The LookupFunctions rule used to check the same function name again and again; this version includes an updated LookupFunctions rule that performs the check once per invocation.

A PageRank change in Apache Spark 2.3 introduced a bug in the ParallelPersonalizedPageRank implementation: the change prevented serialization of a Map that needs to be broadcast to all workers. This issue is resolved in Apache Spark 2.4.0.

Read more about Apache Spark 2.4.0 on the official Apache Spark website.
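As a quick taste of two of the additions, here is a short PySpark sketch exercising a built-in higher-order function on an array column and the now built-in Avro data source. The output path is illustrative, and writing Avro assumes the spark-avro module is on the classpath (for example via --packages org.apache.spark:spark-avro_2.11:2.4.0).

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-2.4-features").getOrCreate()

    df = spark.createDataFrame([([1, 2, 3],)], ["values"])

    # transform() is one of the new built-in higher-order functions
    # for complex types in 2.4.
    df.selectExpr("transform(values, x -> x + 1) AS incremented").show()

    # The Avro data source is now built in.
    df.write.format("avro").save("/tmp/example_avro")  # illustrative path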
Related reading:
Building Recommendation System with Scala and Apache Spark [Tutorial]
Apache Spark 2.3 now has native Kubernetes support!
Implementing Apache Spark K-Means Clustering method on digital breath test data for road safety

ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features

Prasad Ramesh
09 Nov 2018
2 min read
ScyllaDB announced Scylla 3.0, its NoSQL database, at the Scylla Summit 2018 this week. Scylla is written in C++ and now claims 10x the throughput of Apache Cassandra.

New features in Scylla 3.0

This release is a milestone for Scylla, as it surpasses Apache Cassandra in features.

Concurrent OLTP and OLAP support

Scylla 3.0 enables users to safely balance real-time operational workloads with big data analytical workloads within a single database cluster. Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) take very different approaches to data access: OLTP encompasses many small, varied transactions, including mixed writes, updates, and reads with high sensitivity to latency, while OLAP emphasizes the throughput of broad scans spanning datasets. With the addition of capabilities that isolate workloads, Scylla uniquely supports simultaneous OLTP and OLAP workloads while maintaining low latency and high throughput.

Materialized views are production-ready

Materialized views were an experimental feature in Scylla for a long time and are now included in the production-ready version. Materialized views are designed to enable automatic server-side table denormalization. Notably, the Apache Cassandra community reverted materialized views from production-ready to experimental status in 2017.

Secondary indexes

Secondary indexes are also production-ready as of Scylla 3.0. These global secondary indexes can scale to clusters of any size, unlike the local-indexing approach adopted by Apache Cassandra. Secondary indexes allow users to query data via non-primary-key columns. (A short sketch of both features follows at the end of this piece.)

Cassandra 3.x file format compatibility

Scylla 3.0 includes support for the Apache Cassandra 3.x compatible file format (SSTable), improving performance and reducing storage volume by three times.

With a shared-nothing approach, Scylla has increased throughput and storage capacity 10x over Apache Cassandra. Scylla Open Source 3.0 has a close-to-the-hardware design that uses modern servers optimally; it is written from scratch in C++ for significant improvements in throughput and latency, and consistently achieves a 99% tail latency of less than 1 millisecond.

To know more about Scylla, visit the ScyllaDB website.
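For a concrete feel of the two production-ready features, here is a hedged sketch issuing standard CQL from Python. The keyspace, table, and column names are hypothetical; it assumes a Scylla-compatible driver such as the DataStax Python driver and a cluster on localhost.

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("demo")  # hypothetical keyspace

    # Materialized view: automatic server-side denormalization of `users`,
    # re-keyed so the data can be queried by email.
    session.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_email AS
        SELECT * FROM users
        WHERE email IS NOT NULL AND user_id IS NOT NULL
        PRIMARY KEY (email, user_id)
    """)

    # Global secondary index: query by a non-primary-key column.
    session.execute("CREATE INDEX IF NOT EXISTS ON users (country)")
    rows = session.execute("SELECT * FROM users WHERE country = 'UK'")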
Related reading:
Why MongoDB is the most popular NoSQL database today
TimescaleDB 1.0 officially released
PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation

‘Black Swan’ fame, Darren Aronofsky, on how technologies like virtual reality and artificial intelligence are changing storytelling

Bhagyashree R
08 Nov 2018
5 min read
On Monday at Web Summit 2018, Darren Aronofsky, in an interview with WIRED correspondent Lauren Goode, spoke about how virtual reality and artificial intelligence are giving filmmakers and writers the freedom to be more imaginative and to shape their vision into reality. He is the director of many successful movies, including Requiem for a Dream, The Wrestler, and Black Swan, and one of his recent projects, Spheres, is based on VR: a three-part virtual reality black hole series written by Eliza McNitt and produced by Aronofsky's Protozoa Pictures. Aronofsky believes that combining storytelling and VR gives viewers a true emotional experience by taking them to a convincing, different world. Here are some highlights from his interview:

How is VR-based storytelling different from filmmaking?

People have long talked about VR replacing films, but that is not going to happen anytime soon. "It may replace how people decide to spend their time but they are two different art forms and most people who work in virtual reality and filmmaking are aware that trying to blend them will not work," said Aronofsky.

Aronofsky feels the experiences created by VR and film are very different. When you watch a movie, you not only watch the character but, through empathy, feel what the character is feeling. Aronofsky calls this a great part of filmmaking: "It is a great part of filmmaking that you can sit there and you can through close-up enter the subjective experience of the character who takes you on a journey where you are basically experiencing what the character is going through." In virtual reality, by contrast, character plays a much smaller role; the medium is very experiential, and instead of being transported into another person's shoes, you are much more yourself.

How is technology changing filmmaking for the better?

One of the biggest breakthroughs these technologies enable, according to Aronofsky, is allowing filmmakers to shape their ideas into exactly what they want. He points out that unlike the 70s and 80s, when only a few "Gandalfs" like Spielberg and George Lucas were using computers to create experiences, now anybody can use computers to create amazing visual effects, animations, and much more. "Use of computers have unlocked the possibilities of what we can do and what type of stories we can tell," he added. Technologies such as AI and VR have enabled filmmakers and writers to write and create extremely complicated sequences that would otherwise have taken many human hours. As he put it, "Machines has given many more ways of looking at the material."

Is there a dark side to using these technologies?

Though technology provides different ways of telling stories, its influence can become too much. Aronofsky remarked that some filmmakers have lost control over the use of technology in their films, resulting in "visual effects extravaganza": the huge teams working on such projects focus more on visual effects than on the storytelling part of filmmaking. At the same time, some filmmakers know exactly where to draw the line between the virtual and the real, giving their audiences beautiful movies to enjoy. "But there are filmmakers like James Cameron who are in control of everything and creating a vision where every single shot is chosen in if it is in virtual setting or in a real setting," says the moviemaker.

On the question of whether AI could replace humans in filmmaking or storytelling, he feels that current technology is not mature enough to actually understand what a character is feeling. He says, "It's a terrifying thought… When jokes and humor and stories start to be able to reproduced where you can't tell the difference between them and the human counterparts is a strange moment… Storytelling is a tricky thing and I am going to be a bit of a Luddite now and put my faith in the constant invention of individuals to do something that a computer won't."

Does data influence a filmmaker's decisions?

Nowadays every decision is data-driven: online streaming services track each click and swipe to understand user preferences. But Aronofsky believes you cannot predict the future even with access to so much data. The popularity of the actors or the locations may help, but there is currently no fixed formula to predict how much success a film will see.

Technologies like AI and VR are helping filmmakers create visual effects and edit digitally, and, all in all, have let them put no limits on their imagination.

Watch Darren Aronofsky's full talk at Web Summit 2018: https://youtu.be/lkzNZKCxMKc

Related reading:
Tim Berners-Lee is on a mission to save the web he invented
Web Summit 2018: day 2 highlights
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation

Savia Lobo
08 Nov 2018
3 min read
Yesterday, GitHub announced that it now supports the GPL Cooperation Commitment, along with 40 other software companies, because the commitment aligns with GitHub's core values. According to the GitHub post, by supporting this change, GitHub "hopes that this commitment will improve fairness and certainty for users of key projects that the developer ecosystem relies on, including Git and the Linux kernel. More broadly, the GPL Cooperation Commitment provides an example of evolving software regulation to better align with social goals, which is urgently needed as developers and policymakers grapple with the opportunities and risks of the software revolution."

An effective regulation has an enforcement mechanism that encourages compliance. The most severe penalties for non-compliance, such as shutting down a line of business, are reserved for repeat and intentional violators, while less serious or accidental non-compliance may result only in warnings, after which the violation should be promptly corrected.

GPL as a private software regulation

The GNU General Public License (GPL) is a tool for a private regulator (the copyright holder) to achieve a social goal: under the license, anyone who receives a covered program has the freedom to run, modify, and share that program. However, from the perspective of an effective regulator, GPL version 2 has a bug: non-compliance results in termination of the license, with no provision for reinstatement. This makes the license marginally more useful to copyright "trolls" who want to force companies to pay rather than come into compliance.

The bug is fixed in GPL version 3, which introduces a "cure provision" under which a violator can usually have their license reinstated if the violation is promptly corrected. Git and other developer communities, including the Linux kernel, have used GPLv2 since 1991, and many of them are unlikely ever to switch to GPLv3, as this would require agreement from all copyright holders, and not everyone agrees with all of GPLv3's changes. GPLv3's cure provision, however, is uncontroversial and can be backported to the extent that GPLv2 copyright holders agree.

How the GPL Cooperation Commitment helps

The GPL Cooperation Commitment is a way for a copyright holder to agree to extend GPLv3's cure provision to all GPLv2 licenses they offer (and also LGPLv2 and LGPLv2.1 licenses, which have the same bug). This gives violators a fair chance to come into compliance and have their licenses reinstated. The commitment also incorporates one of several principles (the others do not relate directly to license terms) for enforcing compliance with the GPL and other copyleft licenses as effective private regulation.

To know more about GitHub's support for the GPL Cooperation Commitment, visit its official blog post.

Related reading:
GitHub now allows issue transfer between repositories; a public beta version
GitHub updates developers and policymakers on EU copyright Directive at Brussels
The LLVM project is ditching SVN for GitHub. The migration to Github has begun