
Tech News - Data


Square updated its terms of service; community raises concerns about restrictions on using AGPL-licensed software

Amrata Joshi
07 Jun 2019
4 min read
Last month, Square, a financial services and mobile payments company, updated its terms of service, effective this July. Developers are raising concerns about one of the terms, which restricts the use of AGPL-licensed software in online stores.

What is the GNU Affero General Public License (AGPL)?

The GNU Affero General Public License (AGPL) is a free, copyleft license for software and other kinds of works. The AGPL guarantees the freedom to share and change all versions of a program. It protects developers' rights by asserting copyright on the software, and by giving legal permission to copy, distribute and/or modify it.

What does the developer community think about the AGPL clause?

The Content Restrictions section (B-15, under Online Store) reads, "You will not use, under any circumstance, any open source software subject to the GNU Affero General Public License v.3, or greater."

Some developers think that Square has misunderstood the AGPL and that the rule makes no sense. A user commented on Hacker News, "This makes absolutely no sense. I'm almost certain that Square lawyers fucked up big time. They looked at the AGPL and completely misunderstood the context. There is no way in hell anyone can interpret AGPL in a way that makes Square responsible for any license violations their customers make selling software."

Others read the rule to mean that AGPL-licensed code can't be used in a website hosted by Square. If AGPL code were served by Square, that code might be sent to browsers along with Square's own proprietary code, which could put Square in violation of the AGPL. Many companies follow a similar rule, including Google, which clearly states, "WARNING: Code licensed under the GNU Affero General Public License (AGPL) MAY NOT be used at Google." This can also work in developers' favor, as it keeps their code out of the big tech companies' products.

Chris DiBona, director of open source at Google, said in a statement to The Register that Google continues to ban the lightning-rod AGPL license within the company because doing so "saves engineering time" and because most AGPL projects are of no use to the company. According to him, the AGPL is designed to close the "application service provider loophole" in the GPL, which lets ASPs use GPL code without distributing their changes back to the open source community. Under the AGPL, a company has to open source its code if it uses AGPL code in a web service, and why would a company like Google do that, when the core components and back-end infrastructure that run its online services are not open source? It also seems this is a matter that needs lawyers' attention, and it is a concern for them as well.

https://twitter.com/MarkKriegsman/status/1136589805024923649

Websites using AGPL code might also have to provide the entire source code of their back-end systems. Some therefore think the AGPL is not an efficient license and would like to see a better one that fully embodies the idea of freedom; in their view, such licenses should come from copyleft advocates, not from profit-oriented companies. Others argue that it is an efficient license and useful to developers, giving them enough freedom to share while protecting their software from companies.
https://twitter.com/MarkKriegsman/status/1136589799341600769
https://twitter.com/mikeym0p/status/1136392884306010112
https://twitter.com/kjjaeger/status/1136633898526490624
https://twitter.com/fuzzychef/status/1136386203756818433

To know more about this news, check out the post by Square.
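For developers who now need to audit a project before deploying it to a Square-hosted store, a quick first pass is to scan dependency license metadata. Below is a minimal sketch in Python using only the standard library; this is illustrative tooling of our own, not anything Square provides, and package license metadata is self-reported, so treat the output as a hint rather than a legal verdict.

```python
# Minimal sketch: flag installed Python packages that self-report an
# AGPL/Affero license. License metadata is declared by package authors,
# so this is a first-pass hint, not a substitute for a real license audit.
from importlib.metadata import distributions

def find_agpl_packages():
    flagged = []
    for dist in distributions():
        meta = dist.metadata
        # Check both the License field and any Trove license classifiers.
        license_strings = [meta.get("License") or ""]
        license_strings += [v for k, v in meta.items() if k == "Classifier"]
        if any("AGPL" in s or "Affero" in s for s in license_strings):
            flagged.append((meta.get("Name"), meta.get("Version")))
    return flagged

if __name__ == "__main__":
    for name, version in find_agpl_packages():
        print(f"AGPL-licensed dependency: {name} {version}")
```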


Google is looking to acquire Looker, a data analytics startup, for $2.6 billion, even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement with data analytics startup Looker and is planning to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will in turn report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform uses business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase.

Looker spans the gap between two areas: data warehousing and business intelligence. Its platform includes a modeling layer where the user codifies a view of the data using a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool that provides the self-service analytics portion.

Primarily, Looker will help Google Cloud become a complete analytics solution, taking customers from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker + Google Cloud will be used for:

- Connecting, analyzing and visualizing data across Google Cloud, Azure, AWS, on-premise databases or ISV SaaS applications
- Operationalizing BI for everyone with powerful data modeling
- Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
- Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time; we share a very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and the Looker acquisition will hopefully make its service more attractive to corporations. Looker's CEO Frank Bien described the partnership as a chance to gain the scale of the Google Cloud platform. "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is Google's timing and the all-cash payment for this buyout. The FTC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google, reportedly examining whether the tech giant broke antitrust law in the operation of its online and advertising businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the deal has been mixed. While some are happy:

https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241

Others remain dubious: "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first.

https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to give customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said that it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could be one such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.
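LookML itself is proprietary, but the idea it implements, a declarative model that the BI layer compiles into SQL so analysts never hand-write queries, can be sketched in a few lines. The toy model and compile_query function below are our own illustration and bear no relation to actual LookML syntax or Looker's API.

```python
# Toy illustration of a BI modeling layer: a declarative model of dimensions
# and measures that gets compiled into SQL. Not LookML, just the concept.
model = {
    "view": "orders",
    "dimensions": {"status": "orders.status"},
    "measures": {"total_revenue": "SUM(orders.amount)"},
}

def compile_query(model: dict, dimensions: list, measures: list) -> str:
    # The analyst picks fields by name; the modeling layer writes the SQL.
    select = [f"{model['dimensions'][d]} AS {d}" for d in dimensions]
    select += [f"{model['measures'][m]} AS {m}" for m in measures]
    group_by = [model["dimensions"][d] for d in dimensions]
    return (f"SELECT {', '.join(select)}\n"
            f"FROM {model['view']}\n"
            f"GROUP BY {', '.join(group_by)}")

print(compile_query(model, dimensions=["status"], measures=["total_revenue"]))
```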


Apple showcases privacy innovations at WWDC 2019: Sign in with Apple, AdGuard Pro, new App Store guidelines and more

Amrata Joshi
04 Jun 2019
8 min read
Apple is getting serious about user privacy. Last month, Apple proposed a "privacy-focused" ad click attribution model to count conversions without tracking users, and just yesterday the company announced a host of security and privacy-related features at its ongoing Worldwide Developers Conference (WWDC) 2019. Users seem excited about the company's moves on privacy and security, while some are still a little confused and looking forward to exploring the major announcements. Experts suggest that these steps might turn out to be really powerful and might make other tech companies think about their next moves in the same direction.

https://twitter.com/ow/status/1135603153712422913
https://twitter.com/jmj/status/1135615177766739973

Sign In with Apple

With iOS 13, Apple is introducing a new way to quickly sign in to apps and websites: Sign In with Apple. Users can simply use their Apple ID for authentication instead of using a social account or verifying email addresses. Apple protects users' privacy by providing developers with a unique random ID, and users can keep their email address private by sharing a unique random email address instead. Sign In comes with built-in two-factor authentication for an added layer of security, and the company does not use Sign In with Apple to profile users or their activity in apps. Users can create a new account on an app with just one click, without revealing any new personal information. Twitter users are quite happy with the feature.

https://twitter.com/sandofsky/status/1135673287659347968
https://twitter.com/tomwarren/status/1135602700710793217
https://twitter.com/izzydoesizzy/status/1135829977050615808

Apple can now stop third-party sites and services from getting users' information when they sign up for an app. Apple's software engineering chief Craig Federighi said at the company's annual developer conference, "Next I want to turn to login. To get a more personalized effect with an app, we all have seen buttons like this, asking us to use a social account login. Now this can be convenient, but it also can come at the cost of your privacy — your personal information sometimes gets shared behind the scenes and these logins can be used to track you. We wanted to solve this and many developers do too. Now we have a solution, it's called Sign In with Apple."

One-time location sharing

Apple will soon let users share their iPhone's location just once, via a new one-time location option. "For the first time, you can share your location to an app just once and then require it to ask you again next time it wants it," said Federighi at the conference on Monday. He also highlighted that a lot of apps try to bypass location sharing restrictions by simply scanning WiFi and Bluetooth signals in the area, which can reveal the user's location. He added, "We're shutting the door on that abuse as well."

https://twitter.com/ittechbuz/status/1135887736227934211

Apple updates its App Store guidelines

Apple has also updated its App Store guidelines to enforce privacy and security for new and existing apps. Here are a few highlights from the updated guidelines.

Keeping kids' data private

Apple has taken a step towards keeping kids' data private. Apps in the kids category, and apps for kids, can't include any third-party advertising or analytics software and cannot transmit data to third parties. This guideline is enforced for new apps now; existing apps must follow it by September 3, 2019.

https://twitter.com/icastanheda/status/1135672922608087040

HTML5 games may not provide access to digital commerce

The company has made a major move by stating in its guidelines that HTML5 games distributed in apps may not provide access to lotteries, real money gaming, or charitable donations, and may not support digital commerce. This functionality is appropriate only for code that's embedded in the binary and that can be reviewed by Apple. This guideline is also enforced for new apps now; existing apps must follow it by September 3, 2019.

VPN apps cannot provide access to sensitive data to third parties

Since VPNs provide access to sensitive data, VPN apps may not sell, use, or disclose any data to third parties for any purpose, and must commit to this in their privacy policy. Apps used for parental control, content blocking and security from approved providers can use the NEVPNManager API. This new guideline may bring the popular ad blocker AdGuard Pro back to iOS. It was discontinued last year because of the App Store policy which said, "Guideline 2.5.1 – Performance – Software Requirements. Your app uses a VPN profile or root certificate to block ads or other content in a third-party app, which is not allowed on the App Store." The updates announced in the App Store Review Guidelines at WWDC may make AdGuard Pro compliant again.

https://twitter.com/AdGuard/status/1135660616679645185
https://twitter.com/pveugen/status/1135743658148356096

MDM apps can't sell, use, or disclose data to third parties

MDM (Mobile Device Management) provides access to sensitive data, so MDM apps must request the mobile device management capability, and they may only be offered by commercial enterprises such as business organizations or government agencies and, in some cases, by companies using MDM for parental controls. MDM apps may not sell, use, or disclose any data to third parties for any purpose, and must also commit to this in their privacy policy.

Health data can't be shared with third parties

Apps may use a user's health data to provide a benefit directly to that user, and the data is not to be shared with a third party. The developer must also disclose to the user the specific health data collected from the device.

Information gathered without user consent won't be allowed on the App Store

Apps that compile information from any source that does not come directly from the user, or that lacks the user's explicit consent (even public databases), are not permitted on the App Store.

Apps need consent for data collection

Apps must get consent for data collection, even if the data is considered anonymous at the time of collection or immediately afterwards. Many are confused about this update, with some concerns about using the Wikipedia API.

https://twitter.com/jcampbell_05/status/1135679675026628608

As developers speculate about the changes in the guidelines, many are still wondering how the rule changes will affect them and are looking forward to some clarity.

Health apps

Apple has also introduced a few health apps that could be useful for users; the highlights are below.

Noise app

Apple introduced the Noise app for Apple watchOS 6, which detects loud environments and notifies users when it thinks they are at risk of hearing damage. The app uses the watch's built-in microphone to measure decibels at concerts, theaters, construction zones, parades, and other loud situations that usually aren't good for the ears. To achieve this, the app needs to keep track of what users are hearing, and such apps usually scare people because they look like 'always-listening' technology. Dr. Sumbul Desai, Apple's VP of health, clarified, "It only periodically samples and does not record or save any audio." According to the company, no audio or environmental sound is saved or sent to Apple.

Menstrual cycle tracking feature

Apple also unveiled a menstrual cycle tracking feature, called Cycle Tracking, at the conference. Women can now easily log their symptoms and receive notifications when their periods are about to begin, and they can also receive a fertility window prediction. The feature is also available in the Health app on iPhone with iOS 13. Apple's VP of health Sumbul Desai said, "We are so excited to bring more focus to this incredibly important aspect of women's health." But some users are concerned about the company collecting fertility data.

https://twitter.com/Vince34359049/status/1135677667859034112

Others think the feature is not new, since users have long used such applications to track their cycles.

https://twitter.com/DrShark/status/1135773575154216960

Apple has taken steps towards strengthening security and maintaining privacy by introducing new features and apps and updating its guidelines, but only time will tell how effective they turn out to be.
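For developers curious what adopting Sign In with Apple looks like server-side: the flow issues a JWT identity token signed by Apple that your backend verifies. The sketch below uses the PyJWT library; the endpoint and claim names reflect the documented flow as we understand it, but treat the details as assumptions and confirm them against Apple's developer documentation.

```python
# Hedged sketch of server-side validation of a Sign In with Apple identity
# token (a JWT signed by Apple). Requires: pip install "PyJWT[crypto]"
import jwt
from jwt import PyJWKClient

APPLE_KEYS_URL = "https://appleid.apple.com/auth/keys"  # Apple's public JWKS

def verify_identity_token(identity_token: str, client_id: str) -> dict:
    # Fetch the Apple signing key matching the token's "kid" header.
    signing_key = PyJWKClient(APPLE_KEYS_URL).get_signing_key_from_jwt(identity_token)
    # Verify signature, audience (your app's client ID) and issuer in one call.
    claims = jwt.decode(
        identity_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=client_id,
        issuer="https://appleid.apple.com",
    )
    # "sub" is the stable, app-scoped user identifier; "email" may be the
    # private relay address if the user chose to hide their real email.
    return {"user_id": claims["sub"], "email": claims.get("email")}
```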


Following EU, China releases AI Principles

Vincy Davis
03 Jun 2019
5 min read
Last week, the Beijing Academy of Artificial Intelligence (BAAI) released a set of 15 principles calling for Artificial Intelligence to be beneficial and responsible, termed the Beijing AI Principles. They are proposed as an initiative for the research, development, use, governance and long-term planning of AI, and serve as a guideline for the research and development of AI, the use of AI, and the governance of AI.

The Beijing Academy of Artificial Intelligence (BAAI) is an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. The principles were developed in collaboration with Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and China's three big tech firms: Baidu, Alibaba, and Tencent.

Research and Development

- Do Good: AI should be developed to benefit all humankind and the environment, and to enhance the well-being of society and ecology.
- For Humanity: AI should always serve humanity and conform to human values as well as the overall interests of humankind. It should never go against, exploit or harm human beings.
- Be Responsible: Researchers developing AI should be aware of its potential ethical, legal, and social impacts and risks, and take concrete actions to reduce and avoid them.
- Control Risks: AI systems should be developed in a way that ensures the security of data along with the safety and security of the AI system itself.
- Be Ethical: AI systems should be trustworthy, meaning traceable, auditable and accountable.
- Be Diverse and Inclusive: The development of AI should reflect diversity and inclusiveness, so that nobody is easily neglected or underrepresented in AI applications.
- Open and Share: An open AI platform will help avoid data and platform monopolies and share the benefits of AI development.

Use of AI

- Use Wisely and Properly: Users of AI systems should have sufficient knowledge and ability to avoid possible misuse and abuse, so as to maximize the benefits and minimize the risks.
- Informed Consent: AI systems should be developed such that, in unexpected circumstances, users' own rights and interests are not compromised.
- Education and Training: Stakeholders of AI systems should be educated and trained to help them adapt to the psychological, emotional and technical impact of AI development.

Governance of AI

- Optimizing Employment: Developers should take a cautious attitude towards the potential impact of AI on human employment. Explorations of human-AI coordination and new forms of work should be encouraged.
- Harmony and Cooperation: These should be embedded in the AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI in the philosophy of "Optimizing Symbiosis".
- Adaptation and Moderation: Revisions of AI principles, policies, and regulations should be actively considered to adjust them to the development of AI, to the benefit of society and nature.
- Subdivision and Implementation: Various fields and scenarios of AI applications should be actively researched so that more specific and detailed guidelines can be formulated.
- Long-term Planning: Constant research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged, to keep AI beneficial to society and nature in the future.

These AI principles aim to enable the healthy development of AI in a way that supports the human community and a shared future, to the benefit of humankind and nature in general.

China releasing its own version of AI principles has come as a surprise to many, as China has long been infamous for using AI to monitor citizens. The move comes after the European High-Level Expert Group on AI released its 'Ethics guidelines for trustworthy AI' this year. The Beijing AI Principles are also similar to the AI principles published by Google last year, which likewise provided guidelines intended to make AI applications beneficial for humans. By releasing its own version of AI principles, is China signalling to the world that it's ready to talk about AI ethics, especially after the U.S. blacklisted China's telecom giant Huawei as a threat to national security?

As expected, some users are surprised by China's sudden care for AI ethics.

https://twitter.com/sherrying/status/1133804303150305280
https://twitter.com/EBKania/status/1134246833100865536

Others are impressed with the move.

https://twitter.com/t_gordon/status/1135491979276685312
https://twitter.com/mgmazarakis/status/1134127349392465920

Visit the BAAI website to read more details of the Beijing AI Principles.


DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Savia Lobo
03 Jun 2019
3 min read
Recently, researchers from DeepMind released research in which they designed AI agents that can team up to play Quake III Arena's Capture the Flag mode. The highlight of this research is that these agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

We have previously seen instances of an AI agent beating humans in video games like StarCraft II and Dota 2. However, those games did not involve agents playing in a complex environment or require teamwork and interaction between multiple players. In the research paper, "Human-level performance in 3D multiplayer games with population-based reinforcement learning", a group of 30 agents were collectively trained to play five-minute rounds of Capture the Flag, a game mode in which teams must retrieve flags from their opponents while retaining their own.

https://youtu.be/OjVxXyp7Bxw

While playing rounds of Capture the Flag, the DeepMind agents were able to outperform human teammates, even with their reaction time slowed down to that of a typical human player. And rather than a number of AIs teaming up against a group of human players, the AI was able to play alongside them as well. Using reinforcement learning, the AI taught itself the game, picking up its rules over thousands of matches in randomly generated environments. "No one has told [the AI] how to play the game — only if they've beaten their opponent or not. The beauty of using [an] approach like this is that you never know what kind of behaviors will emerge as the agents learn," said Max Jaderberg, a research scientist at DeepMind who recently worked on AlphaStar, a machine learning system that bested a human team of professionals at StarCraft II.

Greg Brockman, a researcher at OpenAI, told The New York Times, "Games have always been a benchmark for A.I. If you can't solve games, you can't expect to solve anything else." According to The New York Times, "such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic."

Talking about limitations, the researchers say, "Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates."

"Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged", the paper states.

To know more about this news in detail, read the official research paper on Science.
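For readers unfamiliar with reinforcement learning from a bare win/loss signal, here is a toy sketch. It is emphatically not DeepMind's FTW agent (which uses deep networks and population-based training); it only illustrates the core idea Jaderberg describes: the agent is told nothing about the game except whether it won.

```python
# Toy sketch of reinforcement learning from only a match outcome: a tabular
# agent in a tiny corridor "game" that is never told the rules, only whether
# it reached the goal (+1) or not (-1) at the end of each episode.
import random
from collections import defaultdict

GOAL, MAX_STEPS, N_ACTIONS = 6, 10, 2
Q = defaultdict(float)                  # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.2, 0.95, 0.1

def episode():
    state, steps, visited = 0, 0, []
    while state != GOAL and steps < MAX_STEPS:
        if random.random() < eps:       # explore occasionally
            action = random.randrange(N_ACTIONS)
        else:                           # otherwise act greedily
            action = max(range(N_ACTIONS), key=lambda a: Q[(state, a)])
        visited.append((state, action))
        state += 1 if action == 1 else (-1 if state > 0 else 0)
        steps += 1
    # The only feedback is the final outcome of the "match".
    ret = 1.0 if state == GOAL else -1.0
    # Monte-Carlo style backup of the outcome through the visited pairs.
    for (s, a) in reversed(visited):
        Q[(s, a)] += alpha * (ret - Q[(s, a)])
        ret *= gamma

for _ in range(2000):
    episode()
# After training, the greedy policy should choose "move right" everywhere.
print("Learned policy:",
      [max(range(N_ACTIONS), key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```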


Facebook argues it didn’t violate users' privacy rights and thinks there's no expectation of privacy because there is no privacy on social media

Amrata Joshi
03 Jun 2019
2 min read
After more than a year of scandals and data breaches, Facebook is leaving no stone unturned to paint itself in the right, a practice critics describe as ethics washing. The company has also been on the FTC's radar and is expected to be fined around $5 billion over its user data practices.

Last week, Facebook argued that it didn't violate users' privacy rights because there's no expectation of privacy when using social media, and the company is using the same argument to try to dismiss a lawsuit related to the Cambridge Analytica scandal. Facebook counsel Orin Snyder said during a pretrial hearing to dismiss the lawsuit, "There is no invasion of privacy at all because there is no privacy." Facebook didn't deny that third parties accessed users' data, but the company told US District Judge Vince Chhabria that there is no "reasonable expectation of privacy" on Facebook or any other social media site.

The argument sits oddly with the company's efforts to convince people that it knows how to protect their personal information. This month, Facebook COO Sheryl Sandberg said that she and Mark Zuckerberg will do "whatever it takes" to keep people safe on Facebook. Meanwhile, calls to curb Zuckerberg's control over Facebook have been doing the rounds as the issues around data privacy and security continue.

It seems Chhabria will make sure that at least some of the lawsuit continues, saying in an order before the hearing (PDF) that the plaintiffs should expect the court to accept their argument that private information was disclosed without express consent.

Canva faced security breach, 139 million users' data hacked: ZDNet reports

Fatema Patrawala
28 May 2019
3 min read
Last Friday, ZDNet reported a data breach at Canva, a popular Sydney-based startup which offers a graphic design service. According to the hacker, who directly contacted ZDNet, data of roughly 139 million users was compromised in the breach. Responsible for the breach is a hacker known online as GnosticPlayers. Since February this year, they have put up for sale the data of 932 million users, reportedly stolen from 44 companies around the world. "I download everything up to May 17," the hacker told ZDNet. "They detected my breach and closed their database server."

Source: ZDNet website

In a statement on the Canva website, the company confirmed the attack and said it has notified the relevant authorities. It also tweeted about the breach on May 24, as soon as it discovered the hack, and recommended that users change their passwords immediately.

https://twitter.com/canva/status/1132086889408749573

"At Canva, we are committed to protecting the data and privacy of all our users and believe in open, transparent communication that puts our communities' needs first," the statement said. "On May 24, we became aware of a security incident. As soon as we were notified, we immediately took steps to identify and remedy the cause, and have reported the situation to authorities (including the FBI). We're aware that a number of our community's usernames and email addresses have been accessed."

Stolen data included details such as customer usernames, real names, email addresses, and city and country information. For 61 million users, password hashes were also present in the database. The passwords were hashed with the bcrypt algorithm, currently considered one of the most secure password-hashing algorithms around. For other users, the stolen information included Google tokens, which users had used to sign up for the site without setting a password. Of the 139 million users, 78 million had a Gmail address associated with their Canva account.

Canva is one of Australia's biggest tech companies. Founded in 2012, the site has shot up the Alexa website traffic rank since launch and has been ranking among the top 200 most popular websites. Three days ago, the company announced it had raised $70 million in a Series D funding round and is now valued at a whopping $2.5 billion. Canva also recently acquired two of the world's biggest free stock content sites, Pexels and Pixabay; details of Pexels and Pixabay users were not included in the stolen data.

According to reports from Business Insider, the community was dissatisfied with how Canva responded to the attack. IT consultant Dave Hall criticized the wording Canva used in a communication sent to users on Saturday, and he believes Canva did not respond fast enough.

https://twitter.com/skwashd/status/1132258055767281664

One Hacker News user commented, "It seems as though these breaches have limited effect on user behaviour. Perhaps I'm just being cynical but if you are aren't getting access and you are just getting hashed passwords, do people even care? Does it even matter? Of course names and contact details are not great. I get that. But will this even effect Canva?" Another user said, "How is a design website having 189M users? This is astonishing more than the hack!"
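For context on why the bcrypt-hashed passwords are the least worrying part of this breach, here is a short sketch of how bcrypt is typically used, via the common Python bcrypt package (an assumption about tooling on our part, not a detail Canva has published).

```python
# Sketch of bcrypt password hashing: a per-password random salt and a tunable
# cost factor make the hashes slow to brute-force even after a database leak.
import bcrypt

password = b"correct horse battery staple"

# gensalt() generates the salt and embeds it, plus the cost, in the hash.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print(hashed)  # e.g. b"$2b$12$...", safe to store; the salt is inside

# Verification re-derives the hash from the stored salt and compares;
# the plaintext password itself is never stored.
assert bcrypt.checkpw(password, hashed)
assert not bcrypt.checkpw(b"wrong guess", hashed)
```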


Grafana 6.2 released with improved security, enhanced provisioning, Bar Gauge panel, lazy loading and more

Vincy Davis
27 May 2019
3 min read
Last week, Torkel Ödegaard, co-founder of Grafana, released the stable version of Grafana 6.2. This version brings improved security, an enhanced provisioning workflow, a new Bar Gauge panel, Elasticsearch 7 support, and lazy loading of panels, among other things.

Improved Security

Datasources now store passwords and basic auth passwords in 'secureJsonData', which is encrypted by default, and browser caching is now disabled for full-page requests to reduce the risk of sensitive information being exposed. Upgrade notes are provided for migrating existing data sources to encrypted storage.

Provisioning

Provisioning configs now support environment variables, and configs can be reloaded without restarting Grafana. Provisioned dashboards can no longer be deleted or saved from the UI; instead, when a user tries to do so, the dialog shows the relative path to the provisioning file.

Bar Gauge Panel

This exciting new panel is similar to the current Gauge panel and shares almost all of its options. The Bar Gauge makes much better use of both horizontal and vertical space, which helps it stack efficiently, and it comes with three unique display modes: Basic, Gradient, and Retro LED.

Panels Without Title

Sometimes panels do not need a title, yet the panel header still takes up space, which gives 'Singlestat' panels bad vertical centering. In version 6.2, Grafana allows panel content to use the full panel height when there is no panel title.

Lazy Loading of Panels Out of View

Grafana will no longer issue data queries for panels that are not visible. This greatly reduces the load on data source backends when loading dashboards with many panels, and it was one of the most requested features from Grafana users.

Minor Features and Fixes

- User time zone support in 'Explore'
- Support for configuring timeout durations and retries
- Support for multiple subscriptions per datasource
- An Elasticsearch fix for displaying percentile metrics in the table panel
- InfluxDB support for the POST HTTP verb
- An important CloudWatch fix for the default alias disappearing in v6.1
- A new 'Search' option

Ödegaard has also notified users to switch to the new package repository soon, as the deprecated repo will be removed on July 1. The new repository contains all the old releases, so users will not have to upgrade just to switch repositories.

Users of Grafana are quite happy with the new 6.2 version.

https://twitter.com/PeterZaitsev/status/1131211702169739269

A user on Hacker News commented, "Lazy loading is a feature I was waiting for long time, hopefully this time is here to stay!" Another user added, "Those new gradient bar gauges look great, can't wait to use them on some environmental data."

Read more about the Grafana v6.2 release on the Grafana blog.


First American Financial Corp. leaked millions of title insurance records, KrebsOnSecurity reports

Amrata Joshi
27 May 2019
3 min read
Last week, First American Financial Corporation, a provider of title insurance, was found to be leaking hundreds of millions of documents related to mortgage deals dating back to 2003, KrebsOnSecurity reports. The vulnerability exposed digitized records, including mortgage and tax records, bank account numbers and statements, wire transaction receipts, social security numbers, and driver's license images, without authentication. The company said it had disabled the part of its website that served those files around 2 PM ET on Friday, addressing the vulnerability soon after it was notified by KrebsOnSecurity.

https://twitter.com/briankrebs/status/1132026003386241029

"We are currently evaluating what effect, if any, this had on the security of customer information. We will have no further comment until our internal review is completed", the company said in a statement.

According to KrebsOnSecurity, "Many of the exposed files are records of wire transactions with bank account numbers and other information from home or property buyers and sellers." Ben Shoval, the developer who notified KrebsOnSecurity about the data exposure, said, "That's because First American is one of the most widely-used companies for real estate title insurance and for closing real estate deals — where both parties to the sale meet in a room and sign stacks of legal documents."

Shoval shared a document link, given by First American for a recent transaction, which pointed to a record number that was nine digits long and dated April 2019. Modifying the document number in the link in either direction would yield other people's records from before or after the same date and time. The earliest document number available on the site was 000000075, which pointed to a real estate transaction from 2003.

A spokesperson from First American Financial Corporation shared the following statement: "First American has learned of a design defect in an application that made possible unauthorized access to customer data. At First American, security, privacy and confidentiality are of the highest priority and we are committed to protecting our customers' information. The company took immediate action to address the situation and shut down external access to the application. We are currently evaluating what effect, if any, this had on the security of customer information. We will have no further comment until our internal review is completed."

The information leaked by First American could be misused by scammers running Business Email Compromise (BEC) scams, which impersonate real estate agents.

https://twitter.com/scottpants/status/1132031820361420801
https://twitter.com/aznalabukm/status/1132807048092147713

To know more about this news, check out the post by KrebsOnSecurity.
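The flaw Shoval describes is a textbook insecure direct object reference: sequential, unauthenticated document numbers that anyone can walk through. The sketch below illustrates the two standard mitigations, unguessable identifiers plus a server-side authorization check. It is our own illustrative code, not a description of First American's system.

```python
# Sketch: replace sequential record numbers with random tokens, and still
# authorize every fetch server-side. Illustrative only.
import secrets

documents = {}  # token -> (owner_id, record)

def store_document(owner_id: str, record: dict) -> str:
    # A 128-bit random URL-safe token instead of a sequential record number:
    # neighbours of a leaked link reveal nothing.
    token = secrets.token_urlsafe(16)
    documents[token] = (owner_id, record)
    return token

def fetch_document(token: str, requester_id: str) -> dict:
    owner_id, record = documents[token]
    # Even an unguessable link must still be authorized on the server.
    if requester_id != owner_id:
        raise PermissionError("requester is not a party to this transaction")
    return record

link = store_document("buyer-42", {"wire_receipt": "..."})
print(fetch_document(link, "buyer-42"))   # OK
# fetch_document(link, "someone-else")    # raises PermissionError
```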


Applitools announces ‘2019 State of Automated Visual Testing Report’ that highlights the competitive advantages of visual quality

Amrata Joshi
27 May 2019
3 min read
Last week, Applitools, the provider of AI-powered end-to-end visual testing and monitoring, announced the "2019 State of Automated Visual Testing Report." According to the report, as the number of screens and pages across applications, operating systems, websites, and devices continues to grow, continuous management of a web application's visual quality is becoming a competitive advantage for businesses worldwide.

The report was conducted as an independent survey of over 350 companies; it outlines research findings on visual testing and quality and identifies key patterns behind excellence in Application Visual Management. With the help of this research, organizations can better understand the business challenges and opportunities associated with visual quality.

This year's report covers a few important findings:

- Visual bugs are common, and they typically cost the R&D team between $1.75m and $6.9m annually to fix. The report further says that the average release to production has 9 visual bugs, and over 30 percent of companies ship more than 22 bugs per release, costing them over $143,000 per release. Even for a team pushing towards CI/CD (continuous integration and continuous delivery) and releasing only four times per month, these common visual bugs hurt visual quality.
- With the increasing number of screens and pages and rising expectations of faster release cycles, the goal of continuous visual quality will become much more challenging, underscoring the need for Visual AI to help meet it.
- CI/CD and digital transformation initiatives are necessary for dealing with the enormous challenge of visual quality, yet according to over 64 percent of companies surveyed, these initiatives are either non-existent or failing to deliver as planned.
- Companies using automated visual testing are building competitive advantage via improvements to quality, coverage, release velocity, and team morale. The findings show overall app test coverage increasing by over 60 percent, visual quality improving by 3.6x, and monthly release velocity more than doubling.
- Only 12 percent of companies surveyed were using automated visual testing as of Q1 2019, which suggests that competitive advantage is possible for companies who quickly adopt the technique this year. By the end of the year, an additional 38 percent of companies expect to have initiated automated visual testing as a core strategy.

Gil Sever, co-founder and CEO of Applitools, wrote in an email to us, "Today, software equals brand. Managing application quality effectively as releases occur more frequently is becoming a competitive advantage for all companies, regardless of vertical market, company size, or geography." He further added, "Continuous visual quality is now a goal for QA and software development teams as the stakes continue to get higher for organizations competing for customer attention and retention in this age of across the board digital transformation."
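To make the mechanics concrete, here is a naive sketch of the basic operation behind automated visual testing: pixel-diffing a candidate screenshot against an approved baseline, using the Pillow library. Products like Applitools layer Visual AI on top to ignore immaterial rendering noise; this bare version flags any change beyond a crude per-channel tolerance.

```python
# Naive visual regression check with Pillow: compare a fresh screenshot
# against an approved baseline image. Illustrative, not Applitools' Visual AI.
from PIL import Image, ImageChops

def has_visual_regression(baseline_path: str, candidate_path: str,
                          tolerance: int = 0) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    candidate = Image.open(candidate_path).convert("RGB")
    if baseline.size != candidate.size:
        return True  # layout change: image dimensions differ
    diff = ImageChops.difference(baseline, candidate)
    # getextrema() returns (min, max) per channel of the pixel differences;
    # with tolerance=0, any changed pixel counts as a regression.
    extrema = diff.getextrema()
    return any(mx > tolerance for _, mx in extrema)

# Usage: flag a build if the new screenshot drifts from the baseline.
# print(has_visual_regression("baseline.png", "candidate.png", tolerance=8))
```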

Ireland’s Data Protection Commission initiates an inquiry into Google’s online Ad Exchange services

Savia Lobo
23 May 2019
3 min read
Ireland's Data Protection Commission (DPC) has opened an inquiry into Google Ireland Ltd. over user data collection during online advertising. The DPC will examine whether Google's online Ad Exchange complies with the General Data Protection Regulation (GDPR). The Data Protection Commission became the lead supervisory authority for Google in the European Union in January this year, and this is the Irish commission's first statutory inquiry into Google since then. The DPC also offers a so-called "One Stop Shop" for data protection regulation across the EU.

The investigation follows last year's privacy complaint, filed under Europe's GDPR, concerning Google Adtech's real-time bidding (RTB) system. The complaint was filed by a host of privacy activists and Dr. Johnny Ryan of the privacy-focused browser Brave. Ryan accused Google's internet ad services business, DoubleClick/Authorized Buyers, of leaking users' intimate data to thousands of companies. Google bought DoubleClick, an advertising serving and tracking company, for $3.1bn (£2.4bn) in 2007. DoubleClick uses web cookies to track browsing behavior online by IP address in order to deliver targeted ads. Also this week, a new GDPR complaint against real-time bidding was filed in Spain, the Netherlands, Belgium, and Luxembourg.

https://twitter.com/mikarv/status/1130374705440018433

Read More: GDPR complaint in EU claim billions of personal data leaked via online advertising bids

Ireland's statutory inquiry is pursuant to section 110 of the Data Protection Act 2018 and will also investigate based on the various suspicions received. "The GDPR principles of transparency and data minimization, as well as Google's retention practices, will also be examined", the DPC blog mentions.

It has been a year since the GDPR was introduced on May 25, 2018, giving Europeans new powers to control their data. Ryan said in a statement, "Surveillance capitalism is about to become obsolete. The Irish Data Protection Commission's action signals that now — nearly one year after the GDPR was introduced — a change is coming that goes beyond just Google. We need to reform online advertising to protect privacy, and to protect advertisers and publishers from legal risk under the GDPR".

https://twitter.com/johnnyryan/status/1131246597139062791

Google was also fined 50 million euros ($56 million) earlier this year by France's privacy regulator, the first penalty for a U.S. tech giant since the EU's GDPR law was introduced. And in March, the EU fined Google 1.49 billion euros for antitrust violations in online advertising, the third antitrust fine by the European Union against Google since 2017.

Read More: European Union fined Google 1.49 billion euros for antitrust violations in online advertising

A Google spokesperson told CNBC, "We will engage fully with the DPC's investigation and welcome the opportunity for further clarification of Europe's data protection rules for real-time bidding. Authorized buyers using our systems are subject to stringent policies and standards."

To know more about this news, head over to DPC's official press release.


Google and Binomial come together to open-source Basis Universal Texture Format

Amrata Joshi
23 May 2019
3 min read
Yesterday, the teams at Google and Binomial announced that they have partnered to open source the Basis Universal texture codec, a supercompressed GPU texture and texture video compression system. It is used to improve the performance of transmitting images on the web, as well as within desktop and mobile applications, while maintaining GPU efficiency. The system fills an important gap in the graphics compression ecosystem and complements earlier work on Draco geometry compression, which is used for compressing and decompressing 3D geometric meshes and point clouds.

The Basis Universal texture format is 6-8 times smaller than JPEG on the GPU and is a great alternative to current GPU compression methods, which are inefficient. It is used to create compressed textures that work well for use cases such as games, maps, photos, virtual and augmented reality, and small videos. Without a universal texture format, developers either have to use GPU formats and accept the storage size, or use other formats with reduced storage size. Maintaining many different GPU formats is a burden on the whole ecosystem, from GPU manufacturers to software developers to end users.

Image source: Google blog

How does the Basis Universal texture format work?

First, the image is compressed using the encoder, with the quality settings chosen to suit the project; multiple images can also be submitted for small videos or for optimization purposes. Then the transcoder code is inserted before rendering, turning the intermediary format into a GPU format the hardware can read directly. The image therefore stays compressed throughout the process, even on the GPU; instead of decoding and reading the whole image, the GPU reads only the parts it needs.

Major improvements

- The Basis Universal format now supports up to 16K codebooks for both endpoints and selectors, for higher-quality textures.
- It uses a new prediction scheme for block endpoints.
- With this release, RLE codes are implemented for all symbol types, for high efficiency on simpler textures.

Google's official blog post reads, "With this partnership, we hope to see the adoption of the transcoder in all major browsers to make performant cross-platform compressed textures accessible to everyone via the WebGL API, and the forthcoming WebGPU API."

To know more about this news, check out Google's blog post.
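As a rough sketch of that two-step workflow in practice: the open-sourced repository ships a command-line encoder, basisu, and a C++/WebAssembly transcoder. The Python below drives the encoder via subprocess; the exact invocation and output naming are assumptions on our part, and the transcoding call shown in the comment is hypothetical, so check the repository's README before relying on either.

```python
# Hedged sketch of the two-step Basis Universal workflow.
# Assumes the "basisu" encoder CLI from github.com/BinomialLLC/basis_universal
# is on PATH; flags and output naming are assumptions, not verified behavior.
import subprocess
from pathlib import Path

def encode_to_basis(png_path: str) -> Path:
    # Step 1 (build time): supercompress the source image into a .basis file.
    subprocess.run(["basisu", png_path], check=True)
    return Path(png_path).with_suffix(".basis")

# Step 2 (load time, in the app or via the WebAssembly transcoder in a
# browser): transcode the .basis payload into whichever compressed GPU format
# the device supports (e.g. BC7, ETC2, ASTC), so the texture stays compressed
# all the way onto the GPU. The transcoder ships as C++/WASM; a Python binding
# is hypothetical and named here only to make the flow concrete:
# gpu_texture = basis_transcoder.transcode(basis_bytes, target="BC7")
```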


Samsung AI lab researchers present a system that can animate heads with one-shot learning

Amrata Joshi
23 May 2019
5 min read
Recent works have shown how to obtain highly realistic human head images by training convolutional neural networks to generate them. Creating such a personalized talking head model, however, has required training on a large dataset of images of a single person. Researchers from the Samsung AI Center have now presented a system with few-shot capability, described in the paper "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models". The system performs lengthy meta-learning on a large dataset of videos, and then frames few- and one-shot learning of neural talking head models of previously unseen people with the help of high-capacity generators and discriminators.

https://twitter.com/DmitryUlyanovML/status/1131155659305705472

The system initializes the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly. The researchers show in the paper that such an approach can learn highly realistic and personalized talking head models of new people, and even of portrait paintings.

The researchers consider the task of creating personalized photorealistic talking head models: systems that can synthesize video sequences of speech expressions and mimics of a particular individual.

https://youtu.be/p1b5aiTrGzY

More specifically, they consider the problem of synthesizing photorealistic personalized head images given a set of face landmarks, which drive the animation of the model. Such a system has practical applications for telepresence, including videoconferencing and multi-player games, as well as in the special effects industry.

Why is synthesizing realistic talking head sequences difficult?

Synthesizing realistic talking head sequences is difficult for two major reasons. The first is that human heads have high photometric, geometric and kinematic complexity, which makes faces hard to model. The second complicating factor is the acuteness of the human visual system: even minor mistakes in the modelled appearance stand out.

What have the researchers done to overcome the problem?

The researchers present a system for creating talking head models from a handful of photographs, known as few-shot learning. The system can even generate a result based on a single photograph (one-shot learning), though adding a few more photographs increases the fidelity of personalization. The talking heads created by the system can handle a large variety of poses, beyond the abilities of warping-based systems.

The few-shot learning ability is obtained by extensive pre-training (meta-learning) on a large corpus of talking head videos corresponding to different speakers with diverse appearance. In the course of meta-learning, the system simulates few-shot learning tasks and learns to transform landmark positions into realistic-looking personalized photographs. A handful of photographs of a new person then sets up a new adversarial learning problem, with the high-capacity generator and discriminator pre-trained via meta-learning; the new problem converges to a state that generates realistic and personalized images after a few training steps.

In the experiments, the researchers compare talking heads created by their system with alternative neural talking head models through quantitative measurements and a user study. They also demonstrate several use cases of their talking head models, including video synthesis using landmark tracks extracted from video sequences, and puppeteering (video synthesis of a certain person based on the face landmark tracks of a different person). Two datasets with talking head videos are used for quantitative and qualitative evaluation: VoxCeleb1 [26] (256p videos at 1 fps) and VoxCeleb2 [8] (224p videos at 25 fps), the second having approximately 10 times more videos than the first. VoxCeleb1 is used for comparisons with baselines and for ablation studies, while the potential of the approach is shown on VoxCeleb2.

To conclude, the researchers have presented a framework for meta-learning of adversarial generative models that can train highly realistic virtual talking heads in the form of deep generator networks. A handful of photographs (as few as one) is needed to create a new model, and a model trained on 32 images achieves a perfect realism and personalization score in their user study (for 224p static images).

The key limitations of the method are the mimics representation and the lack of landmark adaptation: landmarks from a different person can lead to a noticeable personality mismatch. If someone wants to create "fake" puppeteering videos without such a mismatch, some landmark adaptation is needed. The paper notes, "We note, however, that many applications do not require puppeteering a different person and instead only need the ability to drive one's own talking head. For such scenario, our approach already provides a high-realism solution."

To know more, check out the paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.
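To make the few-shot stage concrete, here is a heavily simplified PyTorch sketch of the training-loop shape described above: a person embedding computed from k photographs, then a few adversarial fine-tuning steps of a landmark-conditioned generator against a discriminator. The toy linear modules stand in for the paper's deep networks; nothing here reproduces the actual architecture.

```python
# Toy sketch of few-shot adversarial fine-tuning (NOT the paper's model).
import torch
import torch.nn as nn

class Embedder(nn.Module):       # maps face images to a person/style vector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):      # maps landmarks + style vector to an image
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(68 * 2 + 128, 3 * 64 * 64)
    def forward(self, landmarks, style):
        z = torch.cat([landmarks.flatten(1), style], dim=1)
        return self.net(z).view(-1, 3, 64, 64)

class Discriminator(nn.Module):  # scores realism of (image, landmarks) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64 + 68 * 2, 1))
    def forward(self, img, landmarks):
        z = torch.cat([img.flatten(1), landmarks.flatten(1)], dim=1)
        return self.net(z)

k = 8                                        # few-shot: k photos of a new person
photos = torch.randn(k, 3, 64, 64)           # stand-in for the k photographs
landmarks = torch.randn(k, 68, 2)            # stand-in for tracked face landmarks

E, G, D = Embedder(), Generator(), Discriminator()  # pre-trained via meta-learning
# Person-specific embedding averaged over the k photos (frozen here).
style = E(photos).mean(0, keepdim=True).detach().expand(k, -1)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

for step in range(100):                      # a few adversarial fine-tuning steps
    fake = G(landmarks, style)
    # Discriminator: real pairs score high, generated pairs score low (hinge loss).
    d_loss = (torch.relu(1 - D(photos, landmarks)).mean()
              + torch.relu(1 + D(fake.detach(), landmarks)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while matching the target photos.
    g_loss = (-D(G(landmarks, style), landmarks).mean()
              + nn.functional.l1_loss(G(landmarks, style), photos))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```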
Google and Facebook allegedly pressured and “arm-wrestled” EU expert group to soften European guidelines for fake news: Open Democracy Report

Fatema Patrawala
22 May 2019
6 min read
Yesterday, Open Democracy reported on Investigate Europe's finding that the EU's instruments against disinformation have remained largely ineffective. A day before the European elections, the tech giants Facebook and Google are alleged to have sabotaged the design of EU regulation on fake news and disinformation. According to new testimony which Investigate Europe collected from insiders, Google and Facebook pressured and “arm-wrestled” a group of experts to soften European guidelines on online disinformation and fake news.

The EU's expert group met last year in response to the spread of fake news and disinformation seen in the Brexit referendum and in the US election of President Donald Trump in 2016. The experts' task was to help prevent the spread of disinformation, particularly around the European parliamentary elections taking place now.

The expert group's report was published in March last year, and in September the EU Code of Practice on Disinformation was announced, under which the platforms agreed to self-regulate following common standards. The European Union pushed platforms like Google, Facebook and Twitter to sign this voluntary Code of Practice. The tech companies committed to naming their advertising clients and to acting against fake accounts, i.e. false identities, on their platforms. They also agreed to investigate the spread of disinformation and fake news on their platforms. In addition, representatives from Facebook, Google and Twitter agreed to submit monthly reports to the EU Commissioners. "It's the first time in the world that companies have voluntarily agreed to self-regulatory measures to combat misinformation," the commission proclaimed.

Members of the expert group confirmed to Investigate Europe that Facebook and Google representatives undermined the work and opposed proposals to be more transparent about their business models. During the group's third meeting in March 2018, "there was heavy arm-wrestling in the corridors from the platforms to conditionalise the other experts", says a member of the group, speaking on condition of anonymity.

"We were blackmailed," says another member, Monique Goyens, director-general of BEUC. "We wanted to know whether the platforms were abusing their market power," she added. In response, Facebook's chief lobbyist, Richard Allan, told her: "We are happy to make our contribution, but if you go in that direction, we will be controversial." He also threatened the expert group members, saying that if they did not stop talking about competition tools, Facebook would stop its support for journalistic and academic projects.

Google influenced and bribed the expert group members

Goyens added that Google did not have to fight as hard, having influenced group members in other ways: ten organisations with representatives in the expert group received money from Google. One of them was the Reuters Institute for the Study of Journalism at the University of Oxford; by 2020, the institute will have received almost €10m from Google to pay for its annual Digital News Report. A number of other organisations represented on the group also received funding from the Google Digital News Initiative, including the Poynter Institute and First Draft News.

Ska Keller, the German MEP, said, "It's been known for some time that Google, Facebook and other tech companies give money to academics and journalists.
There is a problem because they can use the threat of stopping this funding if these academics or journalists criticise them in any reporting they do."

The code of practice was not delivered as strongly as it was laid down

A year later, the code of conduct with the platforms remains no more than voluntary. The platforms agreed to take stronger action against fake accounts, to give preference to trustworthy sources and to be more transparent with their users, but progress has been limited.

Criticism of the code of practice came from a 'Sounding Board' convened by the European Commission to track the proposals drawn up in response to the expert group's report. The Sounding Board, which included representatives from media, civil society and academia, said that the code of practice "contains no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor process, and no compliance or enforcement tool. It is by no means self-regulation, and therefore the platforms, despite their efforts, have not delivered a code of practice."

"More systematic information is needed for the Commission to assess the efforts deployed by the online platforms to scrutinise the placement of ads and to better understand the effectiveness of the actions taken against bots and fake accounts," four commissioners said in a statement issued in March.

Goyens concluded, "The code of conduct was total crap. It was just a fig leaf. The whole thing was a rotten exercise. It was just taking more time, extending more time."

However, some reactions on Twitter suggest the story might be disinformation in itself. One Twitter user pointed out that what Facebook and Google opposed was sharing their content-ranking algorithms, and questioned how access to those would help the EU fight disinformation.

https://twitter.com/Je5usaurus_Rex/status/1130757849926197249

Discussions on Hacker News, meanwhile, place fake news in the context of the history of propaganda and censorship. One user commented, "The thing that makes me very concerned is the lack of context in these discussions of fake news of the history and continuing use of government propaganda. I hope young people who are familiar with 'fake news' but not necessarily as familiar with the history of propaganda and censorship will study that history. The big issue is there is enthusiasm for censorship, and the problem with censorship is who gets to decide what is real information and what is fake. The interests with the most power will have more control over information as censorship increases. Because the same power that is supposedly only used to suppress propaganda from some other country is used to suppress internal dissent or criticism. This is actually very dangerous for multiple reasons. One big reason is that propaganda (internal to the country, i.e. by the same people who will be deciding what is fake news) is usually critical in terms of jump-starting and maintaining enthusiasm for wars."

Facebook again, caught tracking Stack Overflow user activity and data

Facebook bans six toxic extremist accounts and a conspiracy theory organization

As US-China tech cold war escalates, Google revokes Huawei's Android support, allows only those covered under open source licensing

Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU

Fatema Patrawala
21 May 2019
6 min read
Amazon's facial recognition business could suffer a significant setback this week if privacy and civil liberties advocates and shareholders get their way. Yesterday, the ACLU said in an open letter to Amazon shareholders that they hold the power to stop Amazon's deployment of its invasive face surveillance technology, Rekognition.

https://twitter.com/Matt_Cagle/status/1130586385595789312

In January this year, shareholders proposed a resolution to limit the sale of Amazon's facial recognition tech, Rekognition, to law enforcement and government agencies. The technology has been found to be biased and inaccurate, and is regarded as an enabler of racial discrimination against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has also pitched it to Immigration and Customs Enforcement.

The first proposal asks the Board of Directors to stop sales of Rekognition, Amazon's face surveillance technology, to the government. The second demands an independent review of its human and civil rights impacts, particularly for people of color, immigrants, and activists, who have always been disproportionately impacted by surveillance. The ACLU's letter backs both measures and calls on shareholders to pass the resolutions.

Amazon's non-responsiveness and its failure to act

The ACLU's letter will be presented at Amazon's annual shareholder meeting on Wednesday, where the ACLU plans to accuse Amazon of "failing to act responsibly" by refusing to stop the sale of the technology to the government.

"Amazon has stayed the course," states the letter. "Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers, let alone address them."

"This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people," the letter continues. "It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person's emotions or state of mind."

"As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities," the letter adds. "Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations."

Facial recognition has become one of the most hot-button privacy topics in years. Recently, Amnesty International in Canada raised serious privacy issues with Google's Sidewalk Labs project in Toronto, with concerns that the project could normalize mass surveillance and threaten human rights.

Although Amazon Rekognition, a cloud-based facial recognition system, is still young, it is one of the most prominent systems available. But critics say the technology is flawed. Exactly a year before this week's shareholder meeting, the ACLU first raised "profound" concerns over Rekognition and its installation at airports, in public places and by police. Since then, the technology has been shown to struggle with detecting people of color.
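To ground the discussion, here is a minimal sketch of how a client compares two face images with Rekognition, using the real CompareFaces call in AWS's boto3 SDK. The file names, region and threshold value are illustrative assumptions, not a reconstruction of any police deployment, and running it requires valid AWS credentials.

# Illustrative use of the AWS Rekognition CompareFaces API via boto3.
# File names and region are made up for the example; this shows how a
# caller compares a probe photo against a known face, nothing more.
import boto3

client = boto3.client("rekognition", region_name="us-west-2")

with open("probe.jpg", "rb") as probe, open("mugshot.jpg", "rb") as mugshot:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": mugshot.read()},
        # Matches below this similarity are dropped from FaceMatches.
        # Choosing this number is the "confidence level" question raised
        # in Amazon's statement quoted later in this article.
        SimilarityThreshold=80.0,
    )

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")
if not response["FaceMatches"]:
    print("No match above the threshold")

The SimilarityThreshold parameter is the dial at the heart of the dispute: the ACLU ran its test at the service's default 80% threshold, while Amazon argues law enforcement should operate at 99%.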
In the ACLU's test, the system falsely matched 28 members of Congress against a mugshot database of people who had been arrested. This latest move is a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups who decry the use of facial recognition technology. In March, top AI researchers, including this year's Turing Award winner Yoshua Bengio, issued a joint statement calling on Amazon Web Services to stop all sales of its Rekognition facial-recognition technology to law enforcement agencies.

There has been pushback from government as well. Several municipalities have rolled out surveillance-curtailing laws and ordinances this year. Last week, San Francisco became the first major U.S. city government to ban the use of facial recognition. In April, the Oakland Privacy Advisory Commission released two key documents aimed at protecting Oaklanders' privacy: a proposed ban on facial recognition and the City of Oakland Privacy Principles.

"Amazon leadership has failed to recognize these issues," said the ACLU's letter. "This failure will lead to real-life harm." The ACLU said shareholders "have the power to protect Amazon from its own failed judgment."

Miles Brundage, a research scientist at OpenAI, commented that every facial recognition expert agrees Amazon is behaving irresponsibly, and yet it continues; he says Amazon is making a big, misguided bet on first-mover advantages in this area.

https://twitter.com/Miles_Brundage/status/1130504230110941184

Amazon has pushed back against the claims by arguing that the technology is accurate, and has largely criticized how the ACLU conducted its tests using Rekognition.

https://twitter.com/natashanyt/status/1130470342974222336

Amazon states, "Machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it's applied correctly. We should not throw away the oven because the temperature could be set wrong and burn the pizza. It is a very reasonable idea, however, for the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work."

It is interesting that Amazon compares facial recognition tech to an oven to justify its decision to sell the technology while transferring accountability to the buyer and user. Unlike an oven, which has a narrow range of applications and clear liabilities for misuse, say, wilfully causing harm to another person, facial recognition applications are wide and varied, and no clear regulations or accountability boundaries exist for the harms caused using them. In the absence of clear laws and regulations, enabling state surveillance and control with facial recognition products used by law enforcement and governing bodies can set a dangerous precedent, curtail individual freedoms, and exacerbate socio-economic inequalities. Hence, it is not just important but essential for civil liberty groups like the ACLU to urge Amazon and its shareholders to act quickly.

Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks

Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake