
Tech News

3709 Articles

Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Fatema Patrawala
23 May 2019
3 min read
According to Reuters, Amazon shareholders on Wednesday rejected a proposal to ban the sale of the company's facial recognition technology to governments. Shareholders also rejected proposals on climate change policy, salary transparency, and other equity issues; Amazon's annual proxy statement included 11 resolutions, and all 11 were reportedly voted down.

In January this year, activist shareholders proposed a resolution to limit the sale of Amazon's facial recognition technology, Rekognition, to law enforcement and government agencies. The technology has been found to be biased and inaccurate, and is regarded as an enabler of racial discrimination against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has also pitched it to Immigration and Customs Enforcement. The first proposal asked the Board of Directors to stop sales of Rekognition, Amazon's face surveillance technology, to the government. The second demanded an independent review of its human and civil rights impacts, particularly for people of color, immigrants, and activists, who have long been disproportionately impacted by surveillance.

The resolutions failed despite an effort by the ACLU and other civil rights groups to back the measures. On Tuesday, the civil liberties group wrote an open letter accusing the tech giant of being "non-responsive" to privacy concerns.

https://twitter.com/Matt_Cagle/status/1130586385595789312

Shankar Narayan of ACLU Washington made strong remarks on the vote: "The fact that there needed to be a vote on this is an embarrassment for Amazon's leadership team. It demonstrates shareholders do not have confidence that company executives are properly understanding or addressing the civil and human rights impacts of its role in facilitating pervasive government surveillance." He added, "While we have yet to see the exact breakdown of the vote, this shareholder intervention should serve as a wake-up call for the company to reckon with the real harms of face surveillance and to change course." In its letter, the ACLU said investors and shareholders hold the power to protect Amazon from its own failed judgment.

Amazon pushed back on claims that the technology is inaccurate, and called on the U.S. Securities and Exchange Commission to block the shareholder proposal prior to its annual shareholder meeting. The ACLU blocked Amazon's effort to stop the vote, amid growing scrutiny of the product. According to an Amazon spokeswoman, the resolutions failed by a wide margin. Amazon has defended its work, saying all users must follow the law, and it has added a web portal for people to report any abuse of the service.

The votes were non-binding, allowing the company to reject their outcome. They were also widely expected to fail: Amazon CEO Jeff Bezos holds 16% of the company's stock and voting rights, and its four largest institutional shareholders, The Vanguard Group, Blackrock, FMR, and State Street, collectively hold about the same share of voting rights as Bezos. Members of Congress also met at a House committee hearing on Wednesday to discuss the civil rights impact of facial recognition technology. Responding to the shareholder vote, Democratic U.S. Representative Jimmy Gomez said "that just means that it's more important that Congress acts."

Read next:
Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU
Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon S3 is retiring support for path-style API requests; sparks censorship fears


Ireland’s Data Protection Commission initiates an inquiry into Google’s online Ad Exchange services

Savia Lobo
23 May 2019
3 min read
Ireland's Data Protection Commission (DPC) has opened an inquiry into Google Ireland Ltd. over user data collection during online advertising. The DPC will examine whether Google's online Ad Exchange is compliant with the General Data Protection Regulation (GDPR). The Data Protection Commission became the lead supervisory authority for Google in the European Union in January this year, and this is the Irish commission's first statutory inquiry into Google since then. The DPC also offers a so-called "One Stop Shop" for data protection regulation across the EU.

The investigation follows last year's privacy complaint filed under Europe's GDPR concerning Google Adtech's real-time bidding (RTB) system. The complaint was filed by a host of privacy activists and Dr. Johnny Ryan of the privacy-focused browser Brave. Ryan accused Google's internet ad services business, DoubleClick/Authorized Buyers, of leaking users' intimate data to thousands of companies. Google bought DoubleClick, an ad serving and tracking company, for $3.1bn (£2.4bn) in 2007; DoubleClick uses web cookies to track browsing behaviour online by IP address in order to deliver targeted ads. Also this week, new GDPR complaints against real-time bidding were filed in Spain, the Netherlands, Belgium, and Luxembourg.

https://twitter.com/mikarv/status/1130374705440018433

Read More: GDPR complaint in EU claim billions of personal data leaked via online advertising bids

Ireland's statutory inquiry is pursuant to section 110 of the Data Protection Act 2018 and will also take account of the various complaints received. "The GDPR principles of transparency and data minimization, as well as Google's retention practices, will also be examined", the DPC blog mentions. It has been a year since GDPR came into force on May 25, 2018, giving Europeans new powers over how their data is controlled.

Ryan said in a statement, "Surveillance capitalism is about to become obsolete. The Irish Data Protection Commission's action signals that now — nearly one year after the GDPR was introduced — a change is coming that goes beyond just Google. We need to reform online advertising to protect privacy, and to protect advertisers and publishers from legal risk under the GDPR".

https://twitter.com/johnnyryan/status/1131246597139062791

Google was also fined 50 million euros ($56 million) earlier this year by France's privacy regulator, the first penalty for a U.S. tech giant since the EU's GDPR law was introduced. In March, the EU fined Google 1.49 billion euros for antitrust violations in online advertising, the third antitrust fine levied by the European Union against Google since 2017.

Read More: European Union fined Google 1.49 billion euros for antitrust violations in online advertising

A Google spokesperson told CNBC, "We will engage fully with the DPC's investigation and welcome the opportunity for further clarification of Europe's data protection rules for real-time bidding. Authorized buyers using our systems are subject to stringent policies and standards." To know more about this news, head over to the DPC's official press release.

Read next:
EU slaps Google with $5 billion fine for the Android antitrust case
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices


TP-Link kept thousands of vulnerable routers at risk of remote hijack, failed to alert customers

Vincy Davis
23 May 2019
3 min read
Yesterday, TechCrunch reported that thousands of TP-Link routers are still vulnerable to a bug discovered in January 2018. The vulnerability can allow even a low-skilled attacker to remotely gain full access to an affected router. Attackers could also target vulnerable devices at scale, scanning the web and hijacking routers that still use default passwords, the way the Mirai botnet downed Dyn. TP-Link updated the firmware page disclosing this vulnerability to its customers only after TechCrunch reached out.

https://twitter.com/zackwhittaker/status/1131221621287604229

In October 2017, Andrew Mabbitt, founder of U.K. cybersecurity firm Fidus Information Security, first discovered and disclosed a remote code execution bug in the TP-Link WR940N router. The vulnerabilities occurred because multiple code paths called strcpy on user-controllable, unsanitized input. TP-Link released a patch for the vulnerable router in November 2017. In January 2018, Mabbitt warned TP-Link that another router, the WR740N, was at risk from the same bug, because the company had reused the same vulnerable code in both devices.

TP-Link asked Mabbitt for more details about the CVE-2017-13772 (WR940N) vulnerability. After providing the details, Mabbitt requested an update three times and warned the company that he would disclose the bug publicly in March if no update arrived. On 28th March 2018, TP-Link provided Mabbitt with a beta version of the firmware to fix the issue. He confirmed the fix and asked TP-Link to release the live version of the firmware. After receiving no response for another month, Mabbitt publicly disclosed the vulnerability on 26th April 2018, with the live fix still unreleased. When TechCrunch enquired, the firmware update for the WR740N was still missing from the company's website as of 16th May 2019.

A TP-Link spokesperson told TechCrunch that the update was "currently available when requested from tech support", without explaining why. Only after TechCrunch highlighted the issue did TP-Link update the firmware page, on 17th May 2019, to include the latest security update. The company specified that the firmware update is meant to resolve issues in the previous firmware version and improve performance.

In a statement to TechCrunch, Mabbitt said, "TP-Link still had a duty of care to alert customers of the update if thousands of devices are still vulnerable, rather than hoping they will contact the company's tech support." This has been highly irresponsible behaviour on TP-Link's part: even after a third party discovered the bug more than a year ago, TP-Link did not keep its users informed.

This news comes at a time when both the U.K. and the U.S. state of California are set to implement laws to improve Internet of Things security. Soon, devices will have to be sold with unique default passwords, to prevent botnets from hijacking internet-connected devices at scale and using their collective bandwidth to knock websites offline.

https://twitter.com/dane/status/1131224748577312769

Read next:
Approx. 250 public network users affected during Stack Overflow's security attack
Intel discloses four new vulnerabilities labeled MDS attacks affecting Intel chips
A WhatsApp vulnerability enabled attackers to inject Israeli spyware on user's phones


Google and Binomial come together to open-source Basis Universal Texture Format

Amrata Joshi
23 May 2019
3 min read
Yesterday, the teams at Google and Binomial announced that they have partnered to open source Basis Universal, a supercompressed GPU texture and texture video compression codec. It improves the performance of transmitting images on the web and within desktop and mobile applications, while maintaining GPU efficiency. The system fills an important gap in the graphics compression ecosystem and complements earlier work on Draco geometry compression, which is used for compressing and decompressing 3D geometric meshes and point clouds.

The Basis Universal texture format is 6-8 times smaller than JPEG on the GPU, making it a strong alternative to current, inefficient GPU compression methods. It is used to create compressed textures for use cases such as games, maps, photos, virtual and augmented reality, and small videos. Without a universal texture format, developers have to either use GPU-native formats and accept their larger storage size, or use other formats with smaller storage at the cost of GPU efficiency. Supporting many different GPU formats is a burden on the whole ecosystem, from GPU manufacturers to software developers to end users.

Image source: Google blog

How does the Basis Universal texture format work?

First, the image is compressed using the encoder, choosing the quality settings that suit the project; users can also submit multiple images for small videos or for optimization purposes. Then, transcoder code inserted before rendering turns the intermediate format into a GPU format the machine can read directly. The image therefore stays compressed throughout the process, even on the GPU, and the GPU reads only the parts it needs instead of decoding and reading the whole image.

Major improvements

The Basis Universal texture format now supports up to 16K codebooks for both endpoints and selectors, for higher-quality textures. It uses a new prediction scheme for block endpoints, and with this release, RLE codes are implemented for all symbol types for higher efficiency on simpler textures.

Google's official blog post reads, "With this partnership, we hope to see the adoption of the transcoder in all major browsers to make performant cross-platform compressed textures accessible to everyone via the WebGL API, and the forthcoming WebGPU API." To know more about this news, check out Google's blog post.

Read next:
G Suite administrators' passwords were unhashed for 14 years, notifies Google
Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store!
Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
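The pipeline described in this article (compress once, keep the data compressed end to end, and decode only the blocks actually read) can be sketched with a deliberately simple toy. This is not the real Basis codec or its API; it is a hypothetical Python illustration that uses zlib to compress fixed-size blocks independently, so that any single block can be decoded without touching the rest.

```python
import zlib

class BlockCompressedTexture:
    """Toy stand-in for a supercompressed texture: each fixed-size
    block is stored compressed and decoded only on demand."""

    def __init__(self, data, block_size=64):
        self.block_size = block_size
        # Compress each block independently, so a block can later be
        # decoded on its own, without decompressing the whole image.
        self.blocks = [
            zlib.compress(data[i:i + block_size])
            for i in range(0, len(data), block_size)
        ]

    def read_block(self, index):
        # Decode just the requested block; the rest of the texture
        # stays compressed, as in the Basis pipeline described above.
        return zlib.decompress(self.blocks[index])

texture = BlockCompressedTexture(b"\x80" * 256, block_size=64)
print(len(texture.blocks))                     # 4 blocks stored
print(texture.read_block(2) == b"\x80" * 64)   # True
```

The design point this toy captures is that per-block (rather than whole-image) compression is what lets a consumer read only the parts it needs.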


Samsung AI lab researchers present a system that can animate heads with one-shot learning

Amrata Joshi
23 May 2019
5 min read
Recent works have shown how to obtain highly realistic human head images by training convolutional neural networks to generate them, but creating such a personalized talking head model requires training on a large dataset of images of a single person. Researchers from the Samsung AI Center have now presented a system with few-shot capability, described in the paper Few-Shot Adversarial Learning of Realistic Neural Talking Head Models. The system performs lengthy meta-learning on a large dataset of videos, and then frames few-shot and one-shot learning of neural talking head models of previously unseen people, with the help of high-capacity generators and discriminators.

https://twitter.com/DmitryUlyanovML/status/1131155659305705472

The system initializes the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and can be done quickly. The researchers show in the paper that such an approach can learn highly realistic and personalized talking head models of new people, and even of portrait paintings.

The researchers consider the task of creating personalized photorealistic talking head models: systems that can synthesize video sequences of speech expressions and mimics of a particular individual.

https://youtu.be/p1b5aiTrGzY

More specifically, they consider the problem of synthesizing photorealistic personalized head images given a set of face landmarks, which drive the animation of the model. Such a system has practical applications for telepresence, including videoconferencing, multi-player games, and the special effects industry.

Why is synthesizing realistic talking head sequences difficult?

There are two major reasons. First, human heads have high photometric, geometric, and kinematic complexity, which makes faces difficult to model. Second, the acuteness of the human visual system means that even minor mistakes in the modelled appearance are noticeable.

What did the researchers do to overcome the problem?

The researchers present a system for creating talking head models from a handful of photographs, known as few-shot learning. The system can even generate a result from a single photograph (one-shot learning), though adding a few more photographs increases the fidelity of personalization. The talking heads created by the system can handle a large variety of poses, going beyond the abilities of warping-based systems.

The few-shot learning ability is obtained by extensive pre-training (meta-learning) on a large corpus of talking head videos corresponding to different speakers with diverse appearance. In the course of meta-learning, the system simulates few-shot learning tasks and learns to transform landmark positions into realistic-looking personalized photographs. A handful of photographs of a new person then sets up a new adversarial learning problem, with a high-capacity generator and discriminator pre-trained via meta-learning. The new problem converges to a state that generates realistic and personalized images after only a few training steps.

In their experiments, the researchers compare talking heads created by their system with alternative neural talking head models, through quantitative measurements and a user study. They also demonstrate several use cases of their talking head models, including video synthesis using landmark tracks extracted from video sequences, and puppeteering (video synthesis of a certain person based on the face landmark tracks of a different person).

The researchers used two datasets of talking head videos for quantitative and qualitative evaluation: VoxCeleb1 (256p videos at 1 fps) and VoxCeleb2 (224p videos at 25 fps), the second having approximately 10 times more videos than the first. VoxCeleb1 is used for comparison with baselines and for ablation studies, while the potential of the approach is shown on VoxCeleb2.

To conclude, the researchers present a framework for meta-learning of adversarial generative models that can train highly realistic virtual talking heads in the form of deep generator networks. A handful of photographs (as few as one) is needed to create a new model, and a model trained on 32 images achieves perfect realism and personalization scores in their user study (for 224p static images).

The key limitations of the method are the mimics representation and the lack of landmark adaptation: using landmarks from a different person can lead to a noticeable personality mismatch, so creating "fake" puppeteering videos without such a mismatch would require some landmark adaptation. The paper notes, "We note, however, that many applications do not require puppeteering a different person and instead only need the ability to drive one's own talking head. For such scenario, our approach already provides a high-realism solution." To know more about this news, check out the paper Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.

Read next:
Samsung opens its AI based Bixby voice assistant to third-party developers
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech
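The pre-train-then-personalize recipe the paper describes can be made concrete with a toy example: start from a parameter that meta-learning has already placed close to what new tasks need, then take a few gradient steps on a handful of samples from the new task. The sketch below is a hypothetical one-parameter Python illustration, nothing like the paper's actual generator and discriminator networks; it only shows why a few steps suffice when the initialization is good.

```python
def few_shot_adapt(w_init, samples, lr=0.1, steps=5):
    """Fine-tune a single weight w on a few (x, y) pairs from the
    new task, starting from a meta-learned initialization."""
    w = w_init
    for _ in range(steps):
        # Gradient of the mean squared error of the model y = w * x
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w

# The new "person" (task) has true weight 2.0; meta-learning is assumed
# to have produced an initialization already close to typical tasks.
few_samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
adapted = few_shot_adapt(1.8, few_samples)
print(round(adapted, 3))  # 2.0 after only 5 gradient steps
```

The same intuition scales up in the paper: meta-learning supplies the shared initialization, and only a small person-specific adaptation remains.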


G Suite administrators' passwords were unhashed for 14 years, notifies Google

Vincy Davis
22 May 2019
3 min read
Today, Google notified its G Suite administrators that some of their passwords had been stored in an encrypted internal system unhashed, i.e., in plaintext, since 2005. Google states that the error has been fixed and that the issue had no effect on free consumer Google accounts.

In 2005, Google provided G Suite domain administrators with tools to set and recover passwords. The tool enabled administrators to upload or manually set user passwords for their company's users, to help onboard new users with their account information on their first day of work and to support account recovery. However, it led to the admin console storing a copy of the unhashed password. Google has made it clear that these unhashed passwords were stored in a secure, encrypted infrastructure.

Google is now working with enterprise administrators to ensure that users reset their passwords. It is also conducting a thorough investigation and has assured users that no evidence of improper access or misuse of the affected passwords has been identified so far. G Suite has around 5 million users; out of an abundance of caution, Google will also reset the accounts of those who have not done so themselves.

Additionally, Google has admitted to another mishap. In January 2019, while troubleshooting new G Suite customer sign-up flows, Google discovered that it had accidentally stored a subset of unhashed passwords. Google claims these unhashed passwords were stored for only 14 days, again in a secure encrypted infrastructure. This issue has also been fixed, and no evidence of improper access or misuse of the affected passwords has been found. In the blogpost, Suzanne Frey, VP of Engineering and Cloud Trust, gives a detailed account of how Google stores passwords for consumers and G Suite enterprise customers.

Google is the latest company to admit storing sensitive data in plaintext. Two months ago, Facebook admitted to having stored the passwords of hundreds of millions of its users in plain text, including the passwords of Facebook Lite, Facebook, and Instagram users.

Read More: Facebook accepts exposing millions of user passwords in a plain text to its employees after security researcher publishes findings

Last year, Twitter and GitHub admitted to similar security lapses.

https://twitter.com/TwitterSupport/status/992132808192634881
https://twitter.com/BleepinComputer/status/991443066992103426

Users are shocked that it took Google 14 long years to identify this error. Others wonder: if even a giant like Google cannot secure its passwords in 2019, what can be expected from other companies?

https://twitter.com/HackingDave/status/1131067167728984064

A user on Hacker News comments, "Google operates what is considered, by an overwhelming majority of expert opinion, one of the 3 best security teams in the industry, likely exceeding in so many ways the elite of some major world governments. And they can't reliably promise, at least not in 2019, never to accidentally durably log passwords. If they can't, who else can? What are we to do with this new data point? The issue here is meaningful, and it's useful to have a reminder that accidentally retaining plaintext passwords is a hazard of building customer identity features. But I think it's at least equally useful to get the level set on what engineering at scale can reasonably promise today."

To know more about this news in detail, head over to Google's official blog.

Read next:
Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
As US-China tech cold war escalates, Google revokes Huawei's Android support, allows only those covered under open source licensing
Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model
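For contrast with the plaintext storage described above, here is a minimal sketch of how passwords are normally stored: only a random salt and a slow key-derivation digest are kept, never the password itself. This uses Python's standard library (PBKDF2 via hashlib); it is an illustration, not a statement about Google's internal systems, and production code would use a vetted library with carefully tuned parameters.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these two values are ever stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest from the supplied password and compare
    in constant time to avoid timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because only the salt and digest are stored, a leak of the password database does not directly reveal any password, which is exactly the property plaintext storage forfeits.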

Wolfram Engine is now free for developers

Vincy Davis
22 May 2019
3 min read
Yesterday in a blogpost, Stephen Wolfram announced the launch of a free Wolfram Engine for developers. The Wolfram Engine runs on any standard platform, including Linux, Mac, Windows, and Raspberry Pi, and can be used directly in a script or from the command line. It also has access to the whole Wolfram Knowledgebase through a free basic subscription to the Wolfram Cloud.

"The Wolfram Engine is the heart of all our products," says Wolfram. The Wolfram Engine implements the full Wolfram Language as a software component that can immediately be plugged into any standard software engineering stack. The Wolfram Language is a powerful system for interactive computing as well as for R&D, education, and data science, and it is increasingly used as a key component in building production software systems. The language has 5000+ functions, covering visualization, machine learning, numerics, image computation, and much more. It also carries a lot of real-world knowledge, particularly in geographic, medical, cultural, engineering, and scientific domains.

The Wolfram Language has increasingly been used inside large-scale software projects. Wolfram added, "Sometimes the whole project is built in Wolfram Language. Sometimes Wolfram Language is inserted to add some critical computational intelligence, perhaps even just in a corner of the project."

The free Wolfram Engine for developers makes the Wolfram Language available to any software developer and will help teams build systems that take full advantage of its computational intelligence. Wolfram concludes the blogpost stating, "We've worked hard to make the Free Wolfram Engine for Developers as easy to use and deploy as possible."

Many developers have welcomed the free availability of the Wolfram Engine.

https://twitter.com/bc238dev/status/1130868201129107456

A user on Hacker News states, "I'm excited about this change. I wish it had happened sooner so it could have had more of an impact. It certainly put Wolfram Engine back on my radar." Another user plans to take advantage of it by "using Mathematica (and its GUI) on a Raspberry Pi to explore and figure out how to do what you want to do, but then actually run it in Wolfram Engine on a more powerful computer."

To know more details about the news, head over to Stephen Wolfram's blog.

Read next:
Software developer tops the 100 Best Jobs of 2019 list by U.S. News and World Report
Key trends in software development in 2019: cloud native and the shrinking stack
18 people in tech every programmer and software engineer needs to follow in 2019


Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store!

Bhagyashree R
22 May 2019
2 min read
Yesterday, the Tor team announced the release of Tor Browser 8.5, which marks the first stable release for Android. Tor Browser 8.5 was also released for other platforms, with more accessible security settings and a revamped look.

https://twitter.com/torproject/status/1130891728444121089

The first alpha version of Tor Browser for Android came out in September last year. After almost 8 months in alpha testing, this version aims to give phone users the same level of security and privacy that desktop users already enjoy. Announcing the release, the team wrote, "Tor Browser 8.5 is the first stable release for Android. Since we released the first alpha version in September, we've been hard at work making sure we can provide the protections users are already enjoying on the desktop to the Android platform."

The browser ensures security by preventing proxy bypasses. It comes with first-party isolation to protect users from cross-site tracking, and fingerprinting defenses to prevent digital fingerprinting. Though the Android version ships with these security features, it does lack some desktop features, which will arrive in subsequent releases.

Across all platforms, this version improves the accessibility of the security slider: earlier it was hidden behind the Torbutton menu, which made it difficult to reach. Tor Browser also gets a few cosmetic changes; the user interface now resembles Firefox's Photon UI, and the logos have been redesigned.

The team further shared that iOS, the other most popular mobile operating system, will not be getting Tor Browser any time soon, as the platform is too restrictive. Users can instead use the Onion Browser.

Read also: Understand how to access the Dark Web with Tor Browser [Tutorial]

You can download Tor Browser 8.5 from the Tor Browser download page and distribution directory. The Android version is also available on the Google Play Store. Read the full announcement on Tor's official website.

Read next:
Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features
Firefox 67 will come with faster and reliable JavaScript debugging tools
Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs


Google and Facebook allegedly pressured and “arm-wrestled” EU expert group to soften European guidelines for fake news: Open Democracy Report

Fatema Patrawala
22 May 2019
6 min read
Yesterday Open Democracy reported on how Investigate Europe has revealed, that the EU’s instruments against disinformation remained largely ineffective. A day before the European elections, tech giants, Facebook and Google have been alleged to sabotage the designing of EU regulation for fake news and disinformation. According to a new testimony which Investigate Europe collected from insiders, Google and Facebook pressured and “arm-wrestled” a group of experts to soften European guidelines on online disinformation and fake news. The EU’s expert group met last year as a response to the spread of fake news and disinformation seen in the Brexit referendum and in the US election of President Donald Trump in 2016. The task for the experts was to help prevent the spread of disinformation, particularly at the time of European parliamentary elections now. It was in March last year that the expert group’s report was published and then the same year in September the  EU Code of Practice on Disinformation was announced where the platforms had agreed to self-regulate following common standards. The European Union pushed platforms like Google, Facebook and Twitter to sign a voluntary Code of Practice. The tech companies did commit themselves to name their  advertising clients and to act against fake accounts, ie false identities on their platforms. They had also agreed to investigate spread of disinformation and fake news on the platforms. In addition, representatives from Facebook, Google and Twitter had also agreed to submit monthly reports to the EU Commissioners. "It's the first time in the world that companies have voluntarily agreed to self-regulatory measures to combat misinformation," the commission proclaimed. The expert group confirmed to Investigate Europe that Facebook and Google representatives undermined the work and opposed the proposals to be more transparent about their business models. 
During the group's third meeting in March 2018, "there was heavy arm-wrestling in the corridors from the platforms to conditionalise the other experts," says one member of the group, speaking on condition of anonymity. Another member, Monique Goyens, director-general of BEUC, says, "We were blackmailed." Goyens added, "We wanted to know whether the platforms were abusing their market power." In response, Facebook's chief lobbyist, Richard Allan, told her: "We are happy to make our contribution, but if you go in that direction, we will be controversial." He also threatened the expert group members, saying that if they did not stop talking about competition tools, Facebook would stop its support for journalistic and academic projects.

Google influenced and bribed the expert group members

Goyens added that Google did not have to fight as hard, as it had influenced group members in other ways. Ten organisations with representatives in the expert group received money from Google. One of them was the Reuters Institute for the Study of Journalism at the University of Oxford; by 2020, the institute will have received almost €10m from Google to pay for its annual Digital News Report. A number of other organisations represented on the group also received funding from the Google Digital News Initiative, including the Poynter Institute and First Draft News.

Ska Keller, the German MEP, said, "It's been known for some time that Google, Facebook and other tech companies give money to academics and journalists. There is a problem because they can use the threat of stopping this funding if these academics or journalists criticise them in any reporting they do."

Code of practice was not delivered as strongly as it was laid down

A year later, the code of conduct with the platforms is no more than voluntary.
The platforms agreed to take stronger action against fake accounts, to give preference to trustworthy sources and to make this transparent to their users, but progress has been limited. Criticism of the code of practice came from a 'Sounding Board' convened by the European Commission to track the proposals drawn up in response to the expert group's report. The Sounding Board, which included representatives from media, civil society and academia, said that the code of practice "contains no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor process, and no compliance or enforcement tool. It is by no means self-regulation, and therefore the platforms, despite their efforts, have not delivered a code of practice."

"More systematic information is needed for the Commission to assess the efforts deployed by the online platforms to scrutinise the placement of ads and to better understand the effectiveness of the actions taken against bots and fake accounts," four commissioners said in a statement issued in March.

Goyens concluded, "The code of conduct was total crap. It was just a fig leaf. The whole thing was a rotten exercise. It was just taking more time, extending more time."

However, there have been reactions on Twitter suggesting the story might itself be disinformation. One Twitter user noted that Facebook and Google opposed sharing their content-ranking algorithms, and that it is unclear how sharing them would help the EU fight disinformation. https://twitter.com/Je5usaurus_Rex/status/1130757849926197249

Discussions on Hacker News, meanwhile, revolve around placing fake news in the context of the history of propaganda and censorship. One user commented, "The thing that makes me very concerned is the lack of context in these discussions of fake news of the history and continuing use of government propaganda.
I hope young people who are familiar with "fake news" but not necessarily as familiar with the history of propaganda and censorship will study that history. The big issue is there is enthusiasm for censorship, and the problem with censorship is who gets to decide what is real information and what is fake. The interests with the most power will have more control over information as censorship increases. Because the same power that is supposedly only used to suppress propaganda from some other country is used to suppress internal dissent or criticism. This is actually very dangerous for multiple reasons. One big reason is that propaganda (internal to the country, i.e. by the same people who will be deciding what is fake news) is usually critical in terms of jump-starting and maintaining enthusiasm for wars."

Facebook again, caught tracking Stack Overflow user activity and data
Facebook bans six toxic extremist accounts and a conspiracy theory organization
As US-China tech cold war escalates, Google revokes Huawei's Android support, allows only those covered under open source licensing


Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Microsoft launched Service Mesh Interface (SMI), which defines a set of common and portable APIs. It is an open project started in partnership with Microsoft, HashiCorp, Linkerd, Solo.io, Kinvolk, and Weaveworks, with support from Aspen Mesh, Docker, Canonical, Pivotal, Rancher, Red Hat, and VMware. SMI provides developers with interoperability across different service mesh technologies, including Linkerd, Istio, and Consul Connect.

The need for service mesh technology

Previously, not much attention was given to network architecture; organizations focused on making applications smarter instead. But now, while dealing with microservices, containers, and orchestration systems like Kubernetes, engineering teams face the problem of securing, managing, and monitoring a growing number of network endpoints. Service mesh technology addresses this by making the network smarter: it pushes this logic into the network, controlled by a separate set of management APIs, freeing engineers from having to teach every service to encrypt sessions, authorize clients, and emit reasonable telemetry.

Key features of Service Mesh Interface (SMI)

It provides a standard interface for meshes on Kubernetes. It comes with a basic feature set for common mesh use cases. It provides flexibility to support new mesh capabilities. It applies policies like identity and transport encryption across services. It captures key metrics like error rate and latency between services. It shifts and weighs traffic between different services.
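The last feature, weighted traffic shifting, can be illustrated with a small sketch. This is purely conceptual Python, not the SMI API itself (real SMI traffic splits are declared as Kubernetes TrafficSplit resources in YAML); the backend names are invented for illustration:

```python
# Conceptual sketch of SMI-style weighted traffic splitting. A real mesh
# routes individual requests probabilistically; here we just compute the
# per-backend allocation that the configured weights imply.

def split_traffic(requests, backends):
    """Distribute `requests` across weighted backends using the
    largest-remainder method, so the totals match the weights exactly."""
    total_weight = sum(backends.values())
    shares = {name: requests * w / total_weight for name, w in backends.items()}
    allocation = {name: int(share) for name, share in shares.items()}
    # Hand out any leftover requests to the largest fractional remainders.
    remaining = requests - sum(allocation.values())
    by_remainder = sorted(shares, key=lambda n: shares[n] - allocation[n], reverse=True)
    for name in by_remainder[:remaining]:
        allocation[name] += 1
    return allocation

# A 90/10 canary split over 1000 requests:
print(split_traffic(1000, {"reviews-v1": 90, "reviews-v2": 10}))
# → {'reviews-v1': 900, 'reviews-v2': 100}
```

In a mesh, the same weights would steer live traffic between service versions, letting operators roll out a canary gradually without touching application code.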
William Morgan, Linkerd maintainer, said, "SMI is a big step forward for Linkerd's goal of democratizing the service mesh, and we're excited to make Linkerd's simplicity and performance available to even more Kubernetes users."

Idit Levine, founder and CEO of Solo.io, said, "The standardization of interfaces are crucial to ensuring a great end user experience across technologies and for ecosystem collaboration. With that spirit, we are excited to work with Microsoft and others on the SMI specification and have already delivered the first reference implementations with the Service Mesh Hub and SuperGloo project."

To know more about this news, check out Microsoft's blog post.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership
Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered

Facebook releases Pythia, a deep learning framework for vision and language multimodal research

Amrata Joshi
22 May 2019
2 min read
Yesterday, the team at Facebook released Pythia, a deep learning framework that supports multitasking in vision and language multimodal research. Pythia is built on the open-source PyTorch framework and enables researchers to easily build, reproduce, and benchmark AI models. https://twitter.com/facebookai/status/1130888764945907712

It is designed for vision and language tasks, such as answering questions related to visual data and automatically generating image captions. The framework also incorporates elements of Facebook's winning entries in recent AI competitions, including the VQA Challenge 2018 and VizWiz Challenge 2018.

Features of Pythia

Reference implementations: Pythia includes reference implementations that show how previous state-of-the-art models achieved related benchmark results.
Performance gauging: It helps in gauging the performance of new models.
Multitasking: Pythia supports multitasking and distributed training.
Datasets: It includes built-in support for various datasets, including VizWiz, VQA, TextVQA, and VisualDialog.
Customization: Pythia supports custom losses, metrics, scheduling, optimizers, and TensorBoard visualization to fit users' needs.
Unopinionated: Pythia is unopinionated about the dataset and model implementations built on top of it.

The goal of the team behind Pythia is to accelerate progress on AI models and their results, and to make it easier for the AI community to build on, and benchmark against, successful systems. The team hopes that Pythia will also help researchers develop adaptive AI that synthesizes multiple kinds of understanding into a more context-based, multimodal understanding. The team also plans to continue adding tools, datasets, tasks, and reference models.

To know more about this news, check out the official Facebook announcement.
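To give a flavor of what a vision-and-language model does, here is a toy pure-Python sketch of late fusion for a VQA-style task. This is not Pythia's API (Pythia models are PyTorch modules), and every name, feature value, and weight below is made up for illustration:

```python
# Toy illustration of multimodal late fusion for visual question answering:
# concatenate image and text feature vectors, then score answer candidates
# with a linear layer (one weight vector per candidate answer).

def fuse(image_features, text_features, answer_weights):
    """Return the highest-scoring answer for the fused features."""
    combined = image_features + text_features  # list concatenation
    scores = {}
    for answer, w in answer_weights.items():
        assert len(w) == len(combined)
        scores[answer] = sum(x * wi for x, wi in zip(combined, w))
    # Pick the best answer, as a VQA classification head would via argmax.
    return max(scores, key=scores.get)

image = [0.9, 0.1]   # pretend detector outputs: "dog-like", "cat-like"
text = [1.0, 0.0]    # pretend encoding of the question "what animal is this?"
weights = {"dog": [1.0, 0.0, 0.5, 0.0], "cat": [0.0, 1.0, 0.5, 0.0]}
print(fuse(image, text, weights))  # → dog
```

A real framework like Pythia learns these weights from data and uses far richer encoders, but the structural idea of combining modalities before a shared prediction head is the same.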
Facebook tightens rules around live streaming in response to the Christchurch terror attack
Facebook again, caught tracking Stack Overflow user activity and data
Facebook bans six toxic extremist accounts and a conspiracy theory organization


Mozilla makes Firefox 67 “faster than ever” by deprioritizing least commonly used features

Bhagyashree R
22 May 2019
3 min read
Yesterday, Mozilla announced the release of Firefox 67. For this version, the main focus of the Mozilla community has been to make Firefox "faster than ever" and to bring more privacy controls to users. The updates include deprioritizing the least commonly used features, suspending unused tabs, blocking fingerprinting and cryptomining, and more. https://www.youtube.com/watch?v=NzqJ09_cn28

New updates in Firefox 67

Performing tasks at an optimal time by deprioritizing least commonly used features

To improve the browsing experience, Mozilla identified the least commonly used features whose work could be delayed until the page has loaded. The updates include delaying setTimeout, a JavaScript method used for timing events, to give more priority to executing scripts for the things users want to see first. Delaying setTimeout for certain features has helped the main scripts of sites like Instagram, Amazon, and Google search execute 40-80% faster. This performance boost also comes from the browser now scanning for alternative style sheets only once the page is loaded, and loading the autofill module only if there is a form to complete.

Suspending unused tabs to prevent computer slowdown

We are all guilty of opening a large number of tabs, which eventually slows down our computers. With this release, Firefox can detect when available memory falls below 400 MB and suspend unused tabs. If you want to visit a suspended page again, just click on the tab and it will be reloaded where you left off.

Fighting online tracking by blocking known cryptominers and fingerprinters

Last year in August, Mozilla announced that it would introduce a series of features in Firefox to prevent online tracking. Living up to that promise, it has introduced a new feature through which you can block fingerprinting and cryptomining.
Browser fingerprinting is the technique of collecting various pieces of device-specific information through a web browser to build a device fingerprint for identification. Cryptomining here refers to generating cryptocurrency by running a script on someone else's PC, which slows down the computer and drains its battery. To use this feature, navigate to Preferences | Privacy & Security | Content Blocking, select Custom, and check "Cryptominers" and "Fingerprinters" so that both are blocked. Alternatively, click on the "i" icon in the address bar and, under Content Blocking, click on the Custom gear on the right side.

Source: Mozilla

Private browsing gets the convenience of normal browsing

Private browsing prevents websites from tracking your online activity, to some extent, by automatically erasing your browsing information as soon as the session is over. Along with better online privacy, you will now be able to enjoy some of the convenience of a typical Firefox experience: you can access saved passwords and enable or disable your web extensions.

Along with these improved user-facing features, this release also comes with faster and more reliable JavaScript debugging tools for web developers. Visit the Mozilla Blog to know more in detail.

Mozilla's updated policies will ban extensions with obfuscated code
Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
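The deprioritization idea described earlier (run critical work immediately, defer low-priority work until the page has loaded) can be sketched conceptually. This is illustrative Python, not Firefox's actual scheduler; the class and task names are invented:

```python
# Conceptual sketch of deferring low-priority browser work until after the
# page load event, in the spirit of Firefox 67's setTimeout deprioritization.
from collections import deque

class PageLoadScheduler:
    def __init__(self):
        self.ready = deque()      # high-priority tasks: run immediately
        self.deferred = deque()   # low-priority tasks: run after load
        self.loaded = False

    def schedule(self, task, low_priority=False):
        # Before load, low-priority work is parked; after load it runs normally.
        (self.deferred if low_priority and not self.loaded else self.ready).append(task)

    def run_ready(self):
        results = []
        while self.ready:
            results.append(self.ready.popleft()())
        return results

    def fire_load_event(self):
        # Page finished loading: promote the deferred work (autofill scans,
        # alternative style sheet checks, delayed timer callbacks, ...).
        self.loaded = True
        self.ready.extend(self.deferred)
        self.deferred.clear()

sched = PageLoadScheduler()
sched.schedule(lambda: "render main script")
sched.schedule(lambda: "scan for autofill forms", low_priority=True)
print(sched.run_ready())   # only the critical task runs before load
sched.fire_load_event()
print(sched.run_ready())   # the deferred work runs once the page has loaded
```

The payoff is the same one Mozilla describes: the script the user is waiting for is never queued behind housekeeping work.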


12,000+ unsecured MongoDB databases deleted by Unistellar attackers

Vincy Davis
21 May 2019
3 min read
Over the last three weeks, more than 12,000 unsecured MongoDB databases have been deleted. The cyber-extortionists have left only an email contact, most likely to negotiate the terms of data recovery. Attackers use search engines like BinaryEdge or Shodan to find exposed database servers, delete their contents, and usually demand a ransom for their 'restoration services'.

MongoDB is no stranger to such attacks: in September 2017, MongoDB databases were hacked for ransom. Earlier this month, Security Discovery researcher Bob Diachenko found an unprotected MongoDB database that exposed 275M personal records of Indian citizens. The records contained detailed personally identifiable information such as name, gender, date of birth, email, and mobile phone number, and were left exposed and unprotected on the Internet for more than two weeks. https://twitter.com/MayhemDayOne/status/1126151393927102464

The latest attack on MongoDB databases was discovered by Sanyam Jain, an independent security researcher. Jain first noticed the attacks on April 24, when he discovered a wiped MongoDB database. Instead of large quantities of leaked data, he found a note stating: "Restore ? Contact : [email protected]". It was later discovered that the cyber-extortionists left ransom notes asking victims to get in touch if they want their data restored, providing two email addresses: [email protected] or [email protected].

The attackers are believed to have automated the process of finding and wiping databases at this scale. The script or program used to connect to publicly accessible MongoDB databases appears to be configured to indiscriminately delete every unsecured MongoDB instance it finds and then add it to the ransom table.
In a statement to Bleeping Computer, Sanyam Jain said, "the Unistellar attackers seem to have created restore points to be able to restore the databases they deleted". Bleeping Computer has stated that there is no way to track whether victims have been paying to have their databases restored, because Unistellar provides only an email address and no cryptocurrency address. Bleeping Computer also tried to get in touch with Unistellar to confirm whether the wiped MongoDB databases are indeed backed up and whether any victims have already paid for the "restoration services", but got no response.

How to secure MongoDB databases

These attacks succeed because the databases are left remotely accessible and access to them is not properly secured. Such frequent attacks highlight the need for effective data protection, which is achievable with fairly simple steps: enable authentication and do not allow databases to be remotely accessible. MongoDB provides a detailed Security manual covering features such as authentication, access control, and encryption for securing MongoDB deployments. There is also a Security Checklist for administrators, which discusses enforcing authentication, enabling role-based access control, encrypting communication, limiting network exposure, and more.

To know more about this news in detail, head over to Bleeping Computer's complete coverage.

MongoDB is going to acquire Realm, the mobile database management system, for $39 million
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process
GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL
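The hardening steps above can be made concrete with a small sketch. The host, user, and password below are illustrative, and the config checker covers only the two misconfigurations exploited in these attacks; it assumes a mongod-style config dict with `security.authorization` and `net.bindIp` keys:

```python
# A minimal sketch of the hardening advice: build an authenticated MongoDB
# connection URI, and sanity-check a server config for the two classic
# mistakes (no authentication, listening on all interfaces).
from urllib.parse import quote_plus

def build_uri(user, password, host="127.0.0.1", port=27017, auth_db="admin"):
    # Credentials are URL-encoded so special characters survive the URI.
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/?authSource={auth_db}")

def check_config(config):
    """Flag the misconfigurations behind these mass-deletion attacks."""
    problems = []
    if config.get("security", {}).get("authorization") != "enabled":
        problems.append("authorization disabled")
    if config.get("net", {}).get("bindIp", "0.0.0.0") != "127.0.0.1":
        problems.append("listening on a public interface")
    return problems

print(build_uri("app_user", "s3cret!"))
print(check_config({"net": {"bindIp": "0.0.0.0"}, "security": {}}))
# → ['authorization disabled', 'listening on a public interface']
```

Binding to localhost only is the simplest fix; deployments that must be reachable over the network should combine authentication with firewall rules and TLS, as the MongoDB Security Checklist recommends.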

Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU

Fatema Patrawala
21 May 2019
6 min read
Amazon's facial recognition business will suffer a significant setback this week if privacy and civil liberties advocates and shareholders get their way. Yesterday, the ACLU wrote in an open letter to Amazon shareholders that they hold the power to stop Amazon's deployment of its invasive face surveillance technology, Rekognition. https://twitter.com/Matt_Cagle/status/1130586385595789312

This year in January, shareholders proposed a resolution to limit the sale of Amazon's facial recognition tech, called Rekognition, to law enforcement and government agencies. The technology was found to be biased and inaccurate, and is regarded as an enabler of racial discrimination against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has also pitched it to Immigration and Customs Enforcement.

The first proposal asks the Board of Directors to stop sales of "Rekognition" — Amazon's face surveillance technology — to the government. The second demands an independent review of its human and civil rights impacts, particularly for people of color, immigrants, and activists, who have always been disproportionately impacted by surveillance. The ACLU's letter backs the measures and calls on shareholders to pass these resolutions.

Amazon's non-responsiveness and its failure to act

The ACLU's letter will be presented at Amazon's annual shareholder meeting on Wednesday, where the ACLU plans to accuse Amazon of "failing to act responsibly" by refusing to stop the sale of the technology to the government. "Amazon has stayed the course," the letter states. "Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers, let alone address them."
"This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people," the letter says. "It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person's emotions or state of mind."

"As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities," the letter added. "Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations."

Facial recognition has become one of the most hot-button privacy topics in years. Recently, Amnesty International in Canada raised serious privacy issues about Google's Sidewalk Labs project in Toronto, citing concerns about the project's potential to normalize mass surveillance and threaten human rights.

While Amazon Rekognition, a cloud-based facial recognition system, remains in its infancy, it is one of the most prominent systems available. But critics say the technology is flawed. Exactly a year before this week's shareholder meeting, the ACLU first raised "profound" concerns with Rekognition and its deployment at airports, in public places, and by police. Since then, the technology has been shown to struggle to detect people of color: in the ACLU's test, the system falsely matched 28 members of Congress against a mugshot database of people who had been arrested.
This latest move is a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups who decry the use of facial recognition technology. In March, top AI researchers, including this year's Turing Award winner Yoshua Bengio, issued a joint statement calling on Amazon Web Services to stop all sales of its Rekognition facial recognition technology to law enforcement agencies.

There has been pushback from government as well. Several municipalities have rolled out surveillance-curtailing laws and ordinances this year. Last week, San Francisco became the first major U.S. city government to ban the use of facial recognition. In April, the Oakland Privacy Advisory Commission released two key documents aimed at protecting Oaklanders' privacy: a proposed ban on facial recognition and the City of Oakland Privacy Principles.

"Amazon leadership has failed to recognize these issues," said the ACLU's letter. "This failure will lead to real-life harm." The ACLU said shareholders "have the power to protect Amazon from its own failed judgment."

Miles Brundage, a research scientist at OpenAI, commented that every facial recognition expert agrees Amazon is behaving irresponsibly, yet it continues; he says Amazon is making a big, misguided bet on first-mover advantages in this area. https://twitter.com/Miles_Brundage/status/1130504230110941184

Amazon has pushed back against the claims by arguing that the technology is accurate, and has largely criticized how the ACLU conducted its tests using Rekognition. https://twitter.com/natashanyt/status/1130470342974222336

Amazon states, "Machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it's applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza.
It is a very reasonable idea, however, for the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work."

It is interesting that Amazon compares facial recognition tech with an oven to justify its decision to sell the tech while transferring accountability to the buyer and user. Unlike an oven, which has a narrow range of applications and clear liabilities for misuse (say, willfully causing harm to another person), facial recognition applications are wide and varied, and no clear regulations or accountability boundaries exist for harms caused with them. In the absence of clear laws and regulations, enabling state surveillance and control with facial recognition products used by law enforcement and governing bodies can set a dangerous precedent, curtail individual freedoms, and exacerbate socio-economic inequalities. Hence, it is not only important but essential for civil liberty groups like the ACLU to urge Amazon and its shareholders to act quickly.

Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks
Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake


GitLab goes multicloud using Crossplane with kubectl

Savia Lobo
21 May 2019
3 min read
GitLab announced yesterday that it is being deployed across multiple clouds via Crossplane, an open source multi-cloud control plane sponsored by Upbound. Yesterday, the Crossplane community also demonstrated the entire process of deploying GitLab across multiple clouds. In early December last year, GitLab was announced as the first complex app to be deployed on Crossplane.

Crossplane follows established Kubernetes patterns, such as persistent volume claims, to support a clean separation of concerns between application and infrastructure owners. It also provides a self-service model for managed services entirely within the Kubernetes API. With Crossplane, real-world application deployments from kubectl become easily accessible, with enhanced support for composing external fully-managed services including Redis, PostgreSQL, and object storage. "We've been working with GitLab to validate our approach and are proud to unveil the deployment of GitLab to multiple clouds entirely with kubectl using Crossplane, including the use of fully-managed services offered by the respective cloud providers", the official Crossplane blog mentions.

Deployment of GitLab with external managed services using kubectl

Crossplane extends the Kubernetes API by adding resource claims and resource classes to support composability of managed service dependencies in Kubernetes, similar to persistent volume claims and storage classes. Crossplane can easily be added to any existing Kubernetes cluster and layers neatly on top of clusters provisioned by Anthos, EKS, AKS, and OpenShift. Cluster administrators install Crossplane on a Kubernetes cluster, set cloud credentials, and specify which managed services they want to make available for self-service provisioning within the cluster. Policies guide binding to specific managed service offerings configured by the cluster administrator.
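The claim/class pattern described above can be sketched in miniature. This is conceptual Python, not the Kubernetes or Crossplane API; the class names and service mappings are invented for illustration:

```python
# Conceptual sketch of Crossplane's resource claim / resource class pattern,
# modeled on PersistentVolumeClaims and StorageClasses: app owners make an
# abstract claim, and admin-configured classes resolve it to a concrete
# managed service on some cloud.

RESOURCE_CLASSES = {
    # Cluster admins pre-configure which managed services are available.
    "standard-postgres": {"provider": "gcp", "service": "CloudSQL"},
    "standard-redis": {"provider": "aws", "service": "ElastiCache"},
}

def bind_claim(claim):
    """Resolve an application's abstract claim to the concrete managed
    service configured by the cluster administrator."""
    cls = RESOURCE_CLASSES.get(claim["classRef"])
    if cls is None:
        raise KeyError(f"no resource class {claim['classRef']!r}")
    return {"claim": claim["name"], **cls}

# An app owner asks for "a Postgres database" with no cloud-specific details:
print(bind_claim({"name": "gitlab-db", "classRef": "standard-postgres"}))
# → {'claim': 'gitlab-db', 'provider': 'gcp', 'service': 'CloudSQL'}
```

The point of the indirection is that swapping clouds means editing the admin's class table, not the application's claims.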
With this, application owners can consume and compose these managed services on demand using familiar Kubernetes patterns, without having to know the infrastructure details or manage credentials. For production deployments, GitLab recommends using external managed services for Redis, PostgreSQL, and object storage. Crossplane supports composability of both out-of-cluster public cloud managed services (GCP, AWS, Azure) and in-cluster managed services like those provided by Rook, a storage orchestrator for in-cluster cloud-native storage including Ceph, Minio, and Cassandra.

Bassam Tabbara, CEO of Upbound and a maintainer on Crossplane, said, "We're showing a real-world example of the future of multi-cloud today. GitLab is a production application that relies on multiple fully-managed services, so by abstracting these services and integrating them with the declarative Kubernetes API, we are demonstrating the ability to standardize on a single declarative API to manage it all."

To know more about Crossplane and the steps to deploy GitLab to multiple clouds using it, head over to Crossplane's official website.

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
GitLab 11.10 releases with enhanced operations dashboard, pipelines for merged results and much more!
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note