
Tech News

3709 Articles

Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels

Bhagyashree R
18 Jun 2019
3 min read
Yesterday, security engineers at Netflix reported several TCP networking vulnerabilities in the FreeBSD and Linux kernels. The most serious of these, dubbed “SACK Panic”, allows a remote attacker to trigger a kernel panic on recent Linux kernels.

Details on the TCP networking vulnerabilities

Netflix's security engineers found four vulnerabilities in total, all related to the maximum segment size (MSS) and TCP Selective Acknowledgement (SACK) capabilities. MSS is a parameter in the TCP header of a packet that specifies the maximum amount of data a computer can receive in a single TCP segment. SACK is a mechanism that lets the data receiver inform the sender which segments have arrived successfully.

Soon after, Red Hat also listed the vulnerabilities, background, and patches on its website and credited Netflix for reporting them. According to Red Hat, the impact of these vulnerabilities is limited to denial of service. “No privilege escalation or information leak is currently suspected,” Red Hat wrote in its post.

The reported vulnerabilities are:

SACK Panic (CVE-2019-11477): The most severe of the four. An attacker can induce an integer overflow by sending a crafted sequence of SACKs on a TCP connection with a small MSS value, leading to a kernel panic from which the operating system cannot easily recover. This forces a restart and hence causes a denial of service. The vulnerability affects Linux 2.6.29 and later.

SACK Slowness (CVE-2019-11478 and CVE-2019-5599): The TCP retransmission queue in the Linux kernel and the RACK send map in FreeBSD can be fragmented by sending a crafted sequence of SACKs. An attacker can then exploit this fragmented queue to cause “an expensive linked-list walk for subsequent SACKs received” on that particular TCP connection. The issue affects Linux 4.15 and earlier, and FreeBSD 12 using the RACK TCP stack.

Excess Resource Consumption Due to Low MSS Values (CVE-2019-11479): An attacker can force a Linux kernel to split its responses into multiple TCP segments, each carrying only 8 bytes of data. Sending the same amount of data then requires far more bandwidth and consumes additional resources such as CPU and NIC processing power. This issue affects all Linux versions.

Next steps

The Netflix team lists patches and workarounds for each vulnerability in the official report. Red Hat recommends two options to mitigate CVE-2019-11477 and CVE-2019-11478: disabling the vulnerable component (selective acknowledgments), or using iptables to drop connections with an MSS low enough to exploit the vulnerability. Red Hat will make a 'kpatch' available for customers running supported versions of Red Hat Enterprise Linux 7 or greater, and recommends that customers on affected versions update as soon as the errata become available. Red Hat has also provided an Ansible playbook, 'disable_tcpsack_mitigate.yml', which disables selective acknowledgments and makes the change permanent. More information about the mitigation steps is available on Red Hat's official website.
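As a rough illustration of the mitigation Red Hat describes (disabling selective acknowledgments), here is a minimal Python sketch, not taken from the advisories, that checks the current SACK setting on a Linux host via procfs. The proc path is standard, but treat the printed remediation hints as assumptions and follow the vendor guidance for actual mitigation.

```python
# Minimal sketch: check whether TCP SACK is enabled on a Linux host.
# Disabling SACK (net.ipv4.tcp_sack = 0) is one of the documented stopgap
# mitigations for the SACK vulnerabilities, at some cost to TCP performance;
# consult the Netflix/Red Hat advisories before changing it.

from pathlib import Path

TCP_SACK = Path("/proc/sys/net/ipv4/tcp_sack")

def sack_enabled() -> bool:
    """Return True if selective acknowledgments are currently enabled."""
    return TCP_SACK.read_text().strip() == "1"

if __name__ == "__main__":
    if sack_enabled():
        print("TCP SACK is enabled; unpatched kernels may be exposed to SACK Panic.")
        print("Possible mitigations: apply kernel updates, or as a stopgap run")
        print("  sysctl -w net.ipv4.tcp_sack=0")
        print("or drop low-MSS connections with an iptables tcpmss rule.")
    else:
        print("TCP SACK is disabled on this host.")
```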
NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems
Over 19 years of ANU (Australian National University) students’ and staff data breached
PyPI announces 2FA for securing Python package downloads


Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near

Amrata Joshi
18 Jun 2019
5 min read
WSL was a great effort towards emulating a Linux kernel on top of Windows, but due to certain differences between Windows and Linux it was practically impossible to run the Docker Engine and Kubernetes directly inside WSL. The Docker Desktop team therefore developed an alternative solution using Hyper-V VMs and LinuxKit to achieve seamless integration.

On 16th June, Docker announced its plans for WSL 2, which brings a major architecture change: instead of emulation, a real Linux kernel runs inside a lightweight VM. The approach is architecturally similar to LinuxKit and Hyper-V, but WSL 2 has the additional benefit of being more lightweight and more tightly integrated with Windows. The Docker daemon runs properly on it, with good performance. The team further announced that it is working on a new version of Docker Desktop that leverages WSL 2, with a public preview expected in July.

The official blog reads, “We are very excited about this technology, and we are happy to announce that we are working on a new version of Docker Desktop leveraging WSL 2, with a public preview in July. It will make the Docker experience for developing with containers even greater, unlock new capabilities, and because WSL 2 works on Windows 10 Home edition, so will Docker Desktop.”

On the collaboration with Microsoft, the blog reads, “As part of our shared effort to make Docker Desktop the best way to use Docker on Windows, Microsoft gave us early builds of WSL 2 so that we could evaluate the technology, see how it fits with our product, and share feedback about what is missing or broken. We started prototyping different approaches and we are now ready to share a little bit about what is coming in the next few months.”

The future of Docker Desktop will have WSL 2

The team will replace the Hyper-V VM with a WSL 2 integration package. The package will offer the same features as the current Docker Desktop VM, including automatic updates, transparent HTTP proxy configuration, Kubernetes 1-click setup, access to the daemon from Windows, and more. It will contain both the server-side components required to run Docker and Kubernetes and the CLI tools used to interact with those components within WSL.

WSL 2 will enable seamless integration with Linux

With the WSL 2 integration, not only will users experience seamless integration with Windows, but Linux programs running inside WSL will be able to do the same. This has a huge impact for developers working on projects targeting a Linux environment, or with a build process designed for Linux, as there is no longer a need to maintain both Linux and Windows build scripts. For example, a developer at Docker can now work on the Linux Docker daemon on Windows, using the same set of tools and scripts as a developer on a Linux machine.

Bind mounts from WSL will now support inotify events (inotify is a Linux kernel subsystem) and will have almost identical I/O performance to a native Linux machine. This solves one of the major Docker Desktop issues with I/O-heavy toolchains, and will benefit Node.js, PHP, and other web development tools.

Improved performance and reduced memory consumption

The VM has been set up to use dynamic memory allocation and can schedule work across all host CPUs, while consuming only as much memory as it needs, within the limits of what the host can provide. Docker Desktop will leverage this to improve its resource consumption, using CPU and memory according to its needs. CPU- and memory-intensive tasks such as building a container will also run much faster.

Leveraging WSL 2, Docker Desktop will support Windows file bind mounts

One of the major problems users have with Docker Desktop is the reliability of Windows file bind mounts. The current implementation depends on the Samba Windows service, which can be deactivated, blocked by enterprise GPOs, or blocked by third-party firewalls. Docker Desktop with WSL 2 solves these issues by leveraging WSL features to implement bind mounts of Windows files.

A few users seem unhappy with this news; one of them commented on Hacker News, “So, I think the main sticking point here is the lock-in of Hyper-V. By making a new popular feature completely dependent on a technology that explicitly disables the use of competitive hypervisors, they're giving with one hand and taking with the other. If I was on VM-Ware's executive team, I'd be seriously thinking about filing an anti-trust complaint and the open source community should be thinking about whether submarining virtualbox is worth what Microsoft is doing here.”

Others point out that WSL 2 is a full Linux kernel that runs in Hyper-V. Another comment reads, “WSL 2 is a full Linux kernel running in Hyper-V rather than an emulation layer on top of NT.”

To know more about this news, check out the official post by Docker.

How to push Docker images to AWS’ Elastic Container Registry (ECR) [Tutorial]
All Docker versions are now vulnerable to a symlink race attack
Docker announces collaboration with Microsoft’s .NET at DockerCon 2019
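As a quick, hypothetical illustration of the daemon access mentioned above (this uses the Docker SDK for Python, which is not part of this announcement), the following minimal sketch checks that a Docker daemon is reachable from the current environment, for example from a WSL distribution once the WSL 2 based Docker Desktop backend is enabled:

```python
# Minimal sketch: verify that a Docker daemon is reachable and report its
# version. Requires the Docker SDK for Python (pip install docker) and a
# running daemon (e.g. Docker Desktop, whether Hyper-V or WSL 2 based).

import docker

def check_daemon() -> None:
    client = docker.from_env()   # honours DOCKER_HOST or the local socket
    client.ping()                # raises an exception if the daemon is unreachable
    info = client.version()
    print(f"Docker daemon reachable, server version {info['Version']}")

if __name__ == "__main__":
    check_daemon()
```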


Pull Panda is now a part of GitHub; code review workflows now get better!

Amrata Joshi
18 Jun 2019
4 min read
Yesterday, the team at GitHub announced that it has acquired Pull Panda for an undisclosed amount, to help teams create more efficient and effective code review workflows on GitHub.

https://twitter.com/natfriedman/status/1140666428745342976

Pull Panda helps thousands of teams work together on code and improve their process by combining three apps: Pull Reminders, Pull Analytics, and Pull Assigner.

Pull Reminders: Users get a prompt in Slack whenever a collaborator needs a review. Automatic reminders ensure that pull requests aren’t missed.

Pull Analytics: Users get real-time insight and can make data-driven improvements to create a more transparent and accountable culture.

Pull Assigner: Users can automatically distribute code review across their team so that no one gets overloaded and knowledge is spread around.

Pull Panda helps teams ship faster and gain insight into bottlenecks in their process. Abi Noda, the founder of Pull Panda, highlighted the two major pain points that led him to start the company. The first was that on fast-moving teams, pull requests are easily forgotten, which delays code reviews and, in turn, the shipping of new features to customers.

Abi Noda stated in a video, “I started Pull Panda to solve two major pain points that I had as an engineer and manager at several different companies. The first problem was that on fast moving teams, pull requests easily are forgotten about and often slip through the cracks. This leads to frustrating delays in code reviews and also means it takes longer to actually ship new features to your customers.”

https://youtu.be/RtZdbZiPeK8

To solve this problem, the team built Pull Reminders, a GitHub app that automatically notifies the team about pending code reviews. The second problem was that it was difficult to measure and understand a team's development process in order to identify bottlenecks. To address this, the team built Pull Analytics, which provides real-time insight into the software development process. It also highlights the current code review workload across the team, so the team knows who is overloaded and who might be available.

Many customers also discovered that the majority of their code reviews were done by the same small set of people on the team. To solve this, the team built Pull Assigner, which offers two algorithms for automatically assigning reviewers. The first is Load Balance, which equalizes the number of reviews so everyone on the team does a similar amount of review work. The second is Round Robin, which randomly assigns additional reviewers so that knowledge is spread across the team.

Nat Friedman, CEO at GitHub, said, “We'll be integrating everything Abi showed you directly into GitHub over the coming months. But if you're impatient, and you want to get started now, I'm happy to announce that all three of the Pull Panda products are available for free in the GitHub marketplace starting today. So we hope you enjoy using Pull Panda and we look forward to your feedback.”

Pull Panda will no longer offer an Enterprise plan, although existing Enterprise customers can continue to use the on-premises offering. All paid subscriptions have been converted to free subscriptions, and new users can install Pull Panda for their organizations for free from the Pull Panda website or the GitHub Marketplace.
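To make the two assignment strategies concrete, here is a small illustrative Python sketch; it is not Pull Assigner's actual implementation (which is not shown in this post), just the general idea behind a load-balancing picker and a round-robin picker.

```python
# Illustrative sketch of the two reviewer-assignment strategies described
# above; not Pull Assigner's real code.

from itertools import cycle

class LoadBalanceAssigner:
    """Pick the reviewer with the fewest assigned reviews so far."""

    def __init__(self, reviewers):
        self.review_counts = {name: 0 for name in reviewers}

    def assign(self) -> str:
        reviewer = min(self.review_counts, key=self.review_counts.get)
        self.review_counts[reviewer] += 1
        return reviewer

class RoundRobinAssigner:
    """Cycle through reviewers so knowledge spreads across the team."""

    def __init__(self, reviewers):
        self._cycle = cycle(reviewers)

    def assign(self) -> str:
        return next(self._cycle)

if __name__ == "__main__":
    team = ["alice", "bob", "carol"]
    lb, rr = LoadBalanceAssigner(team), RoundRobinAssigner(team)
    for pr in range(5):
        print(f"PR #{pr}: load-balance -> {lb.assign()}, round-robin -> {rr.assign()}")
```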
The official GitHub blog post reads, “We plan to integrate these features into GitHub but hope you’ll start benefiting from them right away. We’d love to hear what you think as we continue to improve how developers work together on GitHub.” To know more about this news, check out GitHub’s post.

GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise


How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results

Fatema Patrawala
18 Jun 2019
3 min read
Have you ever noticed that when you google the lyrics of a song, Google displays them right on its search results card? The lyrics website Genius Media Group Inc. has accused Google of stealing lyrics from its site and reposting them in search results without permission. Genius also claims to have caught Google “red handed” with the help of a Morse-code watermark embedded in its lyrics.

On 16th June, the Wall Street Journal reported that Genius’ web traffic has dropped in recent years as Google has posted lyrics on its search results page in “information boxes” instead of routing users to lyrics sites like Genius. In March, 62 percent of mobile searches on Google did not result in a click-through to another site.

https://twitter.com/WSJ/status/1140201102102732800

Companies like Genius and other lyrics websites depend on search engines like Google to send music lovers to sites that stock hard-to-decipher lyrics of hip-hop songs and other pop hits. While Google posting song lyrics itself is not a crime, Genius claims that Google has been lifting the lyrics directly from Genius without permission and reposting them on the search results page.

Genius backed up its claim by embedding two forms of apostrophes in the lyrics it hosts. Starting in 2016, the team alternated “straight” and “curly” apostrophes in its lyrics so that, when the apostrophes were converted into dots and dashes like Morse code, they spelled out the words “Red Handed.” Using these apostrophes, Genius says it found over 100 instances of Google using Genius’ own lyrics in its search results. The WSJ also published a video showing how Genius caught Google copying the lyrics from its website.

“Over the last two years, we’ve shown Google irrefutable evidence again and again that they are displaying lyrics copied from Genius,” Genius’s chief strategy officer Ben Gross told the Wall Street Journal. “We noticed that Google’s lyrics matched our lyrics down to the character.”

The Wall Street Journal confirmed Genius’ accusations by checking three randomly chosen songs from the list of 100 instances. The songs included Alessia Cara’s “Not Today” as well as Genius’ lyrics for Desiigner’s near-indecipherable “Panda,” whose lyrics the rapper himself submitted to the site.

According to the New York Post, Google has denied the accusations, pointing to its partnership with LyricFind, which provides the search engine with lyrics through a deal with music publishers. “We take data quality and creator rights very seriously and hold our licensing partners accountable to the terms of our agreement,” Google said. Google also issued a second statement saying it is investigating the issue and would terminate its agreements with partners that aren’t “upholding good practices.” “We do not source lyrics from Genius,” LyricFind Chief Executive Darryl Ballantyne said.

Canva faced security breach, 139 million users data hacked: ZDNet reports
Microsoft open sources SPTAG algorithm to make Bing smarter!
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
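To illustrate the watermark idea, here is a short Python sketch. The exact apostrophe-to-Morse mapping Genius used is not spelled out in this article, so the convention below (straight apostrophe = dot, curly apostrophe = dash, one letter per lyric line) is an assumption for demonstration only.

```python
# Toy demonstration of an apostrophe-based Morse watermark, assuming a
# straight apostrophe (') encodes a dot and a curly apostrophe (’) a dash,
# with each lyric line carrying the apostrophes for exactly one letter.

MORSE_TO_LETTER = {".-.": "R", ".": "E", "-..": "D", "....": "H", ".-": "A", "-.": "N"}

def line_to_morse(line: str) -> str:
    """Convert a line's apostrophes into dots and dashes."""
    return "".join("." if ch == "'" else "-" for ch in line if ch in "'’")

def decode_watermark(lines: list[str]) -> str:
    """Decode one watermark letter per lyric line."""
    return "".join(MORSE_TO_LETTER.get(line_to_morse(l), "?") for l in lines)

if __name__ == "__main__":
    # Hypothetical lyric lines whose apostrophe styles spell out a hidden word.
    lyrics = [
        "it's what they’re sayin' now",   # . - .  -> R
        "don't look back",                # .      -> E
        "she’s gone, ain't comin' home",  # - . .  -> D
    ]
    print(decode_watermark(lyrics))       # prints RED
```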


Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Vincy Davis
18 Jun 2019
3 min read
Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows Server 1809/Server 2019 release, is finally available on the Microsoft Evaluation Center. The release comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. It has also been announced that Hyper-V Server 2019 will be available to Visual Studio subscription customers by 19th June 2019.

Microsoft Hyper-V Server is a free product that includes the same core Hyper-V virtualization features as the Datacenter edition. It is ideal for running Linux virtual machines or VDI VMs.

Microsoft had originally released Windows Server 2019, alongside the Windows 10 October 2018 Update, in October 2018. However, it had to pull both the client and server versions of 1809 to investigate reports of users missing files after installing the update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.

Read More: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Early this year, Microsoft made Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though Microsoft provided no official statement, the delay is suspected to have been caused by issues with Remote Desktop Services (RDS). In April, Microsoft officials stated that they had found some issues with the media and would release an update soon.

Now that Hyper-V Server 2019 is finally becoming available, Windows Server 2019 users can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was briefly available are advised to delete it and install the new version once it is released on 19th June 2019.

Users are happy with this news, but are still wondering what took Microsoft so long to ship Hyper-V Server 2019.

https://twitter.com/ProvoSteven/status/1139926333839028224

People are also skeptical about the product quality. A user on Reddit states, “I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were "problems with the media", I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production.”

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world’s largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]


Developers can now incorporate Unity features into native iOS and Android apps

Sugandha Lahoti
18 Jun 2019
2 min read
Yesterday, Unity announced that from Unity 2019.3.a2 onwards, Android and iOS developers will be able to incorporate Unity features into their apps and games. Developers will be able to integrate Unity runtime components and their content (augmented reality, 3D/2D real-time rendering, 2D mini-games, and more) into a native platform project, using Unity as a library.

“We know there are times when developers using native platform technologies (like Android/Java and iOS/Objective C) want to include features powered by Unity in their apps and games,” said J.C. Cimetiere, senior technical product manager for mobile platforms, in a blog post.

How it works

The overall mobile app build process is still the same: Unity creates the iOS Xcode and Android Gradle projects. However, to enable this feature, the Unity team has modified the structure of the generated Xcode and Gradle projects as follows:

A library part – an iOS framework and an Android Archive (AAR) file – that includes all source files and plugins
A thin launcher part that includes the app representation data and runs the library part

Unity has also released step-by-step instructions on how to integrate Unity as a library on iOS and Android, including basic sample projects.

Currently, Unity as a Library supports full-screen rendering only; rendering on only part of the screen is not supported. Loading more than one instance of the Unity runtime is also not supported, and developers need to adapt third-party plugins (native or managed) for them to work properly.

Unity hopes that this integration will boost AR marketing by helping brands and creative agencies easily insert AR directly into their native mobile apps.

Unity Editor will now officially support Linux
Unity has launched the ‘Obstacle Tower Challenge’ to test AI game players
Obstacle Tower Environment 2.0: Unity announces Round 2 of its ‘Obstacle Tower Challenge’ to test AI game players

Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate

Bhagyashree R
18 Jun 2019
5 min read
Last week, a team of machine learning experts published a paper titled “Tackling Climate Change with Machine Learning”. The paper highlights how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate.

https://twitter.com/hardmaru/status/1139340463486320640

The consequences of climate change are becoming more apparent by the day, and one of the most significant is global warming, which is mainly caused by the emission of greenhouse gases. The report suggests that we can mitigate this problem by making changes to existing electricity systems, transportation, buildings, industry, and land use. Adapting to the changing climate requires climate modeling, risk prediction, and planning for resilience and disaster management. The 54-page report lists various ways machine learning can help us mitigate and adapt to greenhouse gas emissions. In this article, we look at how machine learning and deep learning can be used to reduce greenhouse gas emissions from electricity systems.

Electricity systems

A quarter of human-caused greenhouse gas emissions come from electricity systems. To minimize this, we need to switch to low-carbon electricity sources and also take steps to reduce emissions from existing carbon-emitting power plants. Low-carbon electricity sources come in two types: variable and controllable.

Variable sources

Variable sources are those that fluctuate based on external factors; for instance, the energy produced by solar panels depends on sunlight.

Power generation and demand forecasting: Though ML and deep learning methods have been applied to power generation and demand forecasting before, this has mostly been done using domain-agnostic techniques, for instance clustering techniques on households, or game theory, optimization, regression, or online learning to predict disaggregated quantities from aggregate electricity signals. The study suggests that future ML algorithms should incorporate domain-specific insights, building on innovations in climate modeling and weather forecasting and in hybrid physics-plus-ML modeling techniques. These techniques will help improve both short- and medium-term forecasts, and ML models can be used to directly optimize for system goals.

Improving scheduling and flexible demand: ML can play an important role in improving the existing centralized process of scheduling and dispatching by speeding up power system optimization problems. It can be used to fit fast function approximators to existing optimization problems or to provide good starting points for optimization. Dynamic scheduling and safe reinforcement learning can also be used to balance the electric grid in real time to accommodate variable generation or demand. ML or other simpler techniques can enable flexible demand by making storage and smart devices respond automatically to electricity prices. To provide appropriate signals for flexible demand, system operators can design electricity prices based on, for example, forecasts of variable electricity or grid emissions.

Accelerated science for materials: Many scientists are working to develop new materials that can store or harness energy from variable natural resources more efficiently. For instance, solar fuels are synthetic fuels produced from sunlight or solar heat; they can capture solar energy when the sun is up and store it for later use. However, developing new materials can be very slow and imprecise: human experts sometimes do not understand the physics behind these materials and have to manually apply heuristics to understand a proposed material’s physical properties. ML techniques can help automate this process by combining “heuristics with experimental data, physics, and reasoning to apply and even extend existing physical knowledge.”

Controllable sources

Controllable sources are those that can be turned on and off, for instance nuclear or geothermal plants.

Nuclear power plants: Nuclear power plants are very important to meeting climate change goals, but they pose some significant challenges, including public safety, waste disposal, slow technological learning, and high costs. ML, specifically deep networks, can be used to reduce maintenance costs: it can speed up inspections by detecting cracks and anomalies from image and video data, or by preemptively detecting faults from high-dimensional sensor and simulation data.

Nuclear fusion reactors: Nuclear fusion reactors are capable of producing safe and carbon-free electricity from a virtually limitless hydrogen fuel supply, but right now they consume more energy than they produce. A great deal of scientific and engineering research is still needed before fusion reactors can become a practical energy source. ML can accelerate this research by guiding experimental design and monitoring physical processes. Because fusion reactors have a large number of tunable parameters, ML can help prioritize which parameter configurations should be explored during physical experiments.

Reducing the current electricity system's climate impacts

Reducing life-cycle fossil fuel emissions: While we work towards bringing low-carbon electricity systems to society, it is important to reduce emissions from current fossil fuel power generation. ML can be used to prevent the leakage of methane from natural gas pipelines and compressor stations. Sensor and satellite data have previously been used to proactively suggest pipeline maintenance or detect existing leaks; ML can improve and scale these solutions.

Reducing system waste: As electricity is delivered to consumers, some of it is lost as resistive heat on electricity lines. While these losses cannot be eliminated completely, they can be significantly mitigated to reduce waste and emissions. ML can help prevent avoidable losses through predictive maintenance, for example by suggesting proactive electricity grid upgrades.

To learn more about how machine learning can help reduce the impact of climate change, check out the report.

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
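As a toy illustration of the demand-forecasting use case discussed above (this is not code from the paper), a minimal Python sketch using scikit-learn might look like the following, with synthetic load data standing in for real grid measurements:

```python
# Toy sketch: short-term electricity demand forecasting from lagged load and a
# simple temperature feature, using scikit-learn. The data is synthetic; a real
# system would use historical grid load and weather forecasts.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
temperature = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
load = 500 + 50 * np.sin(2 * np.pi * hours / 24) + 5 * temperature + rng.normal(0, 10, hours.size)

# Features: load one hour ago, load 24 hours ago, forecast temperature; target: current load.
X = np.column_stack([load[23:-1], load[:-24], temperature[24:]])
y = load[24:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out hours:", round(model.score(X_test, y_test), 3))
```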


RAMBleed: A Rowhammer-based side-channel attack that reads memory bits without accessing them

Savia Lobo
17 Jun 2019
3 min read
A team of academic researchers recently unveiled a new class of Rowhammer-based attack known as RAMBleed. This newly discovered side-channel attack allows attackers to read memory data on a victim's computer without actually accessing that memory. The vulnerability, tracked as CVE-2019-0174, is called RAMBleed because the RAM "bleeds its contents, which we then recover through a side channel," the researchers explain on the RAMBleed page.

RAMBleed is used to read data from dynamic random access memory (DRAM) chips. It leverages Rowhammer, a DRAM flaw that can be exploited to cause bits in neighboring memory rows to flip their values. In their research paper, "RAMBleed: Reading Bits in Memory Without Accessing Them", the researchers show how an attacker, by observing Rowhammer-induced bit flips in their own memory, can deduce the values in nearby DRAM rows. The researchers therefore argue that RAMBleed shifts Rowhammer from being a threat to integrity alone to a threat to confidentiality as well. The paper will be presented at the 41st IEEE Symposium on Security and Privacy in May 2020.

The researchers also said that they successfully used RAMBleed to obtain a signing key from an OpenSSH server, leaking a 2048-bit RSA key using only normal user privileges. To do so, "we also developed memory massaging methods and a technique called Frame Feng Shui that allows an attacker to place the victim's secret-containing pages in chosen physical frames," the researchers note in the paper.

Source: RAMBleed.com

Any system that uses Rowhammer-susceptible DIMMs is vulnerable to RAMBleed; the researchers demonstrated the attack on memory chips "both DDR3 and DDR4 with TRR (targeted row refresh) enabled." Users can mitigate their risk by upgrading their memory to DDR4 with targeted row refresh (TRR) enabled. Intel published mitigation advice in an article, suggesting that "Intel Software Guard Extensions (Intel SGX) can be used to protect systems from RAMBleed attacks."

Oracle, in a blog post, states that machines running DDR2 and DDR1 memory chips aren't affected and that "successfully leveraging RAMBleed exploits require that the malicious attacker be able to locally execute malicious code against the targeted system." No additional security patches are expected for Oracle product distributions, the company said.

Red Hat, in an article, states that there are at least three known DRAM fault exploits: "Rowhammer," "Spoiler," and "RAMBleed." According to Red Hat, the mitigation approach depends on the hardware vendor: "There are a few commonly proposed hardware-based mitigations against Rowhammer that have potential to also mitigate RAMBleed. These are Targeted Row Refresh (TRR), increased DRAM refresh intervals (doubled DRAM refresh rate), and use of ECC memory. The extent to which these strategies may actually mitigate the problem varies and is hardware platform specific. Vendors are anticipated to provide suitable platform-specific guidance."

To know more about RAMBleed in detail, visit its official page.

Researchers discover a new Rowhammer attack, ‘ECCploit’ that bypasses Error Correcting Code protections
Researchers discover Spectre like new speculative flaw, “SPOILER” in Intel CPU’s
NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems
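As a purely conceptual toy (nothing like the real exploit, which depends on precise physical memory placement and DRAM geometry), the following Python sketch simulates the statistical idea behind the side channel: the probability that an attacker-owned sampling bit flips depends on the value of a neighbouring secret bit, so repeated observations let the attacker guess that value. The flip probabilities below are invented for illustration.

```python
# Conceptual toy model of the RAMBleed side channel: an attacker observes bit
# flips in memory they own, whose flip probability depends on the value of a
# neighbouring (secret) victim bit, and recovers the secret by majority vote.
# The probabilities are made up; this is an illustration, not an exploit.

import random

P_FLIP_IF_ONE, P_FLIP_IF_ZERO = 0.30, 0.05   # assumed data-dependent flip rates
TRIALS = 201                                 # hammering rounds observed per secret bit

def observe_flip(victim_bit: int) -> bool:
    """One hammering round: does the attacker's sampling bit flip?"""
    p = P_FLIP_IF_ONE if victim_bit else P_FLIP_IF_ZERO
    return random.random() < p

def guess_bit(victim_bit: int) -> int:
    flips = sum(observe_flip(victim_bit) for _ in range(TRIALS))
    # More flips than the midpoint between the two rates suggests the bit is 1.
    threshold = TRIALS * (P_FLIP_IF_ONE + P_FLIP_IF_ZERO) / 2
    return int(flips > threshold)

if __name__ == "__main__":
    secret = [random.randint(0, 1) for _ in range(16)]
    recovered = [guess_bit(b) for b in secret]
    print("secret:   ", secret)
    print("recovered:", recovered)
    print("accuracy: ", sum(s == r for s, r in zip(secret, recovered)) / len(secret))
```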


Facebook researchers open-source AI Habitat for embodied AI research and introduced Replica, a dataset of indoor space reconstructions

Amrata Joshi
17 Jun 2019
6 min read
Last week, the team at Facebook AI open-sourced AI Habitat, a new simulation platform for embodied AI research. AI Habitat is designed to train embodied agents, such as virtual robots, in photo-realistic 3D environments.

https://twitter.com/DhruvBatraDB/status/1100791464513040384

The blog post reads, “Our goal in sharing AI Habitat is to provide the most universal simulator to date for embodied research, with an open, modular design that’s both powerful and flexible enough to bring reproducibility and standardized benchmarks to this subfield.”

Last week, the Facebook AI team also shared Replica, a dataset of reconstructions of various indoor spaces, such as a staged apartment or a retail store. Currently, AI Habitat can run Replica’s state-of-the-art reconstructions and can also work with existing 3D assets created for embodied research, including the Gibson and Matterport3D datasets.

AI Habitat’s modular software stack follows the principles of compatibility and flexibility. The blog reads, “We incorporated direct feedback from the research community to develop this degree of flexibility, and also pushed the state of the art in training speeds, making the simulator able to render environments orders of magnitude faster than previous simulators.”

The platform has already been tested and is now available. The Facebook team recently hosted an autonomous navigation challenge that ran on the platform; the winning teams will be awarded Google Cloud credits at the Habitat Embodied Agents workshop at CVPR 2019. AI Habitat is also part of Facebook AI’s ongoing effort to create systems that rely less on the large annotated datasets used for supervised training. The blog reads, “As more researchers adopt the platform, we can collectively develop embodied AI techniques more quickly, as well as realize the larger benefits of replacing yesterday’s training data sets with active environments that better reflect the world we’re preparing machine assistants to operate in.”

The Facebook AI researchers had published a paper, Habitat: A Platform for Embodied AI Research, in April this year. The paper highlights the set of design requirements the team sought to fulfill, a few of which are listed below:

Performant rendering engine: a resource-efficient rendering engine that produces multiple channels of visual information, including RGB (red, green, blue), depth, semantic instance segmentation, and surface normals, for multiple operating agents.
Scene dataset ingestion API: an API that makes the platform agnostic to 3D scene datasets and allows users to use their own datasets.
Agent API: helps users specify parameterized embodied agents with well-defined geometry, physics, and actuation characteristics.
Sensor suite API: allows the specification of arbitrary numbers of parameterized sensors, including RGB, depth, contact, GPS, and compass sensors, attached to each agent.

AI Habitat features a stack of three layers

With AI Habitat, the team aims to retain the simulation-related benefits that past projects demonstrated, including speeding up experimentation and RL-based training, and to apply them to a widely compatible and realistic platform. AI Habitat features a stack of three modular layers, each of which can be configured or even replaced to work with different kinds of agents, evaluation protocols, training techniques, and environments.

The simulation engine, known as Habitat-Sim, forms the base of the stack and includes built-in support for existing 3D environment datasets such as Gibson and Matterport3D. Habitat-Sim abstracts the details of specific datasets and applies them across simulations. Habitat-API, the second layer of the stack, is a high-level library that defines tasks such as visual navigation and question answering; it incorporates additional data and configurations, and simplifies and standardizes the training and evaluation of embodied agents. The third and final layer of the platform is where users specify training and evaluation parameters, such as how difficulty might ramp across multiple runs and which metrics to focus on.

According to the researchers, the future of AI Habitat and embodied AI research lies in simulated environments that are indistinguishable from real life.

Replica datasets by FRL researchers

For Replica, FRL (Facebook Reality Labs) researchers created a dataset consisting of scans of 18 scenes that range in size from an office conference room to a two-floor house. The team also annotated the environments with semantic labels, such as “window” and “stairs”, including labels for individual objects such as a book or a plant. To create the dataset, FRL researchers used proprietary camera technology and a spatial AI technique based on simultaneous localization and mapping (SLAM) approaches. Replica captures the details in the raw video, reconstructing dense 3D meshes along with high-resolution, high-dynamic-range textures. The data used to generate Replica was stripped of any personal details, such as family photos, that could identify an individual. The researchers had to manually fill in the small holes that are inevitably missed during scanning, and they used a 3D paint tool to apply annotations directly onto the meshes.

The blog reads, “Running Replica’s assets on the AI Habitat platform reveals how versatile active environments are to the research community, not just for embodied AI but also for running experiments related to CV and ML.”

Habitat Challenge for the embodied platform

The researchers held the Habitat Challenge in April-May this year, a competition focused on evaluating the task of goal-directed visual navigation. The aim was to demonstrate the utility of AI Habitat’s modular approach and its emphasis on 3D photo-realism. Unlike traditional challenges, where participants upload predictions based on a given benchmark, this challenge required participants to upload their code, which was then run on new environments that their agents were not familiar with. The top-performing teams were Team Arnold (a group of researchers from CMU) and Team Mid-Level Vision (a group of researchers from Berkeley and Stanford).

The blog further reads, “Though AI Habitat and Replica are already powerful open resources, these releases are part of a larger commitment to research that’s grounded in physical environments. This is work that we’re pursuing through advanced simulations, as well as with robots that learn almost entirely through unsimulated, physical training. Traditional AI training methods have a head start on embodied techniques that’s measured in years, if not decades.”

To know more about this news, check out Facebook AI’s blog post.

Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification
Facebook researchers build a persona-based dialog dataset with 5M personas to train end-to-end dialogue systems
Facebook AI researchers investigate how AI agents can develop their own conceptual shared language


HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release that includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, and gRPC support, and it improves support for integration into modern architectures. In conjunction with this release, the HAProxy team also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that further developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-native threading and logging

HAProxy can now scale to accommodate any environment with less manual configuration: the number of worker threads matches the machine's number of available CPU cores, and the process setting is no longer required, which simplifies the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has also been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides a high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:

Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
Secure communication with built-in SSL termination
Apply rate limits for clients while optionally whitelisting IP addresses
Select from among any of HAProxy's load-balancing algorithms
Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
Set maximum connection limits to backend servers to prevent overloading services

Layer 7 retries

With HAProxy 2.0, failed HTTP requests can be retried from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section, and the number of attempts can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry, which allows the user to disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried.

Polyglot extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, with the aim of creating the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms: C, .NET Core, Golang, Lua, and Python.

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework gRPC, allowing bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract raw Protocol Buffer messages. Using Protocol Buffers, gRPC lets users serialize messages into a binary format that's compact and potentially more efficient than JSON. To start using gRPC in HAProxy, users need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)

The native HTTP representation (HTX) was introduced with HAProxy 1.9 and is enabled by default starting from 2.0. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS support for 1.9 features

HAProxy 2.0 brings LTS support for many features that were introduced or improved during the 1.9 release, including:

Small Object Cache, with the caching size increased up to 2GB, set with the max-object-size directive; the total-max-size setting determines the total size of the cache and can be increased up to 4095MB
New fetches like date_us and cpu_calls, which report either internal state or information from layers 4, 5, 6, and 7
New converters like strcmp and concat that allow data to be transformed within HAProxy
Server Queue Priority Control, which lets users prioritize some queued connections over others, for example to deliver JavaScript or CSS files before images
Support in the resolvers section for using resolv.conf by specifying parse-resolv-conf

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. HAProxy's inaugural community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, “HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual.”

Some users, meanwhile, compare HAProxy with the nginx web server. One user says, “In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer.” Another user states, “Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now.”

Others suggest that with this release HAProxy is working hard to stay current with the latest features.

https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees, saying, “These days I think HAProxy and nginx have grown a lot closer together on capabilities.”

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna is a data processing and visualization environment that provides a library of highly tailored, domain-specific components as well as a framework for building new ones. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling

Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna simply have nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualization

Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how each change affects every step of the computation in real time.

Debugging

Luna can assist in analyzing network service outages and data corruption. When errors occur, Luna tracks and displays their path through the graph so that users can easily follow and understand where they come from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine

Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components and browse their documentation. Because Explorer is context-aware, it understands the flow of data and can predict users' intentions, adjusting the search results accordingly.

Dual syntax representation

Luna is the world's first programming language that features two equivalent syntax representations, visual and textual.

Automatic parallelism

Luna features automatic parallelism built on the state-of-the-art Haskell GHC runtime system, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna. One user commented on Hacker News, “Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so.” Others like that Luna's text syntax supports building functional blocks. Another user commented, “I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them.”

To know more about this, check out the official Luna website.

Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study


Amazon is being sued for recording children’s voices through Alexa without consent

Sugandha Lahoti
17 Jun 2019
5 min read
Last week, two lawsuits were filed in Seattle alleging that Amazon records voiceprints of children using its Alexa devices without their consent, in violation of laws governing recordings in at least eight states, including Washington. One complaint was filed on Tuesday in federal court in Seattle on behalf of a 10-year-old Massachusetts girl; a nearly identical suit was filed the same day in California Superior Court in Los Angeles on behalf of an 8-year-old boy.

What was the complaint?

Per the complaint, “Alexa routinely records and voiceprints millions of children without their consent or the consent of their parents.” The complaint notes that Alexa devices record and transmit any speech captured after a “wake word” activates the device, regardless of who is speaking and whether that person purchased the device or installed the associated app. It alleges that Amazon saves a permanent recording of the user's voice instead of deleting the recordings after storing them for a short time, or not storing them at all.

In both cases, the children had interacted with Echo Dot speakers in their homes, and in both cases the parents claimed they had never agreed for their child's voice to be recorded. The lawsuit alleges that Amazon's failure to obtain consent violates the laws of Florida, Illinois, Michigan, Maryland, Massachusetts, New Hampshire, Pennsylvania, and Washington, which require the consent of all parties to a recording, regardless of age.

Aside from “the unique privacy interest” involved in recording someone's voice, the lawsuit says, “It takes no great leap of imagination to be concerned that Amazon is developing voiceprints for millions of children that could allow the company (and potentially governments) to track a child's use of Alexa-enabled devices in multiple locations and match those uses with a vast level of detail about the child's life, ranging from private questions they have asked Alexa to the products they have used in their home.”

What does the lawsuit suggest Amazon should do?

The plaintiffs suggest that more could be done to ensure children and others are aware they are being recorded. The lawsuit claims that Amazon should inform users who have not previously consented that they are being recorded and ask for consent, and it should deactivate permanent recording for users who have not consented. The complaints also suggest that Alexa devices should be designed to send only a digital query rather than a voice recording to Amazon's servers, or alternatively that Amazon could automatically overwrite the recordings shortly after they have been processed.

What is Amazon's response?

When Vox reporters asked Amazon for a comment, the company wrote in an email, “Amazon has a longstanding commitment to preserving the trust of our customers, and we have strict measures and protocols in place to protect their security and privacy.” It also pointed to a company blog post about the FreeTime parental controls on Alexa. Per the FreeTime parental control policy, parents can review and delete their children's voice recordings at any time via an app or the firm's website. In addition, they can contact the firm and request the deletion of their child's voice profile and any personal information associated with it. However, these same requirements do not apply to a child's use of Alexa outside of the FreeTime service and children's Alexa skills.

Amazon's Alexa terms of use note that “if you do not accept the terms of this agreement, then you may not use Alexa.” However, according to Andrew Schapiro, an attorney with Quinn Emanuel Urquhart & Sullivan, one of the two law firms representing the plaintiffs, “There is nothing in that agreement that would suggest that 'you' means a marital community, family or household. I doubt you could even design terms of service that bind 'everyone in your household.'”

This could also mean that Alexa is storing details of everyone, not just children. A comment on Hacker News reads, “Important to note that if this allegation is true, it means Alexa is recording everyone and storing it indefinitely, not just children. The lawsuit just says children because children have more privacy protections than adults so it's easier to win a case when children's rights are being violated.” Others share similar opinions:

https://twitter.com/_FamilyInsights/status/1140490515240165377
https://twitter.com/lewiskamb/status/1138895472351883265

However, a few don't agree:

https://twitter.com/shellypalmer/status/1139545654567559169
https://twitter.com/CarolannJacobs/status/1139165270524780554

The suits ask a judge to certify the class action, rule that Amazon violated state laws, require it to delete all recordings of class members, and prevent further recording without prior consent. They seek damages to be determined at trial: the Seattle case seeks up to $100 a day and the California case seeks $5,000 per violation.

Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct an independent review of its human and civil rights impact
Google announces early access of ‘Game Builder’, a platform for building 3D games with zero coding

Bhagyashree R
17 Jun 2019
3 min read
Last week, a team within Area 120, Google’s workshop for experimental products, introduced an experimental prototype of Game Builder. It is a “game building sandbox” that enables you to build and play 3D games in just a few minutes. It is currently in early access and is available on Steam.

https://twitter.com/artofsully/status/1139230946492682240

Here’s how Game Builder makes “building a game feel like playing a game”:

Source: Google

Following are some of the features that Game Builder comes with:

Everything is multiplayer

Game Builder’s always-on multiplayer feature allows multiple users to build and play games simultaneously. Your friends can also play the game while you are still working on it.

Thousands of 3D models from Google Poly

You can find thousands of free 3D models (such as a rocket ship, a synthesizer, or an ice cream cone) on Google Poly to use in your games. You can also “remix” most of the models using the Tilt Brush and Google Blocks application integration to make them fit your game. Once you find the right 3D model, you can easily and instantly use it in your game.

No code, no compilation required

This platform is designed for all skill levels, from helping players build their first game to giving experienced game developers a faster way to realize their ideas. Game Builder’s card-based visual programming lets you bring your game to life with minimal programming knowledge: you just drag and drop cards to answer questions like “How do I move?” You can also create your own cards with Game Builder’s extensive JavaScript API, which allows you to script almost everything in the game (a purely illustrative sketch of what such a script might look like appears at the end of this article). Because the code is live, you just save your changes and you are ready to play the game without any compilation.

Apart from these features, you can also create levels with terrain blocks, edit the physics of objects, create lighting and particle effects, and more. Once the game is ready, you can share your creations on the Steam Workshop.

Many people are commending this easy approach to game building, though some note that the idea itself is not new; we have seen such platforms in the past, for instance, GameMaker by YoYo Games. “I just had a play with it. It seems very well thought out. It has a very nice tutorial that introduces all the basic concepts. I am looking forward to trying out the multiplayer aspect, as that seems to be the most compelling thing about it,” a Hacker News user commented.

You can read Google’s official announcement for more details.

Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football
Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google’s language
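As mentioned above, Game Builder lets you go beyond the drag-and-drop cards by scripting custom behaviour in JavaScript. Since the announcement does not describe the actual API, the TypeScript sketch below is purely illustrative: the per-frame onTick hook, the speed value, and the stubbed positions are assumptions made for this example, not names taken from Game Builder's documentation.

```typescript
// Illustrative sketch only: the "engine" pieces below are local stand-ins,
// not Game Builder's real JavaScript API.

type Vec2 = { x: number; y: number };

// Stub state so the sketch runs on its own; in a real card script these values
// would come from the game engine rather than local variables.
let actorPosition: Vec2 = { x: 0, y: 0 };
const playerPosition: Vec2 = { x: 10, y: 5 };

// A tunable value the card might expose in the visual editor (hypothetical).
const speed = 2; // units per second

// Hypothetical per-frame hook: move the actor a small step toward the player.
function onTick(deltaSeconds: number): void {
  const dx = playerPosition.x - actorPosition.x;
  const dy = playerPosition.y - actorPosition.y;
  const distance = Math.hypot(dx, dy);
  if (distance < 1e-6) return; // already at the target
  const step = Math.min(speed * deltaSeconds, distance);
  actorPosition = {
    x: actorPosition.x + (dx / distance) * step,
    y: actorPosition.y + (dy / distance) * step,
  };
}

// Simulate a few frames at 60 fps to show the movement logic in action.
for (let frame = 0; frame < 3; frame++) {
  onTick(1 / 60);
  console.log(`frame ${frame}:`, actorPosition);
}
```

The point of the sketch is the shape of the logic, a small per-frame hook plus a couple of tunable values, which is roughly the kind of behaviour the drag-and-drop cards package up for non-programmers.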
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation

Fatema Patrawala
17 Jun 2019
6 min read
Last Friday the European Commission published a report detailing the progress made by Facebook, Google and Twitter in March 2019 in fighting disinformation. The three online platforms are signatories to the Code of Practice against disinformation and have committed to reporting monthly on their actions ahead of the European Parliament elections in May 2019.

https://twitter.com/jb_bax/status/1139467517007749121
https://twitter.com/jb_bax/status/1139475796425400320

The monthly reporting cycle builds on the Code of Practice and is part of the Action Plan against disinformation, which the European Union adopted last December to build up capabilities and strengthen cooperation between Member States and EU institutions to proactively address the threats posed by disinformation. The reporting signatories committed to the Code of Practice in October 2018 on a voluntary basis. The Code aims to reach the objectives set out in the Commission's Communication of April 2018 through a wide range of commitments:

Disrupt advertising revenue for accounts and websites misrepresenting information, and provide advertisers with adequate safety tools and information about websites purveying disinformation.
Enable public disclosure of political advertising and make efforts towards disclosing issue-based advertising.
Have a clear and publicly available policy on identity and online bots, and take measures to close fake accounts.
Offer information and tools to help people make informed decisions, and facilitate access to diverse perspectives on topics of public interest, while giving prominence to reliable sources.
Provide privacy-compliant access to data for researchers to track and better understand the spread and impact of disinformation.

The Commission is monitoring the platforms' progress towards meeting the commitments that are most relevant and urgent ahead of the election campaign, namely: scrutiny of ad placements; political and issue-based advertising; and integrity of services.

Vice-President for the Digital Single Market Andrus Ansip, Commissioner for Justice, Consumers and Gender Equality Věra Jourová, Commissioner for the Security Union Julian King, and Commissioner for the Digital Economy and Society Mariya Gabriel made a joint statement welcoming the progress made by the three companies:

"We appreciate the efforts made by Facebook, Google and Twitter to increase transparency ahead of the European elections. We welcome that the three platforms have taken further action to fulfil their commitments under the Code. All of them have started labelling political advertisements on their platforms. In particular, Facebook and Twitter have made political advertisement libraries publicly accessible, while Google's library has entered the testing phase. This provides the public with more transparency around political ads. However, further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation. At the same time, it is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections.

We are pleased to see that the collaboration under the Code of Practice has encouraged Facebook, Google and Twitter to take further action to ensure the integrity of their services and fight against malicious bots and fake accounts. In particular, we welcome Google increasing cooperation with fact-checking organisations and networks. Furthermore, all three platforms have been carrying out initiatives to promote media literacy and provide training to journalists and campaign staff. The voluntary actions taken by the platforms are a step forward to support transparent and inclusive elections and better protect our democratic processes from manipulation, but a lot still remains to be done. We look forward to the next reports from April showing further progress ahead of the European elections.”

Google reported on specific actions taken to improve scrutiny of ad placements in the EU, including a breakdown per Member State

Google gave an update on its election ads policy, which it started enforcing on 21 March 2019, and announced the launch of its EU Elections Ads Transparency Report and its searchable ad library, available in April. Google reported that it took action against more than 130,000 EU-based accounts that violated its ads policies to fight misrepresentation, and almost 27,000 that violated policies on original content. The company also provided data on the removal of a significant number of YouTube channels for violation of its policies on spam, deceptive practices and scams, and impersonation. Google did not report on progress regarding the definition of issue-based advertising.

Facebook reported on actions taken against ads that violated its policies for containing low-quality, disruptive, misleading or false content

Facebook provided information on its political ads policy, which also applies to Instagram. The company noted the launch of a new, publicly available Ad Library globally on 28 March 2019, covering Facebook and Instagram, and highlighted the expansion of access to its Ad Library application programming interface. It reported taking action on over 1.2 million accounts in the EU for violation of policies on ads and content. Facebook reported 2.2 billion fake accounts disabled globally in Q1 2019, and it took down eight coordinated inauthentic behaviour networks originating in North Macedonia, Kosovo and Russia. The report did not state whether these networks also affected users in the EU.

Twitter reported an update to its political campaigning ads policy and provided details on the public disclosure of political ads in Twitter's Ad Transparency Centre

Twitter provided figures on actions undertaken against spam and fake accounts, but did not provide further insights on these actions or how they relate to activity in the EU. Twitter reported rejecting more than 6,000 ads targeted at the EU for violation of its unacceptable business practices ads policy, as well as about 10,000 EU-targeted ads for violations of its quality ads policy. Twitter challenged almost 77 million spam or fake accounts. Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area.

What are the next steps for the EU Commission?

The report covers the measures taken by the online platforms in March 2019. This will allow the Commission to verify that effective policies to ensure the integrity of electoral processes are in place before the European elections in May 2019. The Commission will carry out a comprehensive assessment of the Code's initial 12-month period by the end of 2019. If the results prove to be unsatisfactory, the Commission may propose further actions, which may be of a regulatory nature.
Google and Facebook allegedly pressured and “arm-wrestled” EU expert group to soften European guidelines for fake news: Open Democracy Report
Ireland’s Data Protection Commission initiates an inquiry into Google’s online Ad Exchange services
Google and Binomial come together to open-source Basis Universal Texture Format
Facebook signs on more than a dozen backers for its GlobalCoin cryptocurrency including Visa, Mastercard, PayPal and Uber

Bhagyashree R
14 Jun 2019
4 min read
Facebook has secured the backing of some really big companies, including Visa, Mastercard, PayPal, and Uber, for its cryptocurrency project codenamed Libra (also known as GlobalCoin), as per a WSJ report shared yesterday. Each of these companies will invest $10 million as part of a governing consortium for the cryptocurrency that is independent of Facebook. According to WSJ, as part of the governing body, these companies will be able to monitor Facebook’s payment ambitions. They will also benefit from the popularity of the currency if it takes off with Facebook’s 2.4 billion monthly active users.

Facebook’s GlobalCoin

Despite Facebook being extremely discreet about its cryptocurrency project, many rumors have been floating around about it. The only official statement came from Laura McCracken, Facebook’s Head of Financial Services & Payment Partnerships for Northern Europe. In an interview with the German finance magazine Wirtschaftswoche, she disclosed that the project’s white paper will be unveiled on June 18th. Other media reports suggest that Facebook is targeting 2020 for launching its cryptocurrency.

GlobalCoin is going to be a “stablecoin”, which means it will have less price volatility compared to other cryptocurrencies such as Ethereum and Bitcoin. To provide price stability, it will be pegged to a basket of international government-issued currencies, including the U.S. dollar, euro, and Japanese yen. Facebook has spoken with various financial institutions about creating a $1 billion basket of multiple international fiat currencies that will serve as collateral to stabilize the price of the coin (a simple, purely illustrative sketch of how a basket peg works appears at the end of this article). “The value of Facebook Coin will be secured with a basket of fiat currencies,” McCracken told the publication.

After its launch, you will be able to use GlobalCoin to make payments via Facebook’s messaging products like Messenger and WhatsApp with zero processing fees. Facebook is in talks with merchants to accept its cryptocurrency as payment and may offer sign-up bonuses. The tech giant is also reportedly looking into developing ATM-like physical terminals for people to convert their money into its cryptocurrency.

What do people think about the GlobalCoin cryptocurrency?

Despite a few benefits like decentralized governance, less volatility, and no interchange fees, many users are skeptical about this cryptocurrency, given Facebook’s reputation. Here’s what a user said in a Reddit thread, “Facebook tried and failed with credits a while back. A Facebook coin would have some use cases (sending money across borders for example), but in a day-to-day sense, having to buy credits to use isn't addressing a problem for many people. There's also going to be some people who have no desire to share what they're purchasing/doing with Facebook, especially if there isn't any significant benefit in doing so.”

Here’s what some Twitter users think about Facebook’s GlobalCoin:
https://twitter.com/SarahJamieLewis/status/1139429913922957312
https://twitter.com/Timccopeland/status/1137311565273862144

However, some users are also supportive of this move. “It is a good idea though because Facebook is now going to start competing with Amazon in e-commerce. Companies aren't going to just buy ads on Facebook, now they're going to directly list their items for sale on the site and consumers can be able to buy those items without ever leaving Facebook. Genius idea. They might give Amazon a run for their money,” another user commented on Reddit.

Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan to the UK Parliamentary Committee
Austrian Supreme Court rejects Facebook’s bid to stop a GDPR-violation lawsuit against it by privacy activist, Max Schrems
Zuckerberg just became the target of the world’s first high profile white hat deepfake op. Can Facebook come out unscathed?
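To make the “basket peg” idea above more concrete, here is a small, purely illustrative TypeScript sketch that computes a coin's reference value from a weighted basket of fiat currencies. The currencies, weights, and exchange rates are made-up example numbers; Facebook has not published how GlobalCoin's basket would actually be composed or managed.

```typescript
// Illustrative sketch of a currency-basket peg; all weights and rates are invented examples.

// Exchange rates expressed as US dollars per one unit of each currency (example values).
const usdPerUnit: Record<string, number> = {
  USD: 1.0,
  EUR: 1.12,   // 1 euro ≈ 1.12 USD (example)
  JPY: 0.0092, // 1 yen ≈ 0.0092 USD (example)
};

// How many units of each currency back a single coin (example basket).
const basketUnitsPerCoin: Record<string, number> = {
  USD: 0.5,
  EUR: 0.3,
  JPY: 25.0,
};

// The coin's reference value in USD is the sum of each slice's dollar value.
function coinValueInUsd(): number {
  return Object.entries(basketUnitsPerCoin).reduce(
    (total, [currency, units]) => total + units * usdPerUnit[currency],
    0
  );
}

console.log(`1 coin ≈ $${coinValueInUsd().toFixed(4)}`);
// Because the basket mixes several currencies, a swing in any single one of them
// moves the coin's value less than if the coin were pegged to that currency alone,
// which is the stability argument behind a basket peg.
```

With the example numbers above, one coin works out to roughly $1.07; the design choice being illustrated is simply that diversifying the backing across several fiat currencies dampens the effect of any one currency's fluctuations on the coin's value.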