
Tech News

3709 Articles
The EU Bounty Program enabled in the VLC 3.0.7 release; this version fixed the most security issues yet

Vincy Davis
11 Jun 2019
2 min read
Last week, the president of the VideoLAN non-profit organization, Jean-Baptiste Kempf, released VLC 3.0.7, a minor update of the VLC 3.0.x branch. Kempf termed this release 'special', as it fixes more security issues than any other version of VLC. Kempf said, "This high number of security issues is due to the sponsoring of a bug bounty program funded by the European Commission, during the FOSSA program."

Last year, the European Commission announced that it would support bug hunting for 14 open source projects it uses. As VLC Media Player is one of those products, it was sponsored by EU-FOSSA. In a statement to Bleeping Computer, Kempf stated that they previously had "no money" to run a bug bounty. He added that the EU-FOSSA sponsorship program provided more "manpower" towards funding and fixing security bugs in VLC 3.0.7.

According to the blog post, VLC Media Player 3.0.7 fixes 33 valid security issues: 2 high-severity, 21 medium-severity and 10 low-severity. Of the two high-severity issues, one is an out-of-bounds write in the faad2 library, a dependency of VLC, and the other is a stack buffer overflow in the RIST module of VLC 4.0. The medium-severity issues are mostly out-of-bounds reads, heap overflows, NULL dereferences and use-after-free issues. The low-severity issues are mostly integer overflows, division by zero, and other out-of-bounds reads.

Kempf also mentioned in the blog post that the best hacker in their bug bounty program was ele7enxxh. Bleeping Computer reports that ele7enxxh addressed a total of 13 bugs for $13,265.02.

Users are quite happy with this release, owing to the large number of security fixes and improvements in VLC 3.0.7.

https://twitter.com/evanderburg/status/1136600143707246592
https://twitter.com/alorandi/status/1137603867120734208

VLC users can download the latest version from the VideoLan website.

VLC's updating mechanism still uses HTTP over HTTPS
dav1d 0.1.0, the AV1 decoder by VideoLAN, is here
NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems

Python 3.8 beta 1 is now ready for you to test

Bhagyashree R
11 Jun 2019
2 min read
Last week, the team behind Python announced the release of Python 3.8.0b1, the first of four planned beta release previews of Python 3.8. This release marks the beginning of the beta phase, where you can test new features and make your applications ready for the new release.

https://twitter.com/ThePSF/status/1137797764828553222

These are some of the features that you will see in the upcoming Python 3.8 version:

Assignment expressions
Assignment expressions were proposed in PEP 572, which was accepted after an extensive discussion among the Python developers. This feature introduces a new operator (:=) with which you will be able to assign variables within an expression.

Positional-only arguments
In Python, you can pass an argument to a function by position, keyword, or both. API designers may sometimes want to restrict passing arguments by position only. To implement this easily, Python 3.8 will come with a new marker (/) to indicate that the arguments to its left are positional-only. This is similar to *, which indicates that the arguments to its right are keyword-only. (A short sketch of both features follows below.)

Python Initialization Configuration
Python is highly configurable, but its configurations are scattered all around the code. This version introduces new functions and structures to the Python Initialization C API to provide Python developers a "straightforward and reliable way" to configure Python.

The Vectorcall protocol for CPython
The calling convention impacts the flexibility and performance of your code considerably. To optimize the calling of objects, this release introduces the Vectorcall protocol, a calling convention that is already being used internally for Python and built-in functions.

Runtime audit hooks
Python 3.8 will come with two new APIs, Audit Hook and Verified Open Hook, to give you insights into a running Python application. These will help both application developers and system administrators integrate Python into their existing monitoring systems.

As this is a beta release, developers should refrain from using it in production environments. The next beta release is currently planned for July 1st. To know more about Python 3.8.0b1, check out the official announcement.
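To make the first two features concrete, here is a minimal sketch of assignment expressions and positional-only arguments; the variable and function names are illustrative, not taken from the announcement.

```python
# Assignment expressions (PEP 572): bind a value inside an expression.
data = [1, 2, 3, 4]
if (n := len(data)) > 3:
    print(f"data is long ({n} elements)")

# Positional-only arguments: parameters before "/" cannot be passed by keyword.
def power(base, exp, /, mod=None):
    result = base ** exp
    return result % mod if mod is not None else result

print(power(2, 10))       # OK: positional arguments
# power(base=2, exp=10)   # TypeError: base and exp are positional-only
```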
Which Python framework is best for building RESTful APIs? Django or Flask?
PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure
Python 3.8 alpha 2 is now available for testing

US Customs and Border Protection reveals data breach that exposed thousands of traveler photos and license plate images

Savia Lobo
11 Jun 2019
3 min read
Yesterday, the U.S. Customs and Border Protection (CBP) revealed a data breach exposing photos of travelers and vehicles traveling in and out of the United States. CBP first learned of the attack on May 31 and said that none of the image data had been identified "on the Dark Web or Internet".

According to a CBP spokesperson, one of its subcontractors transferred images of travelers and license plate photos collected by the agency to its internal networks, which were then compromised by the attack. The agency declined to name the subcontractor that was compromised and said that its own systems had not been breached. "A spokesperson for the agency later said the security incident affected 'fewer than 100,000 people' through a 'few specific lanes at a single land border' over a period of a month and a half", according to TechCrunch.

https://twitter.com/AJVicens/status/1138195795793055744

"No passport or other travel document photographs were compromised and no images of airline passengers from the air entry/exit process were involved," the spokesperson said.

According to The Register's report released last month, a huge number of internal files were breached from the firm Perceptics and were being offered on the dark web as a free download. The company's license plate readers are deployed at various checkpoints along the U.S.-Mexico border.

https://twitter.com/josephfcox/status/1138196952812806144

Now, according to the Washington Post, the Microsoft Word document of CBP's public statement, sent Monday to Washington Post reporters, included the name "Perceptics" in the title: "CBP Perceptics Public Statement". "Perceptics representatives did not immediately respond to requests for comment. CBP spokeswoman Jackie Wren said she was 'unable to confirm' if Perceptics was the source of the breach," the Washington Post further added.

In a statement to The Post, Sen. Ron Wyden (D-Ore.) said, "If the government collects sensitive information about Americans, it is responsible for protecting it — and that's just as true if it contracts with a private company." "Anyone whose information was compromised should be notified by Customs, and the government needs to explain exactly how it intends to prevent this kind of breach from happening in the future", he further added.

ACLU senior legislative counsel Neema Singh Guliani said the breach "further underscores the need to put the brakes" on the government's facial recognition efforts. "The best way to avoid breaches of sensitive personal data is not to collect and retain such data in the first place," she said.

Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them
US blacklists China's telecom giant Huawei over threat to national security
Privacy Experts discuss GDPR, its impact, and its future on Beth Kindig's Tech Lightning Rounds Podcast

GrapheneOS now comes with new device support for Auditor app, Hardened malloc and a new website

Amrata Joshi
11 Jun 2019
4 min read
GrapheneOS, an open source privacy- and security-focused mobile OS, comes with Android app compatibility. GrapheneOS releases are supported by the Auditor app as well as an attestation service for hardware-based attestation. The GrapheneOS research and engineering project has been in progress for over 5 years; in March, the AndroidHardening project was renamed to GrapheneOS.

Two days ago, GrapheneOS released a new website, grapheneos.org, with additional documentation, tutorials and coverage of topics related to software, firmware and hardware, as well as privacy/security features expected in the future. The team also released a new version, PQ3A.190605.003.2019.06.03.18, with new device support, Auditor app updates and Hardened malloc, among other fixes.

Changes in the GrapheneOS project

Auditor: update to version 12
The Auditor app adds support for verifying CalyxOS on the Pixel 2, Pixel 2 XL, Pixel 3 and Pixel 3 XL, and verified boot hash display has been added. Auditor uses hardware security features on supported devices to validate the integrity of the operating system from another Android device. The Auditor app verifies that the device is running the stock operating system with the bootloader locked, and further checks that no tampering has occurred with the operating system. The list of supported devices for the Auditor app includes the BlackBerry Key2, BQ Aquaris X2 Pro, Google Pixel, Google Pixel 2, Google Pixel 2 XL, Google Pixel 3, Google Pixel 3 XL, Google Pixel 3a, Google Pixel 3a XL, Huawei Honor 7A Pro, Huawei Honor 10, and more. Full list here.

https://twitter.com/GrapheneOS/status/1125928692671057920

Hardened malloc
Hardened malloc is a security-focused general purpose memory allocator that provides the malloc API along with various extensions. Its security-focused design leads to less metadata overhead and less memory waste from fragmentation than a traditional allocator design.

https://twitter.com/GrapheneOS/status/1113556017768325120

It also offers substantial hardening against heap corruption vulnerabilities and aims to provide decent overall performance, with a focus on long-term performance and memory usage. Hardened malloc currently supports Bionic (Android), musl and glibc, and it may also support other non-Linux operating systems in the future. Custom integration along with other hardening features has also been planned for musl. The GrapheneOS-only hardened_malloc workaround for the Pixel 3 and Pixel 3 XL camera issues has been expanded.

According to the team, GrapheneOS needs to move towards a microkernel-based model with a Linux compatibility layer and to adopt virtualization-based isolation; in the long term, the project will have to move into the hardware space.

Restoration of past features
Features restored since the 2019.05.18.20 release include:

  • Exec spawning while using debugging options has been disabled.
  • Exec spawning has been enabled by default.
  • Verizon visual voicemail support has been enabled.
  • A toggle for disabling newly added USB devices has been added to the Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3 and Pixel 3 XL.
  • Properties for controlling deny_new_usb have been added.
  • A dynamic deny_new_usb toggle mode has been implemented.
  • The deny_new_usb feature is set to dynamic by default.

Many are happy with this latest update. A user commented on Hacker News, "They're making good progress and I can't wait to be able to update my handheld device with mainline pieces for as long as anyone who still uses one cares to update it. Currently my Samsung Android device is at Dec 2018 patchlevel and nothing I can do about it."

A few others are skeptical about this news; another user commented, "There is security, and then there is freedom. You can have the most secure system in the world -- but if there are state sponsored, or company back doors it means nothing."

To know more about this news, check out the official website.

AndroidHardening Project renamed to GrapheneOS to reflect progress and expansion of the project
GitHub introduces 'Template repository' for easy boilerplate code management and distribution
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database

Google researchers present Zanzibar, a global authorization system that scales to trillions of access control lists and millions of authorization requests per second

Amrata Joshi
11 Jun 2019
6 min read
Google researchers presented a paper on Google's consistent global authorization system, known as Zanzibar. The paper focuses on the design, implementation, and deployment of Zanzibar for storing and evaluating access control lists (ACLs). Zanzibar offers a uniform data model and configuration language for expressing a wide range of access control policies from hundreds of client services at Google, including Cloud, Drive, Calendar, Maps, YouTube and Photos.

Zanzibar authorization decisions respect the causal ordering of user actions and thus provide external consistency amid changes to access control lists and object contents. The system scales to trillions of access control lists and millions of authorization requests per second to support services used by billions of people. It has maintained a 95th-percentile latency of less than 10 milliseconds and availability greater than 99.999% over 3 years of production use.

The authors who contributed to the paper are Ruoming Pang, Ramón Cáceres, Mike Burrows, Zhifeng Chen, Pratik Dave, Nathan Germer, Alexander Golynski, Kevin Graney, Nina Kang, Lea Kissner, Jeffrey L. Korn, Abhishek Parmar, Christopher D. Richards and Mengzhi Wang.

What are the goals of the Zanzibar system?
The researchers set the following goals for the Zanzibar system:

  • Correctness: The system must ensure consistency of access control decisions.
  • Flexibility: The system should support access control policies for both consumer and enterprise applications.
  • Low latency: The system should respond quickly, because authorization checks are usually in the critical path of user interactions. Low latency is especially important for serving search results, which often require tens to hundreds of checks.
  • High availability: The system should reliably respond to requests, because in the absence of explicit authorization, client services would be forced to deny their users access.
  • Large scale: The system should protect billions of objects shared by billions of users, and should be deployed around the globe to be close to its clients and end users.

To achieve these goals, Zanzibar combines several features. For flexibility, the system pairs a simple data model with a powerful configuration language that allows clients to define arbitrary relations between users and objects. It employs an array of techniques to achieve low latency and high availability, and for consistency it stores its data in normalized form. (A toy sketch of the data model follows below.)

Zanzibar replicates ACL data across multiple data centers
The Zanzibar system operates at a global scale, stores more than two trillion ACLs and performs millions of authorization checks per second. The ACL data does not lend itself to geographic partitioning, because authorization checks for an object can come from anywhere in the world. This is why Zanzibar replicates all of its ACL data in multiple geographically distributed data centers and distributes the load across thousands of servers around the world.

Zanzibar's architecture includes a main server type organized in clusters

Image source: Zanzibar: Google's Consistent, Global Authorization System

The acl servers are the main server type in this system. They are organized in clusters and respond to Check, Read, Expand, and Write requests. When a request arrives at any server in a cluster, that server fans the work out to other servers in the cluster, and those servers may in turn contact further servers to compute intermediate results. The initial server gathers the final result and returns it to the client.

The Zanzibar system stores the ACLs and their metadata in Spanner databases: one database storing relation tuples for each client namespace, one database holding all namespace configurations, and one changelog database shared across all namespaces. The acl servers read and write these databases while responding to client requests.

There is also a specialized server type that responds to Watch requests, known as watchservers. These servers tail the changelog and serve namespace changes to clients in real time.

The Zanzibar system runs a data processing pipeline to perform a variety of offline functions across all Zanzibar data in Spanner, for example, producing dumps of the relation tuples in each namespace at a known snapshot time. Zanzibar uses an indexing system known as Leopard to optimize operations on large and deeply nested sets. Leopard reads periodic snapshots of ACL data, watches for changes between snapshots, performs transformations on the data such as denormalization, and responds to requests coming from the acl servers.

The researchers conclude that the Zanzibar system offers a simple, flexible data model with configuration language support. According to them, Zanzibar's external consistency model allows authorization checks to be evaluated at distributed locations without the need for global synchronization, while offering low latency, scalability, and high availability.

People are finding this paper very interesting, and some of the numbers involved surprise them. A user commented on Hacker News, "Excellent paper. As someone who has worked with filesystems and ACLs, but never touched Spanner before." Another user commented, "What's interesting to me here is not the ACL thing, it's how in a way 'straight forward' this all seems to be." Another comment reads, "I'm surprised by all the numbers they give out: latency, regions, operation counts, even servers. The typical Google paper omits numbers on the Y axis of its most interesting graphs. Or it says 'more than a billion', which makes people think '2B', when the actual number might be closer to 10B or even higher."

https://twitter.com/kissgyorgy/status/1137370866453536769
https://twitter.com/markcartertm/status/1137644862277210113

A few others note that the project wasn't initially named Zanzibar; it was called 'Spice'.

https://twitter.com/LeaKissner/status/1136691523104280576

To know more about this system, check out the paper Zanzibar: Google's Consistent, Global Authorization System.
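To illustrate the data model, below is a toy sketch of relation tuples and a naive Check evaluation. The object#relation@user tuple shape follows the paper's data model; the in-memory store and the recursive membership walk are simplifications for illustration, not Google's implementation.

```python
from collections import defaultdict

class TupleStore:
    """Toy store of Zanzibar-style relation tuples: (object, relation) -> members."""
    def __init__(self):
        self.members = defaultdict(set)

    def write(self, obj, relation, member):
        # member is either a user id string or a (object, relation) userset.
        self.members[(obj, relation)].add(member)

    def check(self, obj, relation, user):
        """Naive Check: is `user` in the userset identified by obj#relation?"""
        for m in self.members[(obj, relation)]:
            if m == user:
                return True
            if isinstance(m, tuple) and self.check(m[0], m[1], user):
                return True  # indirect membership via a referenced userset
        return False

store = TupleStore()
store.write("doc:readme", "owner", "user:alice")
store.write("doc:readme", "viewer", ("group:eng", "member"))  # userset reference
store.write("group:eng", "member", "user:bob")

print(store.check("doc:readme", "viewer", "user:bob"))  # True, via group:eng#member
print(store.check("doc:readme", "owner", "user:bob"))   # False
```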
Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Amrata Joshi
11 Jun 2019
3 min read
Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks to improve machine learning research reproducibility.

Reproducibility plays an important role in research, as it is an essential requirement for many fields related to research, including those based on machine learning techniques. However, many machine learning research publications are either not reproducible or are difficult to reproduce. With the growing number of research publications, including tens of thousands of papers hosted on arXiv and submissions to conferences, research reproducibility has become even more important. Though many publications are accompanied by useful code and trained models, users are still left to figure out many of the steps themselves.

PyTorch Hub consists of a pre-trained model repository designed to facilitate research reproducibility and enable new research. It provides built-in support for Colab, integration with Papers With Code, and a set of models covering classification, segmentation, transformers, generative models, and more. By adding a simple hubconf.py file, it supports the publication of pre-trained models to a GitHub repository; this file provides a list of models to be supported and a list of dependencies required for running the models. For examples, check out the torchvision, huggingface-bert and gan-model-zoo repositories.

Consider the case of torchvision's hubconf.py: in the torchvision repository, each of the model files can function and be executed independently. These model files don't require any package except for PyTorch, and they don't need separate entry-points. A hubconf.py helps users send a pull request based on the template mentioned on the GitHub page.

The official blog post reads, "Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore."

PyTorch Hub allows users to explore available models, load a model, and understand the kinds of methods available for any given model. Here are a few examples (see the sketch below):

  • Explore available entrypoints: With the help of the torch.hub.list() API, users can now list all available entrypoints in a repo. PyTorch Hub also allows auxiliary entrypoints apart from pretrained models, such as bertTokenizer for preprocessing in the BERT models, making the user workflow smoother.
  • Load a model: With the help of the torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.

Most users are happy about this news, as they think it will be useful for them. A user commented on Hacker News, "I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary." Another user commented, "This will also make things easier for people writing algorithms on top of one of the base models."

To know more about this news, check out PyTorch's blog post.
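As a rough sketch of this workflow, using the torchvision repo the post mentions (the entrypoint name comes from torchvision's hubconf and may change):

```python
import torch

# List the entrypoints a repo publishes through its hubconf.py.
print(torch.hub.list('pytorch/vision'))

# Load a pretrained model entrypoint; extra kwargs are passed to the entrypoint.
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()

# Run a dummy image batch through the loaded model.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```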
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet

Mozilla to bring a premium subscription service to Firefox with features like VPN and cloud storage

Bhagyashree R
11 Jun 2019
3 min read
Last week, Mozilla, in an interview with the German media outlet T3N, revealed its plan to launch paid subscription services in Firefox by October. By subscribing to this service, users will be able to access "premium" features like a VPN and secure cloud storage.

In the interview, Chris Beard, Mozilla's CEO, did not go into much detail about the cost or the new premium services and features that we may see in Firefox. However, he did mention two services: VPN and cloud storage. He said, "You can imagine we'll offer a solution that gives us all a certain amount of free VPN Bandwidth and then offer a premium level over a monthly subscription." He further clarified that no costs will be charged for currently free services.

Mozilla started testing the waters last year by introducing a few paid subscription services. In October, it partnered with ProtonVPN to introduce a paid VPN service, offered to a randomly-selected small group of US users at $10 per month. In February this year, it partnered with Scroll, a news subscription service that allows you to read your favorite news websites by paying a monthly fee. Now, the company is expanding its catalog to offer more subscription services in Firefox. "We want to add more subscription services to our mix and focus more on the relationship with the user to become more resilient in business issues," said Chris Beard.

Explaining the vision behind this paid offering, Dave Camp, senior vice president of Firefox, said in a statement, "A high-performing, free and private-by-default Firefox browser will continue to be central to our core service offerings. We also recognize that there are consumers who want access to premium offerings, and we can serve those users too without compromising the development and reach of the existing products and services that Firefox users know and love."

This news triggered a discussion on Hacker News. Going by the thread, many users are happy that Mozilla is upfront about this new business model. Several other users commented about the features and services they would want in Firefox before they are convinced enough to pay for the subscription. One of the users commented:

"Can confirm, I would pay for a version of Firefox with just four 'features':
- No Pocket anywhere in the code
- No telemetry/experiments/Normandy anywhere in the code
- No network connections to third party hosts (other than websites I'm viewing)
- No 'discovery' feed / whatever they're calling the activity stream sponsored content thing now anywhere in the code
Just let me monthly subscribe via Paypal or whatever, and give me a private build server link and tar.gz of the source."

You can read the entire interview on T3N's official website.

Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features
Mozilla's updated policies will ban extensions with obfuscated code
Mozilla puts "people's privacy first" in its browser with updates to Enhanced Tracking Protection, Firefox Lockwise and Firefox Monitor

Android 8 forces FOSS apps to use Firebase for push notifications or labels them as “using too much battery”

Vincy Davis
11 Jun 2019
6 min read
Recently, Google imposed background execution limits in Android 8.0 (API level 26) on what apps can do while running in the background. Per this new update, Android 8 forces developers to use Firebase for their push notifications, or otherwise tell the user that the app has misbehaved. Push notifications are needed by all messaging apps, such as Telegram-FOSS, riot.im, and other FOSS apps. The problem here is that the Firebase Android client library is not open source. FOSS apps now cannot keep push notification features on Android 8 while also remaining 100% open source and avoiding being stigmatized as misbehaved.

Google's official reason for this limitation is to improve the user experience. It states that when many Android apps and services run simultaneously, they place a load on the system, and additional apps or services running in the background place an additional load that can result in a poor user experience. For example, when a user is playing a game in one window while browsing the web in another window and using a third app to play music, the load on the system could cause one of the apps to shut down abruptly.

What are the Background Service limitations?
Google has mentioned that under certain circumstances, a background app is placed on a temporary whitelist for several minutes. While an app is on the whitelist, it can launch services without limitation, and its background services are permitted to run. An app is placed on the whitelist when it handles a task that's visible to the user, such as:

  • Handling a high-priority Firebase Cloud Messaging (FCM) message.
  • Receiving a broadcast, such as an SMS/MMS message.
  • Executing a PendingIntent from a notification.
  • Starting a VpnService before the VPN app promotes itself to the foreground.

Prior to Android 8.0, the usual way to create a foreground service was to create a background service, then promote that service to the foreground. From Android 8.0, the system will not allow a background app to create a background service. This means that all apps on Android are now effectively forced to use Google's proprietary service, Firebase, for push notifications. Since apps like Telegram-FOSS, riot.im, and other free and open source software apps cannot use the service, these apps are being reported to the user as "using too much battery".

The Telegram-FOSS team has notified its users
The Telegram-FOSS team has notified its users that since they can't use "Google's push messaging in a FOSS app", the app will show a notification to keep the background service running; otherwise, users will not be notified about new messages. If the app set the notification to a lower priority (such as hiding it in the lower part of the notification screen), users would immediately get a system notification about Telegram "using battery", which is confusing and is the reason this is not the default. The Telegram-FOSS team also claims that "Despite Google's misleading warnings, there is no difference in battery usage between v4.6 in 'true background' and v4.9+ with notification."

This news has received varied reactions from users. Some are extremely critical of Google. A user on Reddit says, "Google is probably regretting that they made Android open source. They will fight tooth and nail to undo that." Another user on Hacker News adds, "Google is one of the most evil companies out there for a company that started out with don't be evil. They have some very smart people, some amazing tech, but unfortunately they have some very evil people working for them hell bent on maintaining their advantage by any means necessary. Without using Google's push notifications, you are going to end up with something that works about 75% of the time. When this first started happening to me, I lost tons of time thinking it was a bug only to finally realize I needed to use Google's library to get reliability for what once worked."

Some users have pointed out that Apple has long been even more restrictive, allowing apps to use nothing but APNS, run nothing in the background, or even include GPL source code. Another user comments, "The difference is Apple has been the same from the beginning. There was no bait and switch. People who bought Apple products knew what Apple was and will be and what the terms were. With Google there is a bait and switch. They came to market defining themselves as the open alternative to Apple to get market share and developer interest, and now that they've achieved dominance the terms are changing. There's no surprise that there's going to be massive pushback (and probably antitrust implications)"

Another user suggested that it's better to opt for non-Android phones.

https://twitter.com/datenteiler/status/1137743892009406466

A few believe Google is taking this measure to counter iOS phones in the market. A user on Hacker News says, "The competition in this case is Apples iOS, for which even HackerNews users love to harp over and over and over again how amazing it is and how little battery it uses because it doesn't allow apps to use anything but APNS, run anything in background or even include GPL source code. This is what's Android competing against - a completely locked down operating system which cannot deliver any kind of GPL code. And every time it allows more freedom to developers it's punished in the market by losing against iOS and mocked on this very website about how it allows their app developers to drain battery and access data. What exactly do you expect Google to do here?"

Seeing the backlash, Google may relax its Firebase licensing or change the rules about background apps in the future. For now, though, FOSS apps will have to resort to guiding users to lower the priority of the resulting notification and the battery warning.

SENSORID attack: Calibration fingerprinting that can easily trace your iOS and Android phones, study reveals
Tor Browser 8.5, the first stable version for Android, is now available on Google Play Store!
Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems

Savia Lobo
10 Jun 2019
3 min read
Last week, the NSA published an advisory urging Microsoft Windows administrators and users to update their older Windows systems to protect against the BlueKeep vulnerability. This vulnerability was first noted by the UK National Cyber Security Centre and reported by Microsoft on 14 May 2019.

https://twitter.com/GossiTheDog/status/1128431661266415616

On May 30, Microsoft wrote a security notice to its users to update their systems, as "some older versions of Windows" could be vulnerable to cyber-attacks. On May 31, MalwareTech posted a detailed analysis of the BlueKeep vulnerability.

"Microsoft has warned that this flaw is potentially 'wormable,' meaning it could spread without user interaction across the internet. We have seen devastating computer worms inflict damage on unpatched systems with wide-ranging impact, and are seeking to motivate increased protections against this flaw," the advisory states.

BlueKeep (CVE-2019-0708) is a vulnerability in the Remote Desktop Protocol (RDP). It is present in Windows 7, Windows XP, Server 2003 and Server 2008, and although Microsoft has issued a patch, potentially millions of machines are still vulnerable. "This is the type of vulnerability that malicious cyber actors frequently exploit through the use of software code that specifically targets the vulnerability", the advisory explains.

The NSA is concerned that malicious cyber actors will use the vulnerability in ransomware and exploit kits containing other known exploits, increasing capabilities against other unpatched systems. It has also suggested additional measures that can be taken:

  • Block TCP port 3389 at your firewalls, especially any perimeter firewalls exposed to the internet. This port is used by the RDP protocol, and blocking it will block attempts to establish a connection.
  • Enable Network Level Authentication. This security improvement requires attackers to have valid credentials to perform remote code authentication.
  • Disable Remote Desktop Services if they are not required. Disabling unused and unneeded services helps reduce exposure to security vulnerabilities overall and is a best practice even without the BlueKeep threat.

Why has the NSA urged users and admins to update?
Ian Thornton-Trump, head of security at AmTrust International, told Forbes, "I suspect that they may have classified information about actor(s) who might target critical infrastructure with this exploit that critical infrastructure is largely made up of the XP, 2K3 family."

The NSA itself created the very similar EternalBlue exploit, which was recently used to hold the city of Baltimore's computer systems for ransom. The NSA developed the EternalBlue attack software for its own use but lost control of it when it was stolen by hackers in 2017. BlueKeep is so similar to EternalBlue that Microsoft compared the two in its warning to users about the vulnerability. "It only takes one vulnerable computer connected to the internet to provide a potential gateway into these corporate networks, where advanced malware could spread, infecting computers across the enterprise," Microsoft wrote in its security notice to customers. Microsoft also compared the risks to those of the WannaCry virus, which infected hundreds of thousands of computers around the world in 2017 and caused billions of dollars worth of damage.

The NSA said patching against BlueKeep is "critical not just for NSA's protection of national security systems but for all networks." To know more about this news in detail, head over to Microsoft's official notice.

Approx. 250 public network users affected during Stack Overflow's security attack
Over 19 years of ANU (Australian National University) students' and staff data breached
12,000+ unsecured MongoDB databases deleted by Unistellar attackers

TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines or TPUs with minimal code changes. The TensorFlow 2.0 beta version also brings a number of major improvements, breaking changes and multiple bug fixes. Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0. The 2.0 API is now final, with the symbol renaming/deprecation changes completed, and is available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware
tf.distribute.Strategy supports multiple user segments, including researchers and ML engineers. It provides good performance and easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines or TPUs, and can distribute their existing models and training code with minimal code changes (see the sketch below). tf.distribute.Strategy can be used with:

  • TensorFlow's high level APIs
  • tf.keras
  • tf.estimator
  • Custom training loops

TensorFlow 2.0 beta also simplifies the API for custom training loops, which is likewise based on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and make it easier to debug the model and the training loop.

Model Subclassing
Building a fully-customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers can be created in the __init__ method and set as attributes of the class instance; the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively. It also gives greater flexibility when creating models that are not easily expressible otherwise.

Breaking changes
  • tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely.
  • In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizers. A checkpoint converter tool for converting optimizers is also included with this release.

Bug fixes and other changes
This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:

  • In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
  • tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
  • tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
  • A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
  • This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
  • The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.

The TensorFlow team has set up a TF 2.0 Testing User Group for users to report any snags and give feedback.

The general reaction to the release of TensorFlow 2.0 beta is positive.

https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, "Can't wait to try that out!" However, some users have compared it to PyTorch, calling PyTorch more comprehensive than TensorFlow, a more powerful platform for research that is also good for production. A user on Hacker News comments, "Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too."

Another user says, "Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other."

The TensorFlow team hopes to resolve all the additional issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.
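As a minimal sketch of the tf.distribute.Strategy workflow with Keras, assuming the TF 2.0 beta API (the toy model and random data are for illustration only):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model across the GPUs available on one machine.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created under the scope are mirrored across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss='mse')

# The training code itself is unchanged; the strategy handles the distribution.
x, y = np.random.rand(256, 10), np.random.rand(256, 1)
model.fit(x, y, batch_size=32, epochs=1)
```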
Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
ML.NET 1.0 RC releases with support for TensorFlow models and much more!

G20 agree to bring in common digital tax rules for tech giants

Fatema Patrawala
10 Jun 2019
4 min read
According to Reuters, on Saturday the G20, the international forum for governments and central bank governors from 19 countries and the European Union, agreed to compile common tax rules for global tech giants. The proposal aims to close loopholes used by tech companies such as Facebook to reduce their corporate tax liabilities.

Facebook, Google, Amazon, and other large technology firms have faced criticism for cutting their tax bills by booking profits in low-tax countries regardless of the location of the end customer. For example, Amazon paid zero dollars in federal taxes for the second year in a row despite doubling its profit, a report from the Institute on Taxation and Economic Policy confirms. Such practices are seen as unfair, and the new rules would mean higher tax burdens for large multinational firms.

"We welcome the recent progress on addressing the tax challenges arising from digitisation and endorse the ambitious program that consists of a two-pillar approach," the draft communique said. "We will redouble our efforts for a consensus-based solution with a final report by 2020."

Britain and France are the most vocal about the proposals to tax big tech companies, and the two countries are at a disagreement with the United States, which has expressed concern that US Internet companies are being unfairly targeted in a broad push to update the global corporate tax code. "The United States has significant concerns with the two corporate taxes proposed by France and the UK," US Treasury Secretary Steven Mnuchin said on Saturday at a two-day meeting of G20 finance ministers in the Japanese city of Fukuoka. Mnuchin spoke at a panel on global taxation at the G20 after the French and British finance ministers voiced sympathy with his concerns that new tax rules should not discriminate against particular firms. While the Internet companies say they follow tax rules, they have paid little tax in Europe, typically by channelling sales via countries such as Ireland and Luxembourg, which have light-touch tax regimes.

The G20's debate on changes to the tax code focuses on two pillars. The first pillar is dividing up the rights to tax a company where its goods or services are sold, even if it does not have a physical presence in that country. If companies are still able to find a way to book profits in low-tax or offshore havens, countries could then apply a global minimum tax rate, to be agreed under the second pillar.

The path to a final agreement is under difficult negotiation because of disagreement over a common definition and how tax distribution will happen among different countries. "There are differences between the United States and the United Kingdom over pillar one. As for pillar two, there are also differences in views within the Group of 7," said a senior Japanese finance ministry official present at the G20. According to the official, the G7 is unlikely to issue any communique at a meeting of the world's leading economic powers next month.

Still, several finance ministers at the G20 said they needed to act quickly to correct unfair corporate tax codes or risk being punished by voters. "We cannot explain to a population that they should pay their taxes when certain companies do not because they shift their profits to low-tax jurisdictions," French Finance Minister Bruno Le Maire said during the panel discussion. The US government has voiced concern in the past that the European campaign for a "digital tax" unfairly targets US tech giants. After listening to presentations by Bruno Le Maire and British finance minister Philip Hammond, Mnuchin said G20 countries should issue "marching orders" to their respective finance ministries to negotiate the technical aspects of a deal.

Some in the community have remarked that the ministers could not agree on, or even acknowledge, the climate problems, yet agreed on the need for a digital tax.

https://twitter.com/Ethervoid/status/1138055309853962248

Others agree with the move, commenting that the companies should pay their fair share like everyone else.

https://twitter.com/HeczeyAndras/status/1137788562697592833
https://twitter.com/ijrussell/status/1138060033453957121

Privacy Experts discuss GDPR, its impact, and its future on Beth Kindig's Tech Lightning Rounds Podcast
8 tech companies and 18 governments sign the Christchurch Call to curb online extremism; the US backs off
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny

Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

Amrata Joshi
10 Jun 2019
4 min read
Last week, Google researchers announced the release of the Google Research Football Environment, a reinforcement learning environment where agents can learn to master football. The environment provides a physics-based 3D football simulation in which agents control either one or all of the football players on their team, learn how to pass between them, and manage to overcome their opponent's defense to score goals. The Football Environment offers a game engine, a set of research problems called the Football Benchmarks, the Football Academy, and much more. The researchers have released a beta version of the open-source code on GitHub to facilitate research.

Let's have a brief look at each of the elements in the Google Research Football Environment.

Football Engine: the core of the Football Environment
Based on a modified version of Gameplay Football, the Football Engine simulates a full football match, including fouls, goals, corner and penalty kicks, and offsides. The engine is programmed in C++, which allows it to run both with and without GPU-based rendering enabled. The engine allows learning from different state representations that contain semantic information, such as the players' locations, as well as learning from raw pixels. It can be run in both stochastic and deterministic mode to investigate the impact of randomness, and it is compatible with the OpenAI Gym API (see the sketch below).

Read Also: Create your first OpenAI Gym environment [Tutorial]

Football Benchmarks: learning from the actual field game
With the Football Benchmarks, the researchers propose a set of benchmark problems for RL research based on the Football Engine. These benchmarks center on goals such as playing a "standard" game of football against a fixed rule-based opponent. The researchers provide three versions, the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which differ only in the strength of the opponent. They also provide benchmark results for two state-of-the-art reinforcement learning algorithms, DQN and IMPALA, which can be run in multiple processes on a single machine or concurrently on many machines.

Image source: Google's blog post

These results indicate that the Football Benchmarks are research problems of varying difficulty. According to the researchers, the Football Easy Benchmark is suitable for research on single-machine algorithms, while the Football Hard Benchmark is challenging even for massively distributed RL algorithms.

Football Academy: learning from a set of difficult scenarios
The Football Academy is a diverse set of scenarios of varying difficulty that allows researchers to explore new research ideas and test high-level concepts. It also provides a foundation for investigating curriculum learning, where agents learn progressively harder scenarios. The official blog post states, "Examples of the Football Academy scenarios include settings where agents have to learn how to score against the empty goal, where they have to learn how to quickly pass between players, and where they have to learn how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them."

Users are giving mixed reactions to this news, as some find nothing new in the Google Research Football Environment. A user commented on Hacker News, "I guess I don't get it... What does this game have that SC2/Dota doesn't? As far as I can tell, the main goal for reinforcement learning is to make it so that it doesn't take 10k learning sessions to learn what a human can learn in a single session, and to make self-training without guiding scenarios feasible." Another user commented, "This doesn't seem that impressive: much more complex games run at that frame rate? FIFA games from the 90s don't look much worse and certainly achieved those frame rates on much older hardware." A few others think they can learn a lot from this environment; another comment reads, "In other words, you can perform different kinds of experiments and learn different things by studying this environment."

Here's a short YouTube video demonstrating Google Research Football.

https://youtu.be/F8DcgFDT9sc

To know more about this news, check out Google's blog post.
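Since the engine exposes an OpenAI Gym-style API, interacting with it looks roughly like the sketch below. The gfootball package name, create_environment signature, and scenario name are taken from the project's GitHub repo and should be treated as assumptions.

```python
import gfootball.env as football_env

# Create a Football Academy scenario with a compact state representation.
env = football_env.create_environment(
    env_name='academy_empty_goal_close',
    representation='simple115',
    render=False,
)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random placeholder policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print('episode reward:', total_reward)
```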
Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

Google Walkout organizer Claire Stapleton resigns after facing retaliation from management

Fatema Patrawala
10 Jun 2019
6 min read
Last week one of the Google Walkout organizers, Claire Stapleton resigned from the company after facing continuous retaliation from the management. Last year in November, a global Google Walkout for Real Change was organized by Claire Stapleton, Meredith Whittaker and six other employees at the company. It prompted 20,000 Google employees and contractors in 50 cities to walk off the job on November 1, 2018, to oppose the company’s handling of sexual harassment allegations. Employees had put together a list of six demands for executives to address what they considered as rampant sexism and racism at the company. Google CEO, Sundar Pichai did agree to make one of the demanded policy changes, to make the workplace conditions stable. Google agreed to get rid of forced arbitration for their employees. On Friday, Google Walkout for Real Change group published a letter on Medium which Stapleton had shared internally to her coworkers. The letter explained her reasons for quitting Google. https://twitter.com/clairewaves/status/1137002800053985280 “These past few months have been unbearably stressful and confusing,” the post said. “But they’ve been eye-opening, too: the more I spoke up about what I was experiencing, the more I heard, and the more I understood how universal these issues are. That’s why I find it so depressing that leadership has chosen to just bluntly refute my story. They have a different version of what happened; that’s how this works.” When the news broke of payouts to executives accused of sexual harassment, Stapleton was inspired to call for the walkout. And since then, Stapleton and other Google employees say supervisors have retaliated against them for speaking out. In the month of April this year they shared stories of retaliation they had been facing. https://twitter.com/GoogleWalkout/status/1136997345416101893 According to The Guardian, Stapleton was a marketing manager who spent 12 years at Google and YouTube. She wrote in an email to coworkers announcing her departure “I made the choice after the heads of my department branded me with a kind of scarlet letter that makes it difficult to do my job or find another one. If I stayed, I didn’t just worry that there’d be more public flogging, shunning, and stress, I expected it.” “The message that was sent [to others] was: ‘You’re going to compromise your career if you make the same choices that Claire made,” she told the Guardian by phone. “It was designed to have a chilling effect on employees who raise issues or speak out.” Stapleton said she was demoted and asked to take medical leave, even though she wasn’t sick. Meredith Whittaker, said she was reassigned and told to stop her well-known research on AI ethics in her capacity as co-founder of AI Now Institute. Both women detailed their experiences in an email to coworkers in April, which was then shared with journalists at Wired. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” Stapleton wrote. “Only after I hired a lawyer and had her contact Google did management conduct an investigation and walked back my demotion, at least on paper. While my work has been restored, the environment remains hostile and I consider quitting nearly every day.” Stapleton believes the treatment and the alleged attempt to push her out of the company were designed to dissuade other employees from taking similar actions. 
In the Medium post, Stapleton says she was inspired and motivated by Google’s culture during her initial years, but that after 2017 she saw a change in the culture and leadership. “Google’s always had controversies and internal debates, but the ‘hard things’ had intensified, and the way leadership was addressing them suddenly felt different, cagier, less satisfying,” she writes.

Google has denied the retaliation allegations, saying that any changes to positions were not retaliatory. Stapleton says in her post that the response from management to her story has been “depressing.”

“We thank Claire for her work at Google and wish her all the best,” a Google spokesperson said in a statement. “To reiterate, we don’t tolerate retaliation. Our employee relations team did a thorough investigation of her claims and found no evidence of retaliation. They found that Claire’s management team supported her contributions to our workplace, including awarding her their team Culture Award for her role in the Walkout.”

Meredith Whittaker said in a tweet that “Google’s trying to stop a movement. But that’s not how it works — badge or no, Claire isn’t going away, nor are the 1000s organizing across the company.”

https://twitter.com/mer__edith/status/1137006840313548801

Stapleton said that despite her decision to leave the company, she was optimistic about the future of worker organizing at Google. “I’ve paid a huge personal cost in a way that is not easy to ask anyone else to do,” she said. “There’s a lot of exhaustion and there’s a lot of fear, but I think that speaking up in whatever way people are comfortable with is having an absolutely tremendous impact.”

“It’s not going away,” she said.

Stapleton’s departure comes amid considerable turmoil for Google and YouTube, which are facing increased antitrust scrutiny from the US government. Google also faces growing criticism over inconsistent and controversial content moderation decisions, and growing activism from employees over issues including the company’s treatment of temps, vendors, and contractors (TVCs) and its choice to work on controversial projects such as Project Maven and Project Dragonfly.

Ex-Googler Vida Vakil, who was at the company when the scandal broke last year, weighed in, saying that the head of HR, Eileen Naughton, had defended her handling of the sexual harassment case and the multimillion-dollar payout to the offender at a TGIF meeting by rationalizing that such things happen because it is human nature.

https://twitter.com/VidaVakil/status/1137046293778317313

Liz Fong-Jones, an employee advocate who quit Google earlier this year on ethical grounds, also tweeted that Google is systematically driving out the people who care about the company, which is sad for it.

https://twitter.com/lizthegrey/status/1137009160971796481

US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
GitHub introduces ‘Template repository’ for easy boilerplate code management and distribution

Bhagyashree R
10 Jun 2019
2 min read
Yesterday, GitHub introduced the ‘Template repository’ feature, with which you can easily share boilerplate code and directory structures across projects. It is similar in spirit to tools like ‘Boilr’ and ‘Cookiecutter’.

https://twitter.com/github/status/1136671651540738048

How to create a GitHub template repository?

As per its name, ‘Template repository’ enables developers to mark a repository as a template, which they can later use to create new repositories containing all of the template repository’s files and folders. With admin permissions, you can create a new template repository or mark an existing one as a template: just navigate to the repository’s Settings page and tick the ‘Template repository’ checkbox. Once the template repository is created, anyone who has access to it will be able to generate a new repository with the same directory structure and files via the ‘Use this template’ button.

Source: GitHub

All the templates that you own, have access to, or have used in a previous project will also be available to you when creating a new repository, through the ‘Choose a template’ drop-down. Every template repository also gets a new ‘/generate’ URL endpoint that lets you distribute your template more efficiently: you just need to link your template’s users directly to this endpoint. The same generation step can also be scripted, as shown in the sketch at the end of this article.

Source: GitHub

Templating is similar to cloning a repository, except that it does not retain the repository’s history; it gives users a clean new project with a single initial commit. Though the feature is still pretty basic, and GitHub will likely add more functionality in the future, it will be useful for junior developers and beginners to help them get started.

Here’s what a Hacker News user believes we can do with this feature:

“This is a part of something which could become a very powerful pattern: community-wide templates which include many best practices in a single commit:
- Pre-commit hooks for linting/formatting and unit tests.
- Basic CI pipeline configuration with at least build, test and release/deploy phases.
- Package installation configuration for the frameworks you want.
- Container/VM configuration for the languages you want to enable cross-platform and future-proof development.
- Documentation to get started with it all.”

Read the official announcement by GitHub for more details.

Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
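For those who want to automate repository generation rather than click ‘Use this template’, here is a minimal sketch using GitHub’s REST API. It assumes the preview endpoint GitHub documented for creating a repository from a template (`POST /repos/{template_owner}/{template_repo}/generate`); the token, owner, and repository names below are placeholders for illustration, not values from the announcement.

```python
# Minimal sketch: create a new repository from a template repository via
# GitHub's REST API. Assumes the documented preview endpoint
# POST /repos/{template_owner}/{template_repo}/generate, which at the time
# of writing required the 'baptiste-preview' media type.
import os

import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # a personal access token with 'repo' scope


def generate_from_template(template_owner, template_repo, new_owner, new_name, private=True):
    """Create `new_owner/new_name` from the template `template_owner/template_repo`."""
    url = f"https://api.github.com/repos/{template_owner}/{template_repo}/generate"
    headers = {
        "Authorization": f"token {GITHUB_TOKEN}",
        # Template generation was a preview API when the feature launched:
        "Accept": "application/vnd.github.baptiste-preview+json",
    }
    payload = {
        "owner": new_owner,  # user or org that will own the new repository
        "name": new_name,    # name of the repository to create
        "description": "Generated from a template repository",
        "private": private,
    }
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Hypothetical names for illustration only.
    repo = generate_from_template("octocat", "my-template", "octocat", "my-new-project")
    print("Created:", repo["full_name"])
```

Note that, just like the ‘Use this template’ button, a repository created this way starts from a single initial commit rather than carrying over the template’s history.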
Microsoft quietly deleted 10 million faces from MS Celeb, the world’s largest facial recognition database

Fatema Patrawala
07 Jun 2019
4 min read
Yesterday, the Financial Times reported that Microsoft has quietly deleted its facial recognition database: more than 10 million images that were reportedly being used by companies to test their facial recognition software. The database, known as MS Celeb, was the largest public facial recognition dataset in the world. The data was amassed by scraping images off the web under a Creative Commons license that allows academic reuse of photos. According to Microsoft Research’s paper on the database, it was originally designed to train tools for image captioning and news video analysis.

The existence of this database was revealed by Adam Harvey, a Berlin-based artist and researcher. Harvey’s team investigates the ethics, origins, and individual privacy implications of face recognition image datasets, and their role in the expansion of biometric surveillance technologies.

The Financial Times ran an in-depth investigation which revealed that tech giants like IBM and Panasonic, Chinese firms such as SenseTime and Megvii, and military researchers were using the massive database to test their facial recognition software. And now Microsoft has quietly taken MS Celeb down. “The site was intended for academic purposes,” Microsoft told FT.com, explaining that it had been deleted because “it was run by an employee that is no longer with Microsoft and has since been removed.”

Microsoft itself has used the dataset to train facial recognition algorithms, Mr Harvey’s investigation found. The company named the dataset “Celeb” to indicate that the faces it had scraped were photos of public figures. But Mr Harvey found that the dataset also included several arguably private individuals, including security journalists such as Kim Zetter and Adrian Chen, Shoshana Zuboff, the author of Surveillance Capitalism, and Julie Brill, the former FTC commissioner responsible for protecting consumer privacy.

“Microsoft has exploited the term ‘celebrity’ to include people who merely work online and have a digital identity,” said Mr Harvey. “Many people in the target list are even vocal critics of the very technology Microsoft is using their name and biometric information to build.”

Tech experts have also speculated that Microsoft may have deleted the data because continuing to distribute the MS Celeb dataset after the EU’s General Data Protection Regulation came into effect last year would have violated the law. But Microsoft said it was not aware of any GDPR implications and that the site had been retired “because the research challenge is over”. Engadget also reported that, following the FT’s investigation, datasets built by researchers at Duke University and Stanford University were taken down as well.

According to Fast Company, last year Microsoft’s president, Brad Smith, spoke about fears of such technology creeping into everyday life and eroding our civil liberties along the way. The company also turned down a facial recognition contract with California law enforcement on human rights grounds. Yet while Microsoft may claim it wants facial recognition regulated, it may also want to put the technology to commercial use, as in its partnership with the grocery chain Kroger, and it has eluded privacy-related scrutiny for years.

Although the database has been deleted, it is still available to the researchers and companies that previously downloaded it. Once a dataset has been posted online and people have downloaded it, copies continue to exist with them.
https://twitter.com/jacksohne/status/1136975380387172355

The dataset is also now completely free of the licensing terms, rules, and controls that Microsoft previously imposed. People are posting it on GitHub and hosting the files on Dropbox and Baidu Cloud, and there is no way to stop them from continuing to share it and use it for their own purposes.

https://twitter.com/sedyst/status/1136735995284660224

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity
Microsoft open sources SPTAG algorithm to make Bing smarter!
Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users