
Tech News


Microsoft Teams Rooms gets a new content camera feature for whiteboard presentations

Amrata Joshi
10 Sep 2019
2 min read
Last month, the team at Microsoft introduced a content camera feature for Microsoft Teams Rooms. With this feature, users can intelligently include a traditional whiteboard in their Teams meetings.

https://twitter.com/randychapman/status/1169884205141987332

The Microsoft Teams content camera uses artificial intelligence to detect, crop and frame the traditional in-room whiteboard and share its content with the participants in the meeting. Interestingly, the new feature makes a presenter standing in front of the whiteboard translucent, so that remote participants can see the content right through them.

https://youtu.be/1XvgH2rNpmk

IT administrators can connect certified content cameras to the USB ports of Microsoft Teams Rooms systems. Once the content camera is connected to the room, the admin can select it as an input from the Device Settings menu. Currently, Crestron and Logitech cameras are available and certified for use with the Teams content camera functionality, and the team at Microsoft has announced that it will be adding more cameras soon. Microsoft partners are also offering unique mounting systems so that users can fit their cameras into any meeting space. The company announced that ceiling tile and digital signal processor (DSP) options are also certified for use in meeting rooms.

Users seem to be excited about this news. A user commented on Hacker News, “I don't see myself using this, but its really cool. The whole "see through presenter" thing is awesome. Somewhat unrelated, but it would be really cool to see that done using AR glasses.”

https://twitter.com/AndrewMorpeth/status/1169907577905270784
https://twitter.com/ramsacDan/status/1170595795873292288

To know more about this news, check out the official post.

Other interesting news in programming

- Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
- LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more
- Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift


Amazon employees plan to walkout for climate change during the Sept 20th Global Climate Strike

Fatema Patrawala
10 Sep 2019
5 min read
Over the past year, tech workers across the country have walked out to protest a wide range of issues. Google employees objected to the way sexual harassment claims were handled. Riot Games workers demonstrated against forced arbitration. And Wayfair staff left their desks after learning that the retailer profited from migrant detention centers run by US Immigration and Customs Enforcement. Now it's Amazon.

More than a thousand Amazon employees plan to walk out of work later this month as part of a global strike for climate change action. Amazon Employees For Climate Justice, a group of Amazon workers trying to push their company to take greater action on climate change, organized an internal petition for the Sept. 20 walkout, the group confirmed in a Medium post yesterday. Both Wired and Vice reported on the planned walkout.

Most of the participants so far are from Amazon's Seattle headquarters, with many taking planned vacation days to participate, according to Wired. “Other Amazon employees in our offices around the world are also walking out. Not just Seattle,” added Emily Cunningham, one of the Amazon employees participating in the walkout. “This will be the first time that Amazon workers at corporate offices are walking out, and it’s the first walkout in the tech industry over the climate crisis,” the official press release read.

Amazon employees' walkout will be part of the Global Climate Strike

The Amazon employees' walkout will be part of the "Global Climate Strike," a student-led movement to be held Sept. 20 to 27, sparked by climate activist Greta Thunberg, a 16-year-old from Sweden. The demonstrations are being held during the United Nations Climate Action Summit on Sept. 23.

Amazon Employees For Climate Justice is demanding that the company stop donating to politicians and lobbying groups who deny the existence of climate change, restrict its work with oil and gas companies, and cut its carbon emissions to zero by 2030.

https://twitter.com/AMZNforClimate/status/1171077286382243840

Bobby Gordon, an Amazon finance manager in Seattle who joined the climate group a few months ago, said he wanted to take part in the walkout because he and his wife plan to start a family soon. “I'm really worried about the planet that will be there for them,” he said about his future children. “As a future parent, I want to do everything I can to ensure my children have a good life. And so I have to avert the climate crisis any way I can.” He added that Amazon has been receptive to his group's work so far and has talked to them about the work it's already been doing.

An Amazon spokesperson said to Wired in an email statement, “Amazon employees receive an allotment of paid time off every year, and they can use this time as they wish.”

"Playing a significant role in helping to reduce the sources of human-induced climate change is an important commitment for Amazon," an Amazon spokesperson said. "We have dedicated sustainability teams who have been working for years on initiatives to reduce our environmental impact." Amazon earlier this year announced a new program called Shipment Zero, with a plan to make 50% of all Amazon shipments net zero carbon by 2030.

Other tech companies joining the Global Climate Strike

The group ‘Microsoft Workers 4 Good’ on Monday said on Twitter that “they will be joining millions of people around the world by participating in the youth-led Global Climate Strike on September 20th to demand an end to the age of fossil fuels.”

https://twitter.com/MsWorkers4/status/1171041815073628162

Google workers announced on Sept. 14 that they will be joining Amazon and Microsoft employees, tech workers, and students for the climate strike on Sept. 20.

https://twitter.com/GoogleWAC/status/1172963761440690176

Amazon employees' calls for climate action in the past

While this walkout is tied to a broader climate strike, it serves as yet another example of Amazon employees speaking up for changes at their company. The Amazon climate group earlier this year called for more action from Amazon during its annual shareholder meeting, but the shareholders rejected all 11 resolutions, including the one on climate change. Though the resolution ultimately didn’t pass, it helped to raise public awareness and build support among employees inside Amazon. The group also offered support to workers at the Prime Day warehouse strike in Minnesota demanding safe working conditions and fair wages.

Other internal groups at Amazon include the Whole Worker group, made up of Whole Foods employees pushing the company to improve their working conditions, and ‘We Won't Build It,’ a group of engineers fighting against Amazon's connections with Palantir and US ICE (Immigration and Customs Enforcement).

What’s new in tech this week?

- Google faces multiple scrutiny from the Irish DPC, FTC, and an antitrust probe by US state attorneys over its data collection and advertising practices
- The Tor Project on browser fingerprinting and how it is taking a stand against it
- Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case


FaunaDB brings its serverless database to Netlify to help developers create apps

Vincy Davis
10 Sep 2019
3 min read
Today, Fauna announced the integration of its serverless cloud database FaunaDB with Netlify to help developers build and deploy modern stateful serverless apps. As part of this new integration, FaunaDB will also integrate with Netlify OAuth and provide users with single sign-on (SSO) access to their database through the FaunaDB Cloud console or Shell.

The FaunaDB integration with Netlify will increase users' productivity, as data will now be immediately available without any additional provisioning steps. This has been a long-standing demand from the JAMstack community, whose users found the previous process inconvenient.

The CEO of Fauna, Evan Weaver, says, “This integration is significant for developers, who by and large are moving to serverless platforms to build next-generation applications, yet many of them don’t have experience building and provisioning databases. Users also benefit because they can now build an app with a full-featured version of FaunaDB and easily deploy it on the Netlify platform.”

Read Also: FaunaDB now offers a “Managed Serverless” service combining Fauna’s serverless database with a managed solution

On the other hand, through this end-to-end integration, Netlify users will also be able to create serverless database instances from within the Netlify platform. They can also log in to the FaunaDB Cloud Console with their Netlify account credentials.

Matt Biilmann, the Netlify CEO, says, “Now our users can use FaunaDB as a stateful backend for their apps with no additional provisioning. They can also test and iterate within its generous free tier, and transparently scale as the project achieves critical mass. The new FaunaDB Add-on is a great enhancement to our platform.”

How will users benefit from the FaunaDB add-on for Netlify?

- Users will be able to instantly create a FaunaDB database instance from within the Netlify development environment.
- Data can be queried via GraphQL, or via the Fauna Query Language (FQL) for complex functions; a short query sketch follows the links below.
- Data can be accessed using relational, document, graph and temporal models.
- The full range of FaunaDB’s capabilities, such as built-in authentication, transparent scalability and multi-tenancy, is available to users.
- Existing Netlify credentials can be used, via OAuth, to log in directly to a FaunaDB account.
- Database instances created by the add-on can be managed through the FaunaDB Cloud Console and Shell for easy use.

Read Also: Fauna announces Jepsen results for FaunaDB 2.5.4 and 2.6.0

Latest news in Data

- Google open sources their differential privacy library to help protect user’s private data
- What can you expect at NeurIPS 2019?
- Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case
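For readers who want to see what FQL usage looks like in practice, here is a minimal sketch using Fauna's Python driver (the same operations exist in the JavaScript driver typically used from Netlify functions). The secret, collection name, and document data are hypothetical placeholders, not part of Fauna's or Netlify's announcement:

```python
from faunadb import query as q
from faunadb.client import FaunaClient

# Hypothetical secret; with the Netlify add-on this would be provisioned
# for the site and exposed to functions as an environment variable.
client = FaunaClient(secret="YOUR_FAUNA_SECRET")

# Create a document in a hypothetical "posts" collection...
new_post = client.query(
    q.create(q.collection("posts"), {"data": {"title": "Hello, Netlify"}})
)

# ...and read it back by its reference.
fetched = client.query(q.get(new_post["ref"]))
print(fetched["data"]["title"])
```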


Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies

Amrata Joshi
10 Sep 2019
3 min read
Today, the team at Kong, the creators of the API and service lifecycle management platform for modern architectures, announced the release of Kuma, a new open-source project. Kuma is based on the open-source Envoy proxy and addresses the limitations of first-generation service mesh technologies by seamlessly managing services on the network. The first-generation meshes didn't have a mature control plane, and when they later provided one, it was hard to deploy and not easy to use. Kuma is easy to use and enables rapid adoption of mesh.

Also Read: Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Features of Kuma

- Runs on all platforms: Kuma can run on any platform, including Kubernetes, containers, virtual machines, and legacy environments. It also includes a fast data plane as well as an advanced control plane that makes it easier to use.
- It is reliable: The initial service mesh solutions were not flexible and were difficult to use. Kuma ensures reliability by automating the process of securing the underlying network.
- Support for all environments: Kuma supports all the environments in an organization, so existing applications can still be used in their traditional environments. This provides comprehensive coverage across an organization.
- Couples a fast data plane with a control plane: Kuma couples a fast data plane with a control plane that helps users set permissions, configure routing rules and expose metrics with just a few commands.
- Tracing and logging: Kuma helps users implement tracing and logging and analyze metrics for rapid debugging.
- Routing and control: Kuma provides traffic control capabilities, including circuit breakers and health checks, in order to enhance L4 (Layer 4) routing.

Marco Palladino, CTO and co-founder of Kong, said, “We now have more microservices talking to each other and connectivity between them is the most unreliable piece: prone to failures, insecure and hard to observe.”

Palladino further added, “It was important for us to make Kuma very easy to get started with on both Kubernetes and VM environments, so developers can start using service mesh immediately even if their organization hasn’t fully moved to Kubernetes yet, providing a smooth path to containerized applications and to Kubernetes itself. We are thrilled to be open-sourcing Kuma and extending the adoption of Envoy, and we will continue to contribute back to the Envoy project like we have done in the past. Just as Kong transformed and modernized API Gateways with open-source Kong, we are now doing that for service mesh with Kuma.”

The Kuma platform will be on display during the second annual Kong Summit, which is to be held on October 2-3, 2019.

Other interesting news in Cloud and Networking

- Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
- VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
- The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models


Oracle introduces patch series to add eBPF support for GCC

Amrata Joshi
10 Sep 2019
4 min read
Yesterday, the team at Oracle introduced a patch series that brings a port of GCC to eBPF (extended Berkeley Packet Filter), a virtual machine that resides in the Linux kernel. Together with binutils (binary tools) support, this port can be used for developing compiled eBPF applications.

eBPF was initially used for user-level packet capture and filtering; it now serves as a general-purpose in-kernel infrastructure for non-networking purposes as well. Since May, Oracle has been planning to introduce an eBPF back end in GCC 10 to make the GNU compiler target the general-purpose in-kernel virtual machine. Oracle's interest in eBPF support for GCC is part of the company's efforts toward improving DTrace on Linux.

As a compilation target, eBPF is unusual because of the restrictions imposed by the kernel verifier and the security-driven design of the architecture. Currently, the back end issues an error whenever an eBPF restriction is violated. This increases the chances that the resulting objects will be accepted by the kernel verifier, hence shortening the development cycle.

How will the patch series support GCC?

- The first patch in the series updates config.guess and config.sub from the 'config' upstream project to recognize bpf-*-* triplets.
- The second fixes an integrity check in opt-functions.awk.
- The third annotates multiple tests in the gcc.c-torture/compile test suite.
- The fourth introduces a new target flag named indirect_call and annotates the tests in gcc.c-torture/compile.
- The fifth adds the new GCC port.
- The sixth adds a libgcc port for eBPF; currently, it addresses the limitations imposed by the target by eliminating a few functions in libgcc2 whose default implementations exceed the eBPF stack limit.
- The seventh, eighth and ninth patches deal with testing the new port. The gcc.target test suite has been extended with eBPF-specific tests that cover the backend-specific built-in functions as well as diagnostics.
- The tenth adds documentation updates, including information on the new command-line options and compiler built-ins, to the GCC manual.

Jose E. Marchesi, software engineer at Oracle, writes, “Finally, the last patch adds myself as the maintainer of the BPF port. I personally commit to evolve and maintain the port for as long as necessary, and to find a suitable replacement in case I have to step down for whatever reason.”

Other improvements expected in the port

Currently, the port supports only a subset of C; in the future, the team might add more languages as the eBPF kernel verifier gets smarter. Dynamic stack allocation (alloca and VLAs) is achieved by using a normal general register, %r9, as a pseudo stack pointer, but this has the disadvantage of making the register "fixed" and therefore unavailable for general register allocation.

The team is planning further additions to the port so that it can translate more C: CO-RE capabilities (compile once, run everywhere), generation of BTF, and so on. The team is also working on simulator and GDB support, to make it possible to emulate the different kernel contexts in which eBPF programs execute. Once simulator support is achieved, a suitable board description will be added to DejaGnu, the GNU test framework, so the GCC test suites can run on it.

With this work there will be two C compilers that generate eBPF, so interoperability between programs generated by the two compilers becomes a major concern for the team, a task that will require communication between the compiler and kernel communities.

Users on Hacker News seem to be excited about this news. A user commented, “This is very exciting! Nice work to the team that's doing this. I've been waiting to dive into eBPF until the tools mature a bit, so it's great to see eBPF support landing in GCC.”

To know more about this news, check out the official mail thread.

Other interesting news in programming

- Core Python team confirms sunsetting Python 2 on January 1, 2020
- Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
- Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more


Core Python team confirms sunsetting Python 2 on January 1, 2020

Vincy Davis
10 Sep 2019
3 min read
Yesterday, the team behind Python posted details about the sunsetting of Python 2. As announced before, after January 1, 2020, Python 2 will not be maintained by the Python team. This means that it will no longer receive new features, and it will not be fixed even if a security problem is found in it.

https://twitter.com/gvanrossum/status/1170949978036084736

Why is Python 2 retiring?

In the detailed post, the Python team explains that the huge alterations needed in Python 2 led to the birth of Python 3 in 2006. To keep users happy, the Python team kept improving and publishing both versions together. However, due to changes that Python 2 couldn’t handle and the scarcity of time needed to improve Python 3 faster, the Python team decided to sunset the second version. The team says, “So, in 2008, we announced that we would sunset Python 2 in 2015, and asked people to upgrade before then. Some did, but many did not. So, in 2014, we extended that sunset till 2020.”

The Python team has clearly stated that from January 1, 2020 onwards, they will not upgrade or improve the second version of Python even if a fatal security problem crops up in it. Their advice to Python 2 users is to switch to Python 3 using the official porting guide, as the older version will not be supported by many tools in the future. There is also a readiness graph tracking Python 3 support among the 360 most popular Python packages, and users can check out ‘Can I Use Python 3?’ to find out which of their dependencies still need to be upgraded to Python 3.

Python 3 adoption has begun

As the end date for Python 2 was decided well in advance, many projects have already dropped support for Python 2 or are supporting both Python 2 and 3 for now. Two months ago, NumPy, the scientific computing library for the Python programming language, officially dropped support for Python 2.7 in its latest version, NumPy 1.17.0, which only supports Python versions 3.5 – 3.7. Earlier this year, pandas 0.24 stopped supporting Python 2. Pandas maintainer Jeff Reback had said, “It's 2019 and Python 2 is slowly trickling out of the PyData stack.”

However, not all projects are fully on board yet, and there have also been efforts to keep Python 2 alive. In August this year, PyPy announced that they do not plan to deprecate Python 2.7 support as long as PyPy exists.

https://twitter.com/pypyproject/status/1160209907079176192

Many users are happy to say goodbye to the second version of Python in favor of building towards a long-term vision.

https://twitter.com/mkennedy/status/1171132063220502528
https://twitter.com/MeskinDaniel/status/1171244860386480129

A user on Hacker News comments, “In 2015, there was no way I could have moved to Python 3. There were too many libraries I depended on that hadn't ported yet. In 2019, I feel pretty confident about using Python 3, having used it exclusively for about 18 months now. For my personal use case at least, this timeline worked out well for me. Hopefully it works out for most everyone. I can't imagine they made this decision without at least some data backing it up.”

Head over to the Python website for more details about this news.

Latest news in Python

- Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
- Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
- Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”
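To make the scale of the incompatibilities concrete, here is a short sketch of a few of the best-known Python 2 to 3 differences (illustrative only; the official porting guide covers the full list):

```python
# Python 2 (unmaintained after January 1, 2020):
#     print "hello"      # print was a statement
#     assert 5 / 2 == 2  # / between ints did floor division
#     type("héllo")      # str was a byte string
#
# The same ideas in Python 3:
print("hello")        # print is a function
assert 5 / 2 == 2.5   # / is true division; use // for floor division
assert 5 // 2 == 2
s = "héllo"           # str is Unicode text by default
assert isinstance(s.encode("utf-8"), bytes)  # bytes are a separate type
```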

Google faces multiple scrutiny from the Irish DPC, FTC, and an antitrust probe by US state attorneys over its data collection and advertising practices

Savia Lobo
09 Sep 2019
5 min read
Google has been under scrutiny for its questionable data collection and advertising practices in recent times. Google has previously been hit with three antitrust fines by the EU, for a total antitrust bill of around $9.3 billion to date.

Today, more than 40 state attorneys general will launch a separate antitrust investigation targeting Google and its advertising practices. Last week, evidence from an investigation into how Google uses secret web pages to collect user data and expose this information to targeted advertisers was submitted to the Irish Data Protection Commission, the main watchdog over Google in the European Union. Also, following an investigation launched into YouTube by the Federal Trade Commission earlier this year, Google and YouTube have been fined $170 million to settle allegations that they broke federal law by collecting children's personal information via YouTube Kids.

Over 40 state attorneys general open antitrust investigations into Google

The state watchdogs are initiating antitrust investigations against Silicon Valley’s largest companies, including Google and Facebook, probing whether they undermine rivals and harm consumers, according to The Washington Post. Today, more than 40 attorneys general are expected to launch a separate antitrust investigation targeting Google and its advertising practices, with the announcement expected at the US Supreme Court. Details of this investigation are unknown; however, according to The Wall Street Journal, the attorneys will focus on Google’s impact on digital advertising markets.

On Friday, New York’s attorney general, Letitia James, also announced that the attorneys general of eight states and the District of Columbia are launching an antitrust investigation into Facebook.

https://twitter.com/NewYorkStateAG/status/1169942938023071744

Keith Ellison, attorney general of Minnesota, who is signing on to the effort to probe Google, said, “The growth of these [tech] companies has outpaced our ability to regulate them in a way that enhances competition.”

We will update this space once the antitrust investigations into Google are initiated.

Irish DPC to investigate whether Google secretly feeds users’ data to advertisers

An investigation done by Johnny Ryan, chief policy officer for the web browser Brave, revealed that Google used hidden secret web pages to collect user data and create profiles, exposing users' personal information to targeted advertisers.

In May, the DPC opened an investigation into Google's Authorized Buyers real-time bidding (RTB) ad exchange, which connects ad buyers with millions of websites selling their inventory. Ryan filed a GDPR complaint in Ireland over Google's RTB system in 2018, arguing that Google and ad companies expose personal data during RTB bid requests on sites that use Google's behavioral advertising.

In his recent evidence, Ryan discovered the secret web pages when he monitored the trading of his personal data on Google’s ad exchange, Authorized Buyers. He found that Google “had labelled him with an identifying tracker that it fed to third-party companies that logged on to a hidden web page. The page showed no content but had a unique address that linked it to Mr Ryan’s browsing activity,” The Financial Times reports. Google allowed the advertisers to combine information about him through hidden "push" pages, which are not visible to web users and could lead to them more easily identifying people online, the Telegraph said.

"This constant leaking of personal data, that seems to be happening constantly, needs to be urgently addressed by regulators," Ryan told the Telegraph. He said that “the data compiled by users can then be shared by companies without Google's knowledge, allowing them to more easily build and keep virtual profiles of Google's users without their consent,” the Telegraph further reported.

To know more about this story, read our detailed coverage of Brave’s findings: “Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case”.

FTC scrutiny leads to Google and YouTube paying $170 million penalty for violating children’s online privacy

In June this year, the Federal Trade Commission (FTC) launched an investigation into YouTube over mishandling children’s private data. The investigation was triggered by complaints from children’s health and privacy groups, which said YouTube improperly collected data from kids using the video service, thus violating the Children’s Online Privacy Protection Act, a 1998 law known as COPPA that forbids the tracking and targeting of users younger than 13.

Also Read: FTC to investigate YouTube over mishandling children’s data privacy

On September 4, the FTC said that YouTube and its parent company, Google, will pay a penalty of $170 million to settle the allegations. YouTube said in a statement on Wednesday last week that in four months it would begin treating all data collected from people watching children’s content as if it came from a child. “This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service,” YouTube said on its blog.

FTC Chairman Joe Simons said, “No other company in America is subject to these types of requirements and they will impose significant costs on YouTube.” According to Reuters, “FTC’s Bureau of Consumer Protection director Andrew Smith told reporters that the $170 million settlement was based on revenues from data collected, times a multiplier.”

New York Attorney General Letitia James said, “Google and YouTube knowingly and illegally monitored, tracked, and served targeted ads to young children just to keep advertising dollars rolling in.”

In a separate statement, Simons and FTC Commissioner Christine Wilson said the settlement will require Google and YouTube to create a system "through which content creators must self-designate if they are child-directed. This obligation exceeds what any third party in the marketplace currently is required to do."

To know more about this news in detail, read the FTC and New York Attorney General’s statements.

Other interesting news

- Google open sources their differential privacy library to help protect user’s private data
- What can you expect at NeurIPS 2019?
- Key Skills every Database Programmer should have


Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

Savia Lobo
09 Sep 2019
3 min read
Two days ago, on September 7, Wikipedia confirmed in an official statement that it had been hit by a malicious attack the day before, causing it to go offline in many countries at irregular intervals. The “free online encyclopedia” said the attack was ongoing and that its Site Reliability Engineering team was working to curb the attack and restore access to the site.

According to Downdetector, users across Europe and parts of the Middle East experienced outages shortly before 7 pm BST on September 6.

Also Read: Four versions of Wikipedia goes offline in a protest against EU copyright Directive which will affect free speech online

The UK was one of the first countries to report slow and choppy use of the site. This was followed by reports of the site being down in several other European countries, including Poland, France, Germany, and Italy.

[Outage map image. Source: Downdetector.com]

By Friday evening, 8:30 pm (ET), the attack had extended to an almost-total outage in the United States and other countries. During this time, no spokesperson was available for comment at the Wikimedia Foundation.

https://twitter.com/netblocks/status/1170157756579504128

On September 6, at 20:53 (UTC), Wikimedia Germany informed users by tweeting that a “massive and very broad” DDoS (Distributed Denial of Service) attack on the Wikimedia Foundation servers was making the website impossible to access for many users.

https://twitter.com/WikimediaDE/status/1170077481447186432

The official statement from the Wikimedia Foundation reads, “We condemn these sorts of attacks. They’re not just about taking Wikipedia offline. Takedown attacks threaten everyone’s fundamental rights to freely access and share information. We in the Wikimedia movement and Foundation are committed to protecting these rights for everyone.”

Cybersecurity researcher Baptiste Robert, known online as Elliot Anderson, wrote on Twitter, “A new skids band is in town. @UKDrillas claimed they are behind the DDOS attack of Wikipedia. You’ll never learn... Bragging on Twitter (or elsewhere) is the best way to get caught. I hope you run fast.”

https://twitter.com/fs0c131y/status/1170093562878472194
https://twitter.com/atoonk/status/1170400761722724354

To know more about this news in detail, read Wikipedia’s official statement.

Other interesting news in Security

- “Developers need to say no” – Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]
- CircleCI reports of a security breach and malicious database in a third-party vendor account
- Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports


Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks

Amrata Joshi
09 Sep 2019
3 min read
Last week, a vulnerability was found in all versions of Exim, a mail transfer agent (MTA), that when exploited can let attackers run malicious code with root privileges. According to the Exim team, all Exim servers running version 4.92.1 and earlier are vulnerable. On September 4, the team at Exim published a warning on the Openwall information security mailing list regarding the critical security flaw, and on Friday it released version 4.92.2 to address the vulnerability.

The vulnerability, tracked as CVE-2019-15846, was reported in July by a security researcher called Zerons. It allows attackers to take advantage of the TLS Server Name Indication and execute programs with root privileges on servers that accept TLS connections. An attacker can simply create a buffer overflow to gain access to a server running Exim. The bug doesn't depend on the TLS library used by the server; both GnuTLS and OpenSSL are affected.

Exim serves around 57% of all publicly reachable email servers on the internet. Initially designed for Unix systems, it is currently available for Linux and Microsoft Windows, and it also handles email in cPanel. Exim's advisory says, "In the default runtime configuration, this is exploitable with crafted ServerName Indication (SNI) data during a TLS negotiation."

Read Also: A year-old Webmin backdoor revealed at DEF CON 2019 allowed unauthenticated attackers to execute commands with root privileges on servers

Server owners can mitigate the issue by disabling TLS support for the Exim server, but that would expose email traffic in cleartext and make it vulnerable to sniffing attacks and interception. This mitigation plan is also more dangerous for Exim owners in the EU, since it might expose their companies to data leaks and subsequent GDPR fines. Exim installations do not have TLS support enabled by default, but the Exim instances that ship with Linux distros do have TLS enabled by default. Exim instances that ship with cPanel also support TLS by default, and cPanel staff have moved to integrate the Exim patch into a cPanel update that is already rolling out to customers.

Read Also: A vulnerability found in Jira Server and Data Center allows attackers to remotely execute code on systems

A similar vulnerability, CVE-2019-13917, was found in July; it impacted Exim 4.85 up to and including 4.92 and was patched with the release of 4.92.1. That vulnerability, too, would allow remote attackers to execute programs with root privileges. In June, the team at Exim patched CVE-2019-10149, a vulnerability called "Return of the Wizard" that allowed attackers to run malicious code with root privileges on remote Exim servers. Microsoft also issued a warning in June regarding a Linux worm that was targeting Azure Linux VMs running vulnerable Exim versions.

Most users are sceptical of the mitigation plan, as they are not comfortable disabling TLS. A user commented on Hacker News, “No kidding? Turning off TLS isn't an option at many installations. It's gotta work.”
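Administrators deciding whether a host needs the 4.92.2 update first have to find out which Exim version it runs. As a minimal sketch (the hostname is a placeholder; many Exim installations advertise their version in the SMTP greeting, though the banner can be suppressed or customized), a quick check from Python might look like:

```python
import smtplib

HOST = "mail.example.com"  # placeholder; point this at your own server

client = smtplib.SMTP()
code, banner = client.connect(HOST, 25)  # reads the 220 greeting
client.quit()

# A typical greeting looks like b"mail.example.com ESMTP Exim 4.92.1 ...";
# an absent version string proves nothing, since banners can be changed.
print(code, banner.decode(errors="replace"))
```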
Other interesting news in Security

- CircleCI reports of a security breach and malicious database in a third-party vendor account
- Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports
- Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks


A bug found in Glibc limits modern SIMD instructions to only Intel, inhibiting performance of AMD and other CPUs

Amrata Joshi
09 Sep 2019
4 min read
Yesterday, Mingye Wang reported a bug in Glibc, the GNU C Library. According to him, the dl_platform detection, which is used for dispatching SIMD (single instruction, multiple data) libraries, performs a "cripple AMD" check in the Glibc sysdeps.

Explaining the bug in detail, Wang writes that in 2017 Glibc gained the capability to transparently load libraries for specific CPU families with certain combinations of SIMD extensions, to benefit x86 users. However, this implementation limits two "good" sets of modern SIMD instructions to Intel processors only, preventing competitor CPUs with equivalent capabilities from performing fully, something that should not happen in any free software package.

He further added that this bug looks like an implementation of Intel’s ‘cripple AMD’ bug, which was reported in 2009, hence the name. According to its author, Agner Fog, “software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version.”

A user commented on Hacker News, “Hm, is this really "crippling" AMD? Seems more like Intel submitted a performance patch that is only enabled for Intel processors, but could be extended to support AMD too. There's a moral difference. It is wrong to intentionally degrade the performance of your competitors. It is not wrong to not do something that benefits others.”

Mingye Wang writes, “The crux of the problem lies in the `(cpu_features->kind == arch_kind_intel)` (LHS now renamed cpu_features->basic.kind) comparison that surrounds the entire x86_64 case. Although AMD has not yet made any processors with AVX512, their newer processors (Zen -- Epyc, Ryzen) should at least satisfy the haswell test case.” According to Wang, glibc should remove the dl_platform vendor check and let processors qualify based on their feature flags, as the toy sketch below illustrates.
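The following is a toy Python sketch of the dispatch logic at issue; it is not glibc's actual code (which is C and tracks considerably more state), just the shape of the bug: gating the optimized "haswell" library directory on the vendor ID locks out non-Intel CPUs that expose the very same feature flags.

```python
# Feature set the "haswell" platform subdirectory is documented to require.
HASWELL_FLAGS = {"bmi1", "bmi2", "lzcnt", "movbe", "popcnt", "avx2", "fma"}

def platform_subdir_buggy(vendor: str, flags: set) -> str:
    # Reported behaviour: non-Intel CPUs never qualify, flags or not.
    if vendor == "GenuineIntel" and HASWELL_FLAGS <= flags:
        return "haswell"
    return "generic"

def platform_subdir_fixed(vendor: str, flags: set) -> str:
    # Wang's suggestion: decide on feature flags alone.
    return "haswell" if HASWELL_FLAGS <= flags else "generic"

# A hypothetical AMD Zen CPU exposing the full haswell feature set.
zen_flags = {"bmi1", "bmi2", "lzcnt", "movbe", "popcnt", "avx2", "fma"}
assert platform_subdir_buggy("AuthenticAMD", zen_flags) == "generic"  # the bug
assert platform_subdir_fixed("AuthenticAMD", zen_flags) == "haswell"
```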
At 07:15:15 UTC, the bug page was updated to note that the issue had been resolved as a duplicate of 2018's bug 23249, in which Epyc and other current AMD CPUs could not select the "haswell" platform subdirectory. That bug was reported by Allan Jensen, who wrote, “Recently a "haswell" sub-arch was introduced to be similar to the old i686 subarch for x86. It is documented as requiring BMI1, BMI2, LZCNT, MOVBE, POPCNT, AVX2 and FMA, but undocumented also checks the CPU is an Intel CPU before using the faster paths. I would suggest glibc fixes that before it becomes public knowledge.”

Florian Weimer of Red Hat writes, “We really need feedback from AMD for this change, and it has been difficult for us to talk to engineers there. If you have contacts there, please encourage them to reach out to Red Hat Engineer Partner Management via their own channels (or contact me directly). I agree that this situation is unfortunate, and that AMD customers may not get the best possible performance as the result.”

Weimer further added, “The "haswell" platform subdirectory is somewhat ill-defined, see bug 24080. I don't think current AMD CPUs implement the ERMS feature, which Intel assumes is part of the "haswell" definition. This bug has been marked as a duplicate of bug 23249.”

A few users are sceptical about this news and suspect a planned conspiracy behind the bug. A user commented on Hacker News, “Could this be a legitimate unintended consequence of the pull request or some new dirty pool tactic? Either way I agree with Mingye Wang's assessment, this kind of thing cannot be allowed to get into the source tree. Hopefully AMD will increase their Linux activities with their new bigger market share and income.”

To know more about this news, check out the post on Sourceware Bugzilla.

Other interesting news in Security

- CircleCI reports of a security breach and malicious database in a third-party vendor account
- Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports
- Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks

Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power

Savia Lobo
06 Sep 2019
4 min read
Two days ago, Samsung announced a new prototype key-value Solid State Drive (SSD) that is compatible with an industry-standard API for key-value storage devices. The key-value SSD prototype moves the storage workload from server CPUs into the SSD itself, without any supporting device. This will simplify software programming and make more effective use of storage resources in IT applications. The new prototype features extensive scalability, improved durability, improved software efficiency, improved system-level performance, and reduced write amplification (WAF).

Applications built on software-based KV stores need to handle garbage collection using a method called compaction. This hurts system performance, as both the host CPU and the SSD work to clear away the garbage. With a KV SSD, garbage collection can be handled entirely in the drive, freeing the CPU for computational work. “By moving these operations to the SSD in a straightforward, standardized manner, KV SSDs will represent a major upgrade in the way that storage is accessed in the future,” the press release states.

Hangu Sohn, Vice President of NAND Product Planning, Samsung Electronics, said in the press release, “Our KV SSD prototype is leading the industry into a new realm of standardized next-generation SSDs, one that we anticipate will go a long way in optimizing the efficiency of network storage and extending the processing power of the server CPUs to which they’re connected.”

Also Read: Samsung speeds up on-device AI processing with a 4x lighter and 8x faster algorithm

Samsung’s KV SSD prototype is based on a new open standard for a Key-Value Application Programming Interface (KV API) that was recently approved by the Storage Networking Industry Association (SNIA). Michael Oros, SNIA Executive Director, said, “The SNIA KV API specification, which provides an industry-wide interface between an application and a Key Value SSD, paves the way for widespread industry adoption of a standardized KV API protocol.”

Hugo Patterson, Co-founder and Chief Scientist at Datrium, said, “SNIA’s KV API is enabling a new generation of architectures for shared storage that is high-performance and scalable. Cloud object stores have shown the power of KV for scaling shared storage, but they fall short for data-intensive applications demanding low latency.”

“The KV API has the potential to get the server out of the way in becoming the standard-bearer for data-intensive applications, and Samsung’s KV SSD is a groundbreaking step towards this future,” Patterson added.

A user on Hacker News writes, “Would be interesting if this evolves into a full filesystem implementation in hardware (they talk about Object Drive but aren't focused on that yet). Some interesting future possibilities: - A cross-platform filesystem that you could read/write from Windows, macOS, Linux, iOS, Android etc. Imagine having a single disk that could boot any computer operating system without having to manage partitions and boot records! - Significantly improved filesystem performance as it's implemented in hardware. - Better guarantees of write flushing (as SSD can include RAM + tiny battery) that translate into higher level filesystem objects. You could say, writeFile(key, data, flush_full, completion) and receive a callback when the file is on disk. All independent of the OS or kernel version you're running on. - Native async support is a huge win. Already the performance is looking insane. Would love to get away from the OS dictating filesystem choice and performance.”
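To make the shift in programming model concrete, here is a purely hypothetical Python sketch (the real SNIA KV API is a C interface and its names differ): with a conventional block device, the host maintains the key-to-location index and handles compaction itself; with a KV SSD, the application hands the device a key and a value, and the drive owns placement and garbage collection.

```python
class HostManagedStore:
    """Block-device model: the host CPU owns indexing and compaction."""
    def __init__(self):
        self.log = bytearray()
        self.index = {}  # key -> (offset, length), maintained on the CPU

    def put(self, key: str, value: bytes) -> None:
        self.index[key] = (len(self.log), len(value))
        self.log += value  # stale versions pile up; compaction is our job

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]
        return bytes(self.log[offset:offset + length])


class KVDevice:
    """KV-SSD model: indexing and garbage collection happen in the drive."""
    def __init__(self):
        self._device = {}  # stands in for the SSD's internal mapping

    def put(self, key: str, value: bytes) -> None:
        self._device[key] = value  # no host-side index, no host compaction

    def get(self, key: str) -> bytes:
        return self._device[key]
```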
To know more about this news in detail, read the report on the Samsung Key Value SSD.

Other interesting news in Hardware

- Red Hat joins the RISC-V foundation as a Silver level member
- AMD competes with Intel by launching EPYC Rome, world’s first 7 nm chip for data centers, luring in Twitter and Google
- Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity


The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting?

Browser fingerprinting is the systematic collection of information about a remote computing device for the purpose of identification. There are several techniques through which a third party can build a “rich fingerprint,” including probing the availability of JavaScript or other client-side scripting languages, the User-Agent and Accept headers, the HTML5 Canvas element, and more.

Browser fingerprints may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think these attributes are too generic to identify a particular person. However, Panopticlick, a browser fingerprinting test website, reports that typically only 1 in 286,777 other browsers will share a given browser's fingerprint. Laperdrix shared an example fingerprint in his post (source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent, and advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
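As a rough illustration of why such attributes are identifying, consider hashing them together (a toy Python sketch, not how any real fingerprinting script is implemented): the more the combination of attributes varies across users, the more unique the resulting digest.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Serialize the attributes in a stable order and hash them; two
    # browsers collide only if every single attribute matches.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical values a fingerprinting script might observe.
print(fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080",
    "timezone": "UTC+02:00",
    "language": "en-US",
    "fonts": "Arial;Helvetica;Liberation Sans",
}))
```

This is also why the Tor defense described below works: if every user feeds identical values into such a function, the digest stops being identifying.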
Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats that browser fingerprinting poses. The Tor Browser, which goes by the tagline “anonymity online,” is designed to reduce online tracking and identification of users, and it takes a very simple approach to preventing the identification of users. “In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser,” Laperdrix wrote.

Many other changes have been made to the Tor Browser over the years to prevent the unique identification of users:

- Tor warns users when they maximize their browser window, as window size is one attribute that can be used to identify them.
- It has introduced default fallback fonts to prevent font and canvas fingerprinting.
- It has all the JavaScript clock sources and event timestamps set to a specific resolution, to prevent JavaScript from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution to preventing browser fingerprinting, Laperdrix wrote, “As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds.” As part of Google Summer of Code 2016, Laperdrix proposed developing the website, called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints. It runs a fingerprinting test suite and collects data from Tor Browsers to help developers design and test new fingerprinting protections. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, “The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online.”

There are a whole lot of modifications made under the hood to prevent browser fingerprinting, which you can check out using the “tbb-fingerprinting” tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser companies Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to only share a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix’s post on the Tor blog to know more in detail about browser fingerprinting.

Other news in Web

- JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
- Google Chrome 76 now supports native lazy-loading
- #Reactgate forces React leaders to confront the community’s toxic culture head on


Google open sources their differential privacy library to help protect user’s private data

Vincy Davis
06 Sep 2019
5 min read
Yesterday, underlining the importance of strong privacy protections, Google open-sourced a differential privacy library that it uses in its own core products. The approach is an end-to-end implementation of a differentially private query engine that is generic and scalable. Developers can use this library to build tools that work with aggregate data without revealing personally identifiable information.

According to Miguel Guevara, the product manager of privacy and data protection at Google, “Differentially-private data analysis is used by an organization to sort through the majority of their data and safeguard them in such a way that no individual’s data is distinguished or re-identified. This approach can be used for various purposes like focusing on features that can be particularly difficult to execute from scratch.”

Google's differential privacy library supports differentially private aggregations on databases, even when each individual can be associated with arbitrarily many rows. The company has been using differential privacy to build supporting features like “how busy a business is over the course of a day or how popular a particular restaurant’s dish is in Google Maps, and improve Google Fi,” says Guevara in the official blog post.

Google researchers have published their findings in a research paper, which describes a C++ library of ε-differentially private algorithms that can be used to produce aggregate statistics over numeric data sets containing private or sensitive information. The researchers have also provided a stochastic tester to check the correctness of the algorithms. One of the researchers explained the motivation behind the library on Twitter: “The main focus of the paper is to explain how to protect *users* with differential privacy, as opposed to individual records. So much of the existing literature implicitly assumes that each user is associated to only one record. It's rarely true in practice!”

Key features of the differential privacy library

- Statistical functions: Developers can use the library to compute Count, Sum, Mean, Variance, Standard deviation, and Order statistics (including min, max, and median).
- Rigorous testing: The library includes a manual and extensible stochastic tester. The stochastic framework generates databases and checks the differential privacy property on the results; it consists of four components: database generation, search procedure, output generation, and predicate verification. The researchers have open-sourced the ‘Stochastic Differential Privacy Model Checker library’ for reproducibility.
- Ready to use: The library works through a common Structured Query Language (SQL) extension, which can capture most data analysis tasks based on aggregations.
- Modular: The library can be extended with other functionality, such as additional mechanisms, aggregation functions, or privacy budget management. It can also be extended to handle end-to-end user-level differential privacy testing.
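At the heart of any such library is noise calibrated to a query's sensitivity. As a minimal sketch of the idea only (the ε-DP Laplace mechanism in Python; Google's library is C++ and does far more, including budget tracking and contribution bounding):

```python
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    """Return an epsilon-differentially-private version of a count.

    Adding or removing one row changes a count by at most 1 (sensitivity 1),
    so Laplace noise with scale 1/epsilon yields epsilon-DP.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
print(private_count(42, epsilon=0.1))
print(private_count(42, epsilon=1.0))
```

Note that this protects a single record per user; as the researchers stress, the harder problem their engine tackles is protecting users who each contribute arbitrarily many rows.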
How does the differentially private SQL work with bounded user contribution?

The Google researchers have implemented the differential privacy (DP) query engine as a collection of custom SQL aggregation operators and a query rewriter. The SQL engine tracks user ID metadata to invoke the DP query rewriter, and the query rewriter performs anonymization semantics validation and enforcement. The rewriter works in two steps: the first step validates the table subqueries, and the second step samples a fixed number of partially aggregated rows for each user, which limits the user contribution across partitions. Finally, the system computes a cross-user DP aggregation over each GROUP BY partition and limits the user contribution within partitions. The paper states, “Adjusting query semantics is necessary to ensure that, for each partition, the cross-user aggregations receive only one input row per user.”

In this way, the differentially private SQL system captures most data analysis tasks using aggregations. The mechanisms implemented in the system use a stochastic checker to prevent regressions and increase the quality of the privacy guarantee. Though the algorithms presented in the paper are simple, the researchers maintain that, based on the empirical evidence, the approach is useful, robust and scalable. In the future, the researchers hope to see usability studies testing the success of the methods. In addition, they see room for significant accuracy improvements using Gaussian noise and better composition theorems.

Many developers have appreciated that Google open-sourced its differential privacy library for others.

https://twitter.com/_rickkelly/status/1169605755898515457
https://twitter.com/mattcutts/status/1169753461468086273

In contrast, many people on Hacker News are not impressed with Google’s initiative and feel the company is misleading users with this announcement. One of the comments reads, “Fundamentally, Google's initiative on differential privacy is motivated by a desire to not lose data-based ad targeting while trying to hinder the real solution: Blocking data collection entirely and letting their business fail. In a world where Google is now hurting content creators and site owners more than it is helping them, I see no reason to help Google via differential privacy when outright blocking tracking data is a viable solution.”

You can check out the differential privacy GitHub page and the research paper for more information on Google’s research.

Latest Google News

- Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case
- Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security
- Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters
Many developers have appreciated that Google open sourced its differential privacy library.

https://twitter.com/_rickkelly/status/1169605755898515457
https://twitter.com/mattcutts/status/1169753461468086273

In contrast, many people on Hacker News are not impressed with Google's initiative and feel that the company is misleading users with this announcement. One of the comments reads, "Fundamentally, Google's initiative on differential privacy is motivated by a desire to not lose data-based ad targeting while trying to hinder the real solution: Blocking data collection entirely and letting their business fail. In a world where Google is now hurting content creators and site owners more than it is helping them, I see no reason to help Google via differential privacy when outright blocking tracking data is a viable solution."

You can check out the differential privacy GitHub page and the research paper for more information on Google's research.

Latest Google News

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security
Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters

Apple Music is now available on your web browser

Sugandha Lahoti
06 Sep 2019
2 min read
Yesterday, Apple brought its popular Apple Music streaming service to the web. Apple Music for the web has launched in public beta, and Apple Music subscribers can access it from anywhere in the world with their Apple ID. This is the first time that Apple Music has been officially offered on the web. You can visit beta.music.apple.com to get started; new users will be able to sign up for Apple Music through the website in the future.

Twitterati were shocked to the core as Apple Music came to the web, and appreciation tweets flooded the platform.

https://twitter.com/viticci/status/1169715776279973889
https://twitter.com/bzamayo/status/1169705640215945218
https://twitter.com/kylewagaman/status/1169878550523940865

Apple Music for the web lets you search for and play any song in the Apple Music catalog. If you have set up the Sync Library option on other devices, you can play tunes from your own library as well. All the main sections from the Apple Music app are available, including Library, Search, For You, Browse, and Radio. The player shares some features with the macOS Catalina Music app, for instance, adapting to a Dark Mode setting.

At WWDC, Apple announced that with macOS Catalina it is replacing iTunes with Apple Music. Once the new Music app launches on the Mac this fall, the web app will help Apple move away from supporting iTunes on Windows, since it is accessible to people unable to install the iTunes or Apple Music apps.

This is another of Apple's steps toward putting more focus on services. Building a web app is a sensible business move that will benefit Apple's current and future subscribers. Apple Music on the web also brings the company on par with Spotify, Apple's biggest competitor in the music sphere. In March this year, Spotify filed an EU antitrust complaint against Apple; Apple responded that Spotify's aim is to make more money off others' work.

More interesting news for Apple

Is Apple's 'Independent Repair Provider Program' a bid to avoid the 'Right To Repair' bill?
Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability

CircleCI reports a security breach and a malicious database in a third-party vendor account

Amrata Joshi
05 Sep 2019
4 min read
Last week, the team at CircleCI discovered a security breach incident involving CircleCI and a third-party analytics vendor. An attacker gained access to user data in the third-party vendor account, including usernames, email addresses associated with GitHub and Bitbucket, user IP addresses, and user-agent strings.

According to the CircleCI team, information about repository URLs and names, organization names, branch names, and repository owners might also have been exposed during the incident. CircleCI user secrets, build artifacts, source code, build logs, and other production data were not accessed, nor were auth tokens, password hashes, or credit card and financial information.

The security and engineering teams at CircleCI revoked the compromised user's access and launched an investigation. The official page reads, "CircleCI does not collect social security numbers or credit card information; therefore, it is highly unlikely that this incident would result in identity theft."

How did the security breach occur?

The incident took place on August 31 at 2:32 p.m. UTC and came to light when a CircleCI team member saw an email notification from one of their third-party analytics vendors, suggesting that some unusual activity was taking place in a particular vendor account. The employee forwarded the email to the security and engineering teams, after which the investigation started and steps were taken to contain the situation.

According to CircleCI's engineering team, the added database was not a CircleCI resource. The team removed the malicious database and the compromised user from the tool and reached out to the third-party vendor to collaborate on the investigation. At 2:43 p.m. UTC, the security teams started disabling the improperly accessed account, and by 3:00 p.m. UTC this process was complete. According to the team, customers who accessed the platform between June 30, 2019, and August 31, 2019, could possibly be affected.

The page further reads, "In the interest of transparency, we are notifying affected CircleCI users of the incident via email and will provide relevant updates on the FAQ page as they become available."

CircleCI will strengthen its platform's security

The team will continue to collaborate with the third-party vendor to pinpoint the exact vulnerability that caused the incident. The team will also review its policies for enforcing 2FA on third-party accounts and continue its transition to single sign-on (SSO) for all of its integrations. This year, the company also doubled the size of its security team.

The official post reads, "Our security team is taking steps to further enhance our security practices to protect our customers, and we are looking into engaging a third-party digital forensics firm to assist us in the investigation and further remediation efforts. While the investigation is ongoing, we believe the attacker poses no further risk at this time."

The page further reads, "However, this is no excuse for failing to adequately protect user data, and we would like to apologize to the affected users. We hope that our remediations and internal audits are able to prevent incidents like this and minimize exposures in the future.
We know that perfect security is an impossible goal, and while we can't promise that, we can promise to do better."

A few users on Hacker News discussed how CircleCI has taken users' data and security for granted by handing them over to a third party. One user commented, "What's sad about this is that CircleCI actually has a great product and is one of the nicest ways to do end to end automation for mobile development/releases. Having their pipeline in place actually feels quite liberating. The sad part is that they take this for granted and liberate all your data and security weaknesses too to unknown third parties for either a weird ideological reason about interoperability or a small marginal profit."

A few others appreciated the company's efforts to resolve the issue. Another user commented, "This is how you handle a security notification. Well done CircleCI, looking forward to the full postmortem."

What's new in security this week?

Over 47K Supermicro servers' BMCs are prone to USBAnywhere, a remote virtual media vulnerability
Cryptographic key of Facebook's Free Basics app has been compromised
Retadup, a malicious worm infecting 850k Windows machines, self-destructs in a joint effort by Avast and the French police