
Tech News


Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code

Amrata Joshi
19 Sep 2019
2 min read
Yesterday the team at Microsoft released Cascadia Code version 1909.16, the latest monospaced font for command-line applications like Windows Terminal and code editors like Visual Studio Code. The team first announced the font at the Microsoft Build conference this May. Cascadia Code is now publicly available on GitHub under the SIL Open Font License, and developers can contribute to the font there.

Cascadia Code supports programming ligatures, which combine character sequences into new glyphs while writing code. These ligatures make code more readable and user-friendly.

The name "Cascadia Code" comes from the Windows Terminal project: the codename for Windows Terminal was Cascadia before it was released.

https://twitter.com/cinnamon_msft/status/1130864977185632256

The official post reads, "As an homage to the Terminal, we liked the idea of naming the font after its codename. We added Code to the end of the font name to help indicate that this font was intended for programming. Specifically, it helps identify that it includes programming ligatures."

Users can install the Cascadia Code font from the GitHub repository's releases page or receive it in the next update of Windows Terminal. Users are excited about the news, pleased that even the official announcement blog post is set in Cascadia Code, and appreciative of the new support for programming ligatures.

https://twitter.com/bitbruder/status/1174432721038389253
https://twitter.com/singhkays/status/1174541216261652482
https://twitter.com/FiraCode/status/1174608467442720768

A user commented on HackerNews, "I really like this. Feels easy on the eyes (at least to me). I've used Fira Code for as long as I can remember, but going to give this a go!"

Other interesting news in programming

- DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
- Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
- Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements


Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

Vincy Davis
19 Sep 2019
4 min read
Yesterday, the Kubernetes team announced the availability of Kubernetes 1.16, which consists of 31 enhancements: 8 moving to stable, 8 in beta, and 15 in alpha. The release introduces a new alpha feature called Endpoint Slices, a scalable alternative to Endpoints resources. Kubernetes 1.16 also contains major enhancements like the general availability of custom resources, overhauled metrics, and volume extensions. The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs are deprecated in this version.

This is Kubernetes' third release this year. The previous version, Kubernetes 1.15, was released three months ago and brought features like extensibility around core Kubernetes APIs, and cluster lifecycle stability and usability improvements.

Introducing Endpoint Slices in Kubernetes 1.16

The main goal of Endpoint Slices is to increase the scalability of Kubernetes Services. With the existing Endpoints API, a single resource had to include all the network endpoints of a Service, making the corresponding Endpoints resources large and costly. Moreover, whenever an Endpoints resource was updated, every piece of code watching it required a full copy of the resource, a tedious process in a big cluster. With Endpoint Slices, the network endpoints for a Service are split into multiple resources, decreasing the amount of data required for updates; each Endpoint Slice is restricted to 100 endpoints by default (see the sketch below).

The other goal of Endpoint Slices is to provide an extensible and useful resource for a variety of implementations. Endpoint Slices will also provide flexibility for address types. The blog post states, "An initial use case for multiple addresses would be to support dual stack endpoints with both IPv4 and IPv6 addresses." As the feature is only in alpha, it is not enabled by default in Kubernetes 1.16.
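To illustrate the scaling idea rather than the real API, here is a minimal, hypothetical Python sketch of how a large endpoint set could be partitioned into slices of at most 100, so an update touches one small slice instead of one monolithic resource. The 100-endpoint cap mirrors the description above; the service name and addresses are invented for illustration.

```python
from typing import Dict, List

MAX_ENDPOINTS_PER_SLICE = 100  # default cap per Endpoint Slice, per the release notes

def slice_endpoints(endpoints: List[str]) -> Dict[str, List[str]]:
    """Partition a service's endpoints into slices of at most 100 entries.

    A watcher that cares about one endpoint only needs the (small) slice
    containing it, not a full copy of every endpoint for the service.
    """
    return {
        f"my-service-slice-{i // MAX_ENDPOINTS_PER_SLICE}":
            endpoints[i:i + MAX_ENDPOINTS_PER_SLICE]
        for i in range(0, len(endpoints), MAX_ENDPOINTS_PER_SLICE)
    }

# 250 endpoints end up in three slices: 100 + 100 + 50.
slices = slice_endpoints([f"10.0.{n // 256}.{n % 256}:8080" for n in range(250)])
print({name: len(eps) for name, eps in slices.items()})
```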
Major enhancements in Kubernetes 1.16

General availability of Custom Resources: With Kubernetes 1.16, CustomResourceDefinitions (CRDs) are generally available, under apiextensions.k8s.io/v1, integrating the lessons of API evolution in Kubernetes. CRDs, previously available in beta, are widely used as a Kubernetes extensibility mechanism. CRD.v1 supports 'defaulting' out of the box; combined with the CRD conversion mechanism, this makes it possible to build stable APIs over time. The blog post adds, "Updates to the CRD API won't end here. We have ideas for features like arbitrary subresources, API group migration, and maybe a more efficient serialization protocol, but the changes from here are expected to be optional and complementary in nature to what's already here in the GA API."

Overhauled metrics: In earlier versions, Kubernetes extensively used a global metrics registry to register exposed metrics. In this release, a new metrics registry has been implemented, making Kubernetes metrics more stable and transparent.

Volume extension: This release contains many enhancements to volumes and volume modifications. Volume resizing support in the Container Storage Interface (CSI) specs has moved to beta, allowing CSI spec volume plugins to be resizable.

Additional Windows enhancements in Kubernetes 1.16

- The workload identity option for Windows containers has moved to beta; it can now gain exclusive access to external resources.
- New alpha support is added to kubeadm, which can be used to prepare and add a Windows node to a cluster.
- New plugin support is introduced for CSI in alpha.

Interested users can download Kubernetes 1.16 on GitHub. Check out the Kubernetes blog page for more information.

Other interesting news in Kubernetes

- The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
- Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
- CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed


Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced the Aerial Informatics and Robotics Simulation (AirSim) platform. On Monday, the team shared how AirSim can be used to solve current challenges in the development of autonomous systems.

Microsoft AirSim and its features

Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems including autonomous cars, wheeled robots, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. Here is an example of a drone simulation in AirSim:

https://www.youtube.com/watch?v=-WfTr1-OBGQ&feature=youtu.be

AirSim was built to address two main problems developers face during the development of autonomous systems: first, the requirement of large datasets for training and testing the systems, and second, the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform offering varied training experiences, so that autonomous systems can be exposed to different scenarios before they are deployed in the real world.

"Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way," the team writes.

AirSim provides physically and visually realistic simulations by supporting hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can be easily extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols. Its extensible architecture also allows developers to quickly add custom autonomous system models and new sensors to the simulator.
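As a small taste of the platform-independent APIs mentioned above, here is a hedged sketch using AirSim's published Python client. It assumes `pip install airsim` and a running AirSim multirotor simulation; the exact calls and coordinates should be checked against the current API docs.

```python
import airsim

# Connect to a running AirSim simulation over its RPC interface.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)  # take programmatic control of the drone
client.armDisarm(True)

# Fly a short hop, then grab a camera frame for a training dataset.
# AirSim uses NED coordinates, so a negative z means "up".
client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, velocity=3).join()
responses = client.simGetImages(
    [airsim.ImageRequest("0", airsim.ImageType.Scene)]
)
print(f"captured {len(responses)} image(s)")

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```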
AirSim for tackling common challenges in autonomous systems development

In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge is to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager Ashish Kapoor shared how they used AirSim in this challenge.

Team Explorer and Microsoft used AirSim to create an "intricate maze" of man-made tunnels in a virtual world. To create this maze, the team used reference material from real-world mines to modularly generate a network of interconnected tunnels: a high-definition simulation that also included robotic vehicles and a suite of sensors. AirSim also provided a rich platform that Team Explorer could use to test their methods and to generate training experiences for building the decision-making components of autonomous agents.

Microsoft believes that AirSim can also help accelerate the creation of real datasets for underground environments. "Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment," Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that can fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at altitudes of 200 to 400 feet above ground level. "The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%," the post reads.

These were some of the recent use cases of AirSim. To explore more and to contribute, you can check out its GitHub repository.

Other news in Data

- 4 important business intelligence considerations for the rest of 2019
- How artificial intelligence and machine learning can help us tackle the climate change emergency
- France and Germany reaffirm blocking Facebook's Libra cryptocurrency


Inkscape 1.0 beta is available for testing

Fatema Patrawala
19 Sep 2019
4 min read
Last week, the team behind the Inkscape project released the first beta version of the upcoming and much-awaited Inkscape 1.0. The team writes on the announcement page that, "after releasing two less visible alpha versions this year, in mid-January and mid-June (and one short-lived beta version), Inkscape is now ready for extensive testing and subsequent bug-fixing."

Most notable changes in Inkscape 1.0

- New theme selection: In 'Edit > Preferences > User Interface > Theme', users can set a custom GTK3 theme for Inkscape. If the theme comes with a dark variant, activating the 'Use dark theme' checkbox will use the dark variant. The new theme is applied immediately.
- Origin in top left corner: Another significant change sets the origin of the document to the top left corner of the page. The coordinates a user sees in the interface now match the ones saved in the SVG data, making Inkscape more comfortable for people used to this standard behavior.
- Canvas rotation and mirroring: With Ctrl+Shift+Scroll wheel, the drawing area can be rotated and viewed from different angles. The canvas can also be flipped, to check that the drawing does not lean to one side and looks good either way.
- Canvas alignment: When the option "Enable on-canvas alignment" is active in the "Align and Distribute" dialog, a new set of handles appears on canvas. These handles can be used to align the selected objects relative to the area of the current selection.
- HiDPI screens: Inkscape now supports HiDPI screens.
- Controlling PowerStroke: The width of PowerStroke is controlled with a pressure-sensitive graphics tablet.
- Fillet/chamfer LPE and (non-destructive) Boolean Operation LPE: The new LPE adds fillets and chamfers to paths, and the Boolean Operations LPE finally makes non-destructive boolean operations available in Inkscape.
- New PNG export options: The export dialog has received several new options, available when you expand the 'Advanced' section.
- Centerline tracing: A new, unified dialog for vectorizing raster graphics is now available from Path > Trace Bitmap.
- New Live Path Effect selection dialog: Live Path Effects received a major overhaul, with lots of improvements and new features.
- Faster path operations and deselection of large numbers of paths.
- Variable fonts support: If Inkscape is compiled with Pango library version 1.41.1 or newer, it will come with support for variable fonts.
- Complete extensions overhaul: Extensions can now have clickable links, images, a better layout with separators and indentation, multiline text fields, file chooser fields, and more.
- Native support for macOS, with a signed and notarized .dmg file: Inkscape is now a first-rate native macOS application and no longer requires XQuartz to operate.
- Command line syntax changes: The Inkscape command line is now more powerful and flexible for the user and easier to enhance for the developer (see the sketch below).
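To make the command-line point concrete, here is a hedged sketch of driving the 1.0-style CLI from Python. It assumes the beta's new --export-type/--export-filename options (which replace the old per-format export flags) and uses placeholder file names; verify the syntax against your beta build, as it was still settling.

```python
import subprocess

# Export an SVG to PNG with the reworked Inkscape 1.0 command line.
subprocess.run(
    [
        "inkscape",
        "--export-type=png",
        "--export-filename=drawing.png",  # placeholder output path
        "drawing.svg",                    # placeholder input file
    ],
    check=True,  # raise if Inkscape exits with an error
)
```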
Other important changes for users

Custom icon sets: Icon sets no longer consist of a single file containing all icons; instead, each icon is allocated its own file. The directory structure must follow the standard structure for GNOME icons. As a side effect of a bug fix to the icon preview dialog, custom UI icon SVG files need to be updated to have their background color's alpha channel set to 0 so that they display correctly.

Third-party extensions: Third-party extensions need to be updated to work with this version of Inkscape.

Import/export via UniConvertor dropped: Extensions that previously used the UniConvertor library for saving/opening various file formats have been removed. Import formats removed: Adobe Illustrator 8.0 and below (UC) (*.ai), Corel DRAW Compressed Exchange files (UC) (*.ccx), Corel DRAW 7-X4 files (UC) (*.cdr), Corel DRAW 7-13 template files (UC) (*.cdt), Computer Graphics Metafile files (UC) (*.cgm), Corel DRAW Presentation Exchange files (UC) (*.cmx), HP Graphics Language Plot file [AutoCAD] (UC) (*.plt), and sK1 vector graphics files (UC) (*.sk1). Export formats removed: HP Graphics Language Plot file [AutoCAD] (UC) (*.plt) and sK1 vector graphics files (UC) (*.sk1).

Inline LaTeX formula conversion dropped: The EQTeXSVG extension, which could be used to convert an inline LaTeX equation into SVG paths using Python, was dropped due to its external dependencies.

The team has asked users to test the Inkscape 1.0 beta and report their findings on the Inkscape report page. To know more about this news, check out the official Inkscape announcement page.

Other interesting news in web development this week!

- Mozilla announces final four candidates that will replace its IRC network
- Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript
- Google announces two new attribute links, Sponsored and UGC and updates "nofollow"


Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

Sugandha Lahoti
19 Sep 2019
3 min read
A new report from Data & Society, published by researchers Britt Paris and Joan Donovan, argues that the harms of audiovisual (AV) manipulation, namely deepfakes and cheap fakes, cannot be addressed by artificial intelligence alone; they require a combination of technical and social solutions.

What are deepfakes and cheap fakes?

Deepfakes are one form of AV manipulation, executed using experimental machine learning. Most recently, a terrifyingly realistic deepfake video of Bill Hader transforming into Tom Cruise went viral on YouTube. Facebook creator Mark Zuckerberg also became the target of the world's first high-profile white-hat deepfake operation: a video created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, in which Zuckerberg appears to give a threatening speech about the power of Facebook.

Read also:
- Now there is a Deepfake that can animate your face with just your voice and a picture.
- Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts.

Fake videos can also be produced with Photoshop, lookalikes, re-contextualized footage, or speeding up and slowing down video. This form of AV manipulation is the cheap fake, a term the researchers coined because such fakes rely on cheap, accessible software, or no software at all.

Deepfakes can't be fixed with artificial intelligence alone

The researchers argue that deepfakes, while new, are part of a long history of media manipulation, one that requires both a social and a technical fix. They determine that any response to deepfakes needs to address structural inequality, and that the groups most vulnerable to this violence should be able to influence public media systems. The authors say, "Those without the power to negotiate truth–including people of color, women, and the LGBTQA+ community–will be left vulnerable to increased harms."

The researchers worry that AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others. Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life." "It's a massive project, but we need to find solutions that are social as well as political so people without power aren't left out of the equation."

Any technical fix, the researchers say, must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. "We need to talk about mitigation and limiting harm, not solving this issue. Deepfakes aren't going to disappear." The report states that there should be "social" policy solutions that penalize individuals for harmful behavior. "More encompassing solutions should also be formed to enact federal measures on corporations to encourage them to more meaningfully address the fallout from their massive gains."

It concludes, "Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise."

Other interesting news in tech

- $100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons
- The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
- UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses


Emotet, a dangerous botnet spams malicious emails, “targets 66,000 unique emails for more than 30,000 domain names” reports BleepingComputer

Vincy Davis
19 Sep 2019
4 min read
Three days ago, Emotet, a dangerous malware botnet, was found sending malicious emails to many countries around the globe. Emails bearing Emotet's signature were first spotted on the morning of September 18th in countries including Germany, the United Kingdom, Poland, Italy, and the U.S.A., targeting individuals, businesses, and government entities. This is not Emotet's first outing: it was first found being used as a banking trojan in 2014.

https://twitter.com/MalwareTechBlog/status/1173517787597172741

Any recipient of the infected mail who unknowingly downloaded and executed the malicious attachment may have exposed themselves to the Emotet malware. Once infected, the computer is added to the Emotet botnet, which uses it as a downloader for other threats. The Emotet botnet was able to compromise many websites, including customernoble.com, taxolabs.com, www.mutlukadinlarakademisi.com, and more.

In a statement to BleepingComputer, security researchers from email security firm Cofense Labs said, "Emotet is now targeting almost 66,000 unique emails for more than 30,000 domain names from 385 unique top-level domains (TLDs)." The malicious emails are suspected to originate from "3,362 different senders, whose credentials had been stolen. The count for the total number of unique domains reached 1,875, covering a little over 400 TLDs."

Brad Duncan, a security researcher, also reported that some U.S.-based hosts received Trickbot, a banking trojan turned malware dropper, as a secondary infection dropped by Emotet.

https://twitter.com/malware_traffic/status/1173694224572792834

What did the Emotet botnet do in its last outing?

According to BleepingComputer, the command and control (C2) servers for the Emotet botnet became active at the beginning of June 2019 but did not send out any instructions to infected machines until August 22. Presumably, the botnet was taking time to rebuild itself, establish new distribution channels, and prepare for new spam campaigns; in short, it was under maintenance. Benkøw, a security researcher, listed the stages required for the botnet to resume malicious activity.

https://twitter.com/benkow_/status/1164899159431946240

Emotet's return was therefore not a surprise to many security researchers, who expected the botnet to revive sooner or later.

How does the Emotet botnet function?

Discovered in 2014, Emotet was originally designed as a banking trojan targeting mostly German and Austrian bank customers by stealing their login credentials. Over time, however, it has evolved into a versatile and effective malware operation. Once a device is infected, the Emotet botnet tries to penetrate the associated systems via brute-force attacks. This enables Emotet to perform DDoS attacks or send out spam emails after obtaining a user's financial data, browsing history, saved passwords, and Bitcoin wallets. The infected machine also contacts Emotet's command and control (C&C) servers to receive updates, and the botnet uses those servers as a junkyard for storing the stolen data.

Per Cyren, a single Emotet bot can send a few hundred thousand emails in just one hour, which means it is capable of sending a few million emails in a day. Emotet delivers modules to extract passwords from local apps and spreads laterally to other computers on the same network. It is also capable of stealing entire email threads to be reused later in spam campaigns. Emotet also provides Malware-as-a-Service (MaaS), renting access to Emotet-infected computers to other malware groups.

Meanwhile, many people on Twitter are sharing details about Emotet and warning others to watch out.

https://twitter.com/BenAylett/status/1174560327649746944
https://twitter.com/papa_anniekey/status/1173763993325826049
https://twitter.com/evanderburg/status/1174073569254395904

Interested readers can check out the malware security analysis report for more information. Also, head over to BleepingComputer for more details.

Latest news in Security

- LastPass patched a security vulnerability from the extensions generated on pop-up windows
- An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18
- UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Fatema Patrawala
19 Sep 2019
4 min read
Yesterday, GitLab, a San Francisco-based start-up, raised $268 million in a Series E funding round valuing the company at $2.75 billion, more than double its last valuation. In its Series D round of $100 million, the company was valued at $1.1 billion; with this announcement, the valuation has more than doubled in less than a year.

GitLab provides a DevOps platform for developing and collaborating on code, offering a single application for companies to draft, develop, and release code. The product is used by companies like Delta Air Lines Inc., Ticketmaster Entertainment Inc., and Goldman Sachs Group Inc. The Series E round was led by investors including Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp., and Two Sigma Investments.

GitLab plans to go public in November 2020

According to Forbes, GitLab has already set November 18, 2020 as the date for going public, and the company seems primed and ready for the eventual IPO. The $268 million gives the company considerable runway ahead of the planned event, along with the flexibility to choose how to take the company public.

"One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we're not going to raise any additional money, and we wanted to make sure that this is enough in that case," Sid Sijbrandij, GitLab co-founder and CEO, explained in an interview with TechCrunch. He added that the new funds will be used to add monitoring and security to GitLab's offering, and to grow the company's staff from its current 400 employees to more than 1,000 this year. GitLab is able to add workers at a rapid rate because it has an all-remote workforce.

GitLab wants to stay independent and chooses transparency for its community

Sijbrandij says that the company made a deliberate decision to be transparent early on. For a company built on an open-source project, the transition to a commercial company is sometimes tricky and can hurt the community and the number of contributions. Transparency was a way to combat that, and it seems to be working: he reports that the community contributes 200 improvements to the GitLab open-source products every month, double the amount of just a year ago, so the community is still highly active.

He did not ignore the fact that Microsoft acquired GitHub, a similar company that helps developers manage and distribute code in a DevOps environment, for $7.5 billion last year. In spite of that eye-popping number, he claims his goal is to remain an independent company and take GitLab through to the next phase. "Our ambition is to stay an independent company. And that's why we put out the ambition early to become a listed company. That's not totally in our control as the majority of the company is owned by investors, but as long as we're more positive about the future than the people around us, I think we have a shot at not getting acquired," he said.

The community is happy with GitLab's products and services

Overall, the community is happy with this news and GitLab's products and services. One comment on Hacker News reads, "Congrats, GitLab team. Way to build an impressive business. When anybody tells you there are rules to venture capital — like it's impossible to take on massive incumbents that have network effects — ignore them. The GitLab team is doing something phenomenal here. Enjoy your success! You've earned it."

Another user comments, "We've been using Gitlab for 4 years now. What got us initially was the free private repos before github had that. We are now a paying customer. Their integrated CICD is amazing. It works perfectly for all our needs and integrates really easily with AWS and GCP. Also their customer service is really damn good. If I ever have an issue, it's dealt with so fast and with so much detail. Honestly one of the best customer service I've experienced. Their product is feature rich, priced right and is easy. I'm amazed at how they operate. Kudos to the team."

Other interesting news in programming

- Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
- Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements
- NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!


LastPass patched a security vulnerability from the extensions generated on pop-up windows

Amrata Joshi
18 Sep 2019
3 min read
Last week, the team behind LastPass, the password manager, released an update to patch a security vulnerability that exposed credentials entered by users on a previously visited site. The vulnerability would let websites steal credentials for the last account the user had logged into via the Chrome or Opera extension. Tavis Ormandy, a security researcher at Google's Project Zero, discovered the bug last month.

The security vulnerability appeared in the extension's pop-up windows

On Google Project Zero's issue page, Ormandy explained that the flaw stemmed from the popup windows generated by the extension. Websites could produce a popup by creating an HTML iframe linked to the LastPass popupfilltab.html window instead of calling the do_popupregister() function. In some cases, this unexpected method led popups to open with the password for the most recently visited site.

https://twitter.com/taviso/status/1173401754257375232

According to Ormandy, an attacker could easily hide a malicious link behind a Google Translate URL, lure users into visiting the link, and then extract credentials from a previously visited site. Google's Project Zero reporting site reads, "Because do_popupregister() is never called, ftd_get_frameparenturl() just uses the last cached value in g_popup_url_by_tabid for the current tab. That means via some clickjacking, you can leak the credentials for the previous site logged in for the current tab."

LastPass patched the reported issue in version 4.33.0, released on 12th September. According to the official blog post, the bug impacts its Chrome and Opera browser extensions. The bug is considered dangerous because it relies on executing malicious JavaScript code alone, without the need for user interaction. Ormandy further added, "I think it's fair to call this 'High' severity, even if it won't work for *all* URLs."

Ferenc Kun, the security engineering manager for LastPass, said in an online statement that this "limited set of circumstances on specific browser extensions" could potentially enable the attack scenario described. Kun further added, "To exploit this bug, a series of actions would need to be taken by a LastPass user including filling a password with the LastPass icon, then visiting a compromised or malicious site and finally being tricked into clicking on the page several times."

LastPass recommends general security practices

The team at LastPass shared the following list of general security practices:

- Beware of phishing attacks; do not click on links from untrusted contacts and companies.
- Enable MFA for LastPass and other services including email, bank accounts, Twitter, and Facebook. Additional layers of authentication are among the most effective ways to protect an account.
- Do not reuse or disclose the LastPass master password.
- Use unique passwords for every online account, run antivirus with the latest detection patterns, and keep software up to date.

To know more about this news, check out the official post.

Other interesting news in security

- UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses
- A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
- Lilocked ransomware (Lilu) affects thousands of Linux-based servers


Percona announces Percona Distribution for PostgreSQL to support open source databases 

Amrata Joshi
18 Sep 2019
3 min read
Yesterday, the team at Percona, an open-source database software and services provider, announced Percona Distribution for PostgreSQL to offer expanded support for open-source databases. It provides organizations with a fully supported distribution of the database and management tools, so that applications based on PostgreSQL can deliver higher performance. Based on PostgreSQL v11.5, Percona Distribution for PostgreSQL supports both cloud and on-premises deployments. The new distribution will be unveiled at Percona Live Europe in Amsterdam (September 30 to October 2).

Percona Distribution for PostgreSQL includes the following open-source tools to manage database instances and ensure that data is available, secure, and backed up for recovery:

- pg_repack, a third-party extension that rebuilds PostgreSQL database objects without requiring a table lock.
- pgaudit, a third-party extension that provides in-depth session and/or object audit logging via the standard logging facility in PostgreSQL, helping PostgreSQL users produce the detailed audit logs needed for compliance and certification purposes (see the sketch at the end of this article).
- pgBackRest, a backup tool that replaces the built-in PostgreSQL backup offering. pgBackRest can scale to handle large database workloads and can help companies minimize storage requirements by using streaming compression. It uses delta restores to lower the amount of time required to complete a restore.
- Patroni, a high-availability solution for PostgreSQL implementations that can be used in production deployments.

The list also includes additional extensions supported by the PostgreSQL Global Development Group. The new distribution provides users with enterprise support, services, and consulting for their open-source database instances across multiple distributions, on-premises and in the cloud. The team further announced that Percona Monitoring and Management will now support PostgreSQL.

Peter Zaitsev, co-founder and CEO of Percona, said, "Companies are creating more data than ever, and they have to store and manage this data effectively." Zaitsev further added, "Open source databases are becoming the platforms of choice for many organizations, and Percona provides the consultancy and support services that these companies rely on to be successful. Adding a distribution of PostgreSQL alongside our current options for MySQL and MongoDB helps our customers leverage the best of open source for their applications as well as get reliable and efficient support."

To know more about Percona Distribution for PostgreSQL, check out the official page.

Other interesting news in data

- Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
- The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
- $100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons
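As a quick illustration of the pgaudit tool listed above, here is a hedged Python sketch that registers the extension and enables session audit logging. It assumes psycopg2, a reachable PostgreSQL 11.5 server from the distribution, a placeholder connection string, sufficient privileges, and that pgaudit has already been added to shared_preload_libraries in postgresql.conf.

```python
import psycopg2

# Placeholder connection string; adjust for your deployment.
conn = psycopg2.connect("dbname=appdb user=postgres host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # pgaudit must be preloaded by the server before it can be used.
    cur.execute("SHOW shared_preload_libraries;")
    print("preloaded:", cur.fetchone()[0])

    # Register the extension, then turn on detailed read/write audit
    # logging for this session via the standard logging facility.
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgaudit;")
    cur.execute("SET pgaudit.log = 'read, write';")
    cur.execute("SELECT 1;")  # this SELECT now appears in the audit log

conn.close()
```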


Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio

Vincy Davis
18 Sep 2019
4 min read
Last week, Microsoft open-sourced its implementation of the C++ Standard Library, also known as the STL, which ships with the MSVC (Microsoft Visual C++ compiler) toolset and the Visual Studio IDE. The announcement was made by the MSVC team at the CppCon 2019 conference two days ago. Developers can participate in the STL's development through the C++ library repo by reporting issues and commenting on pull requests.

The MSVC team is still working on migrating the C++ Standard Library to GitHub. Currently, the GitHub repository contains all of MSVC's product source code, including a new CMake build system and a README. The team also plans to use GitHub issues to track C++20 features, LWG issues, conformance bugs, performance improvements, and other todos. A roadmap and iteration plans for the C++ Standard Library are also in progress.

Why did Microsoft open-source the C++ Standard Library?

Microsoft open-sourced the STL to give its users easy access to all the latest developments in C++: they can try out the latest changes and improve pull requests by reviewing them. The MSVC team hopes that as C++ standardization accelerates, it will be easier for users to accept the major features. Microsoft chose to open-source the STL in particular because of its unique design and fast-evolving nature compared to other MSVC libraries and the compiler. It is also "easy to contribute to, and somewhat loosely coupled, unlike the compiler." The official blog post adds, "We also want to contribute back to the C++ community by making it possible to take our implementations of major features."

What are the primary goals of the C++ Standard Library?

Microsoft is implementing the latest C++ Working Draft, which will eventually become the next C++ International Standard. The goals of the Microsoft C++ Standard Library are to be conformant to the spec, extremely fast, usable, and extensively compatible. Speed being a core strength of C++, the STL needs to be extremely fast at runtime, so the MSVC team spends more time on the optimization of the C++ Standard Library than on most general-purpose libraries. They are also working on parts of the programming experience like compiler throughput, diagnostic messages, and debugging checks, and they are keeping VS 2019 binary-compatible with VS 2017 and VS 2015. They consider source compatibility to be important, but not all-important; breaking source compatibility can be an acceptable cost if done for the right reasons in the right way.

The blog post states that MSVC's STL is distributed under the Apache License v2.0 with LLVM Exceptions and is distinct from the libc++ library. However, if any of libc++'s maintainers are interested in taking feature implementations from MSVC's STL, or in collaborating on the development of new features in both libraries simultaneously, the MSVC team will help, irrespective of the licensing.

Users have welcomed Microsoft's move to open-source its C++ Standard Library. A Redditor says, "Thank you! Absolutely amazing. It's been one of my guilty pleasures ever since I started with C++ to prod about in your internals to see how stuff works so this is like being taken to the magical chocolate factory for me." Another user comments, "thank you for giving back to the open source world. ❤🤘"

Interested readers can learn how to build with the Native Tools Command Prompt and the Visual Studio IDE on GitHub.

Latest news in Tech

- Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
- As Kickstarter reels in the aftermath of its alleged union-busting move, is the tech industry at a tipping point?
- Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements

Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support is now out

Bhagyashree R
18 Sep 2019
4 min read
Yesterday, the Keras team announced the release of Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support and also the last major release of multi-backend Keras. It is backward-compatible with TensorFlow 1.14 and 1.13, Theano, and CNTK.

Keras to focus mainly on tf.keras while continuing support for Theano/CNTK

This release comes with a lot of API changes to bring the multi-backend Keras API "in sync" with tf.keras, TensorFlow's high-level API. However, some TensorFlow 2.0 features are not supported, which is why the team recommends developers switch their Keras code to tf.keras in TensorFlow 2.0.

Read also: TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Moving to tf.keras gives developers access to features like eager execution, TPU training, and much better integration between low-level TensorFlow and high-level concepts like Layer and Model. Following this release, the team plans to focus mainly on the further development of tf.keras. "Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported," the team writes. To make it easier for the community to contribute to the development of Keras, the team will be developing tf.keras in its own standalone GitHub repository at keras-team/keras.

François Chollet, the creator of Keras, further explained on Twitter why they are moving away from multi-backend Keras:

https://twitter.com/fchollet/status/1174019142774452224

API updates in Keras 2.3.0

Here are some of the API updates in Keras 2.3.0 (see the sketch below):

- The add_metric method is added to Layer/Model, similar to the add_loss method but for metrics.
- Keras 2.3.0 introduces several class-based losses, including MeanSquaredError, MeanAbsoluteError, BinaryCrossentropy, Hinge, and more. With this update, losses can be parameterized via constructor arguments.
- Many class-based metrics are added, including Accuracy, MeanSquaredError, Hinge, FalsePositives, BinaryAccuracy, and more. This update enables metrics to be stateful and parameterized via constructor arguments.
- The train_on_batch and test_on_batch methods now have a new argument called reset_metrics. You can set this argument to False to maintain metric state across batches when writing lower-level training or evaluation loops.
- The model.reset_metrics() method is added to Model to clear metric state at the start of an epoch when writing lower-level training or evaluation loops.

Breaking changes in Keras 2.3.0

Along with the API changes, Keras 2.3.0 includes a few breaking changes. In this release, the batch_size, write_grads, embeddings_freq, and embeddings_layer_names arguments are deprecated and hence ignored when used with TensorFlow 2.0. Metrics and losses are now reported under the exact name specified by the user. Also, the default recurrent activation has changed from hard_sigmoid to sigmoid in all RNN layers.

Read also: Build your first Reinforcement learning agent in Keras [Tutorial]
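Here is a short, hedged sketch of the items above: class-based losses and metrics configured via constructors, plus reset_metrics in a manual loop. It assumes Keras 2.3.0 with any supported backend; the model and data are placeholders.

```python
import numpy as np
from keras.models import Sequential
from keras import layers, losses, metrics

model = Sequential([layers.Dense(1, activation="sigmoid", input_shape=(8,))])

# New in 2.3.0: losses and metrics as classes, parameterized via constructors.
model.compile(
    optimizer="sgd",
    loss=losses.BinaryCrossentropy(),
    metrics=[metrics.BinaryAccuracy()],
)

# Toy data: label is 1 when the feature sum exceeds 4.
x = np.random.rand(256, 8)
y = (x.sum(axis=1) > 4).astype("float32")

# A lower-level training loop: reset_metrics=False accumulates metric state
# across batches; model.reset_metrics() clears it at the start of each epoch.
for epoch in range(3):
    model.reset_metrics()
    for start in range(0, len(x), 32):
        result = model.train_on_batch(
            x[start:start + 32], y[start:start + 32], reset_metrics=False
        )
    print(f"epoch {epoch}: loss={result[0]:.3f} acc={result[1]:.3f}")
```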
The release started a discussion on Hacker News where developers appreciated that Keras will mainly focus on the development of tf.keras. A user commented, "Good move. I'd much rather it worked well for one backend then sucked mightily on all of them. Eager mode means that for the first time ever you can _easily_ debug programs using the TensorFlow backend. That will be music to the ears of anyone who's ever tried to debug a complex TF-backed model."

Some also raised the question of whether Google might acquire Keras in the future, considering TensorFlow has already included Keras in its codebase and Keras's creator, François Chollet, works as an AI researcher at Google.

Check out the official announcement to know what more has landed in Keras 2.3.0.

Other news in Data

- The CAP Theorem in practice: The consistency vs. availability trade-off in distributed databases
- InfluxData launches new serverless time series cloud database platform, InfluxDB Cloud 2.0
- Different types of NoSQL databases and when to use them


Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment

Sugandha Lahoti
18 Sep 2019
5 min read
Open AI researchers have built a simple hide-and-seek game environment for multi-agent competition, in which they observed AI agents learning complex strategies and skills on their own as the game progressed. The agents developed six distinct strategies and counter-strategies, some of which were not even supported by the training environment. The researchers conclude that such multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.

The hide-and-seek training environment

AI agents play a team-based hide-and-seek game in a physics-based environment: hiders (blue) avoid the line of sight of the seekers, and seekers try to keep the hiders in view. The environment contains various objects (walls, ramps, blocks) that agents can grab and also lock in place, plus randomly generated immovable rooms and walls that the agents must learn to navigate. Before the game, hiders are given preparation time to run away or change their environment while the seekers are immobilized.

Agents receive a team-based reward: hiders get +1 if all hiders are hidden and -1 if any hider is seen by a seeker; seekers get the opposite, -1 if all hiders are hidden and +1 otherwise. There are no explicit incentives for agents to interact with objects in the environment, and agents are penalized if they go too far outside the play area.

Agent characteristics

- Agents can move by setting a force on themselves in the x and y directions as well as rotate along the z-axis.
- Agents can see objects in their line of sight and within a frontal cone.
- Agents can sense the distance to objects, walls, and other agents around them using a lidar-like sensor.
- Agents can grab and move objects in front of them.
- Agents can lock objects in place; only the team that locked an object can unlock it.

AI agents develop six distinct strategies

Agents are trained using self-play, and agent policies are composed of two separate networks with different parameters: a policy network that produces an action distribution, and a critic network that predicts the discounted future returns. Policies are optimized using Proximal Policy Optimization (PPO) and Generalized Advantage Estimation (GAE); training is performed using Rapid, OpenAI's general-purpose RL training system.
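For readers unfamiliar with GAE, here is a minimal NumPy sketch of advantage estimation as it is usually defined: an exponentially weighted sum of TD errors whose lambda parameter trades off bias and variance. This illustrates the textbook formula only and is not OpenAI's Rapid implementation.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for one trajectory.

    rewards: per-step rewards, length T.
    values:  critic value estimates, length T + 1 (includes a bootstrap
             value for the state after the final step).
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    deltas = rewards + gamma * values[1:] - values[:-1]  # TD errors

    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):  # accumulate backwards in time
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# Toy trajectory: the hider team collects its +1 reward on the final step.
print(gae_advantages(rewards=[0, 0, 1], values=[0.2, 0.4, 0.7, 0.0]))
```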
As the agents trained against each other in hide-and-seek, six distinct strategies emerged. Initially, hiders and seekers learned to crudely run away and chase. After approximately 25 million episodes, the hiders learned to use the tools at their disposal and intentionally modify their environment. After another 75 million episodes, the seekers also learned rudimentary tool use: moving and using ramps to jump over obstacles, and so on. Ten million episodes later, the hiders learned to defend against this strategy by bringing the ramps to the edge of the play area and locking them in place, seemingly removing the only tool the seekers had at their disposal. After 380 million total episodes of training, the seekers learned to bring a box to the edge of the play area where the hiders had locked the ramps, jump on top of the box, and surf it to the hiders' shelter. In response, the hiders learned to lock all of the boxes in place before building their shelter.

https://youtu.be/kopoLzvh5jY

The researchers also found some surprising behaviors from these AI agents:

- Box surfing: Since agents move by applying forces to themselves, they can grab a box while on top of it and "surf" it to the hiders' location.
- Endless running: Without explicit negative rewards for agents leaving the play area, in rare cases hiders will learn to take a box and endlessly run with it.
- Ramp exploitation (hiders): Hiders abuse contact physics and remove ramps from the play area.
- Ramp exploitation (seekers): Seekers learn that if they run at a wall with a ramp at the right angle, they can launch themselves upward.

The researchers conclude that complex, human-relevant strategies and skills can emerge from multi-agent competition and standard reinforcement learning algorithms at scale. They state, "our results with hide-and-seek should be viewed as a proof of concept showing that multi-agent auto-curricula can lead to physically grounded and human-relevant behavior."

This research was well appreciated by readers. Many people took to Hacker News to congratulate the researchers. Here are a few comments:

"Amazing. Very cool to see this sort of multi-agent emergent behavior. Along with the videos, I can't help but get a very 'Portal' vibe from it all. 'Thank you for helping us help you help us all.'"

"This is incredible. The various emergent behaviors are fascinating. It seems that OpenAI has a great little game simulated for their agents to play in. The next step to make this even cooler would be to use physical, robotic agents learning to overcome challenges in real meatspace!"

"I'm completely amazed by that. The hint of a simulated world seems so matrix-like as well, imagine some intelligent thing evolving out of that. Wow."

Read the research paper for a deeper analysis. The code is available on GitHub.

More news in Artificial Intelligence

- Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters
- DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
- Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe


Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements

Vincy Davis
18 Sep 2019
4 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.3 on the Linux kernel mailing list (lkml). This major release brings new support for AMD Navi GPUs, the umwait x86 instructions, and Intel Speed Select. Linux 5.3 also introduces a new pidfd_open(2) system call and makes 16 million new IPv4 addresses in the 0.0.0.0/8 range available. There are also many new drivers and improvements in this release. The previous version, Linux 5.2, was released more than two months ago and included the Sound Open Firmware project, a new mount API, improved pressure stall information, and more.

What's new in Linux 5.3?

pidfd_open(2) system call

The PID (process identification number) reuse issue has been present in Linux for a long time. Linux 5.1 added pidfd_send_signal(2), which allowed processes to send signals to stable 'pidfd' handles even after PID reuse, and Linux 5.2 added the CLONE_PIDFD flag to clone(2), enabling users to create pidfds usable with pidfd_send_signal(2). However, this created problems for Android's low memory killer (LMK). Linux 5.3 therefore adds a new pidfd_open(2) syscall to complete the functionality needed to deal with the PID reuse issue. This release also adds polling support for pidfds, allowing process managers to identify when a process dies in a race-free way.
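Here is a hedged Python sketch of calling the new syscall directly; os.pidfd_open() only arrived later (Python 3.9), so this goes through ctypes. The syscall number 434 is the x86-64 value and is an assumption to verify on other architectures, and the sketch requires a 5.3+ kernel.

```python
import ctypes
import os

SYS_pidfd_open = 434  # x86-64 syscall number for pidfd_open (Linux 5.3+)

libc = ctypes.CDLL(None, use_errno=True)

# Obtain a stable file-descriptor handle to this process. Unlike a raw PID,
# a pidfd cannot be silently recycled, and it can be polled for process exit
# or passed to pidfd_send_signal(2).
pidfd = libc.syscall(SYS_pidfd_open, os.getpid(), 0)
if pidfd < 0:
    errno = ctypes.get_errno()
    raise OSError(errno, os.strerror(errno))

print("pidfd:", pidfd)
os.close(pidfd)
```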
Support for AMD Navi GPUs

Linux 5.3 provides initial support for AMD Navi GPUs in the amdgpu driver. The AMD Navi GPUs are the new AMD RX 5700 GPUs, which became available recently. The release adds support for the core driver, displays (DCN2), GFX and compute (GFX10), System DMA (SDMA 5), multimedia decode and encode (VCN2), and power management.

Zhaoxin x86 CPU support

This release also supports the Zhaoxin x86 processors. The report states, "The architecture of the ZX family of processors is a continuation of VIA's Centaur Technology x86-64 Isaiah design."

Intel Speed Select support for easier power tuning

Linux 5.3 also adds support for Intel Speed Select, a feature supported only on specific Xeon servers. This power management technology allows users to configure their servers for throughput or per-core performance settings, enabling the prioritization of performance for certain workloads running on specific cores.

16 million new IPv4 addresses

This release makes the 0.0.0.0/8 IPv4 range acceptable to Linux as a valid address range, making 16 million new IPv4 addresses available. The IPv4 address space includes hundreds of millions of addresses that were previously reserved for future use; the IPv4 Cleanup Project has now made these addresses usable.

Utilization clamping support in the task scheduler

This release adds utilization clamping support to the task scheduler. This is a refinement of the energy-aware scheduling framework for power-asymmetric systems (like ARM big.LITTLE) added in Linux 5.0. Per-task clamping attributes can be set through sched_setattr(2), as sketched below. The feature is intended to replace the hacks that Android had developed to achieve the same result.
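As a rough illustration of the new interface (again hedged: it assumes x86-64, where sched_setattr is syscall number 314, and the 5.3 layout of struct sched_attr; the clamp value of 512 is an arbitrary example on the kernel's 0–1024 capacity scale), the following sketch caps the calling task's utilization estimate:

import ctypes
import os

SYS_sched_setattr = 314            # x86-64 syscall number (assumption)
SCHED_FLAG_UTIL_CLAMP_MIN = 0x20   # honor sched_util_min
SCHED_FLAG_UTIL_CLAMP_MAX = 0x40   # honor sched_util_max

class SchedAttr(ctypes.Structure):
    # struct sched_attr as extended in Linux 5.3
    _fields_ = [
        ("size", ctypes.c_uint32),
        ("sched_policy", ctypes.c_uint32),
        ("sched_flags", ctypes.c_uint64),
        ("sched_nice", ctypes.c_int32),
        ("sched_priority", ctypes.c_uint32),
        ("sched_runtime", ctypes.c_uint64),
        ("sched_deadline", ctypes.c_uint64),
        ("sched_period", ctypes.c_uint64),
        ("sched_util_min", ctypes.c_uint32),  # new in 5.3
        ("sched_util_max", ctypes.c_uint32),  # new in 5.3
    ]

libc = ctypes.CDLL(None, use_errno=True)

attr = SchedAttr()
attr.size = ctypes.sizeof(SchedAttr)
attr.sched_policy = 0              # SCHED_OTHER
attr.sched_flags = SCHED_FLAG_UTIL_CLAMP_MIN | SCHED_FLAG_UTIL_CLAMP_MAX
attr.sched_util_min = 0
attr.sched_util_max = 512          # cap at roughly half capacity

# pid 0 targets the calling task; the last argument is flags (none defined)
if libc.syscall(SYS_sched_setattr, 0, ctypes.byref(attr), 0) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

The scheduler then treats the task as if it never demands more than about half of a CPU's capacity, which on big.LITTLE systems can bias placement toward the smaller, more efficient cores.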
Improvements in Core

Io_uring: added support for recvmsg(), sendmsg(), and Submission Queue Entry links.
Task scheduler: new tracepoints added, which will be required for energy-aware scheduling testing.
CONFIG_PREEMPT_RT: groundwork that will help the RT patchset become fully integrated into the mainline kernel in future merge windows.

Improvements in Memory management

Smaps: now reports separate components for the PSS in the smaps_rollup proc file. This will help in tuning memory manager behavior in consumer devices, particularly mobile devices.
Swap: uses an rbtree for swap_extent instead of a linked list, which improves swap performance when lots of processes access the swap device concurrently.

Linux developers are happy with the Linux 5.3 features, especially the new support for AMD Navi GPUs.

https://twitter.com/NoraDotCodes/status/1173621317033218049

A Redditor comments, "I'm really glad to hear that Linux is catching up to the navi gpus as I just invested in all that and after building a new box in attempting to do GPU pass-through for a straight up Linux host and windows VM realized that things aren't quite there yet."

Another user says, "Looks like some people were eagerly waiting for this release. I'm glad the Linux kernel keeps evolving and improving."

These are some of the selected updates in Linux 5.3. You may go through the release notes for more details.

Latest news in Linux

A recap of the Linux Plumbers Conference 2019
Lilocked ransomware (Lilu) affects thousands of Linux-based servers
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation

NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!

Amrata Joshi
17 Sep 2019
2 min read
Last Sunday, the team behind Neovim, a project that refactors the Vim source code, released NVIM v0.4.0. This non-maintenance release received approximately 2700 commits since v0.3.4. It comes with improvements to documentation, test/CI infrastructure, and internal subsystems, along with 700+ patches merged from Vim.

What's new in NVIM v0.4.0?

API functions

This release adds new API functions, including nvim_create_buf for creating various types of buffers, as well as nvim_get_context and nvim_load_context. The nvim_input_mouse function is used for performing mouse actions, and users can create floating windows with nvim_open_win, as in the sketch below.
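A minimal sketch of the floating-window workflow, driven over msgpack-RPC with the pynvim Python client (hedged: the socket path is a hypothetical placeholder for wherever your Neovim 0.4 instance is listening, e.g. via $NVIM_LISTEN_ADDRESS, and the geometry values are arbitrary):

import pynvim

# Attach to a running Neovim instance over its RPC socket (assumed path)
nvim = pynvim.attach("socket", path="/tmp/nvim.sock")

# nvim_create_buf(listed, scratch): an unlisted scratch buffer for the float
buf = nvim.request("nvim_create_buf", False, True)
nvim.request("nvim_buf_set_lines", buf, 0, -1, True, ["hello from a float"])

# nvim_open_win(buffer, enter, config): show the buffer in a floating window
win = nvim.request("nvim_open_win", buf, True, {
    "relative": "editor",  # position against the whole editor grid
    "width": 30,
    "height": 1,
    "row": 2,
    "col": 4,
})

The same calls are available from any msgpack-RPC client, so UIs and plugins can manage floats without touching Vimscript.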
UI events

New UI events, including redraw.grid_destroy, redraw.hl_group_set, redraw.msg_clear, and more, are included.

Lua library

NVIM v0.4.0 introduces the "Nvim-Lua standard library", which comes with string functions and generates documentation from docstrings.

Multigrid windows

Windows are now isolated internally and can be drawn on separate grids. These windows are sent as distinct objects to UIs so that UIs can control the layout.

Support for sign columns

This release comes with support for multiple auto-adjusted sign columns, so users will be shown extra columns that automatically accommodate all the existing signs.

Major changes

This release improves Lua error messages and fixes menu_get(). jemalloc, a general-purpose malloc implementation, has been removed. The 'scrollback' option is now more consistent and future-proof.

To know more about this news, check out the release notes.

Other interesting news in programming

A recap of the Linux Plumbers Conference 2019
GNU community announces 'Parallel GCC' for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases


An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18

Savia Lobo
17 Sep 2019
5 min read
Data leaks have become commonplace. Every week we hear of at least one data breach that may have existed for months or years without users knowing their data was compromised. Yesterday, a team of researchers from vpnMentor reported a massive data breach that may impact millions of Ecuadorians.

The research team, led by Noam Rotem and Ran Locar, discovered a leaky Elasticsearch database that included 18GB of personal data affecting over 20 million individuals, outnumbering the total number of citizens (16.6 million) in the small South American country. The vpnMentor research team discovered the Ecuador breach as part of its large-scale web mapping project and traced the leak to an unsecured server located in Miami, Florida. This server appears to be owned by the Ecuadorian company Novaestrat, a consulting company providing services in data analytics, strategic marketing, and software development.

The major information leaked in this breach includes personal information of individuals and their family members, employment details, financial information, automotive records, and much more. The researchers said the breach was closed on September 11, 2019, but they are still unaware of its exact details. However, the exposed data appears to contain information provided by third-party sources. "These sources may include Ecuadorian government registries, an automotive association called Aeade, and Biess, an Ecuadorian national bank," the researchers wrote in their official document.

Details of the data exposed during the Ecuador breach

The researchers said that in the database, citizens were identified by a ten-digit ID code. In some places in the database, that same ten-digit code is referred to as "cedula" and "cedula_ruc". "In Ecuador, the term "cédula" or "cédula de identidad" refers to a person's ten-digit national identification number, similar to a social security number in the US. The term "RUC" refers to Ecuador's unique taxpayer registry. The value here may refer to a person's taxpayer identification number," the researchers mention.

On running a search with a random ID number to check the validity of the database, the researchers were able to find a variety of sensitive personal information:

Personal information such as an individual's name, gender, date of birth, place of birth, address, email address, phone number, marital status, date of marriage if married, date of death if deceased, and educational details.
Financial information related to accounts held with the Ecuadorian national bank, Biess, such as account status, current balance, amount financed, credit type, and the location and contact information for the person's local Biess branch.
Automotive records including a car's license plate number, make, model, date of purchase, most recent date of registration, and other technical details about the model.
Employment information including employer name, employer location, employer tax identification number, job title, salary information, job start date, and end date.

ZDNet said it "verified the authenticity of this data by contacting some users listed in the database. The database was up to date, containing information as recent as 2019."

"We were able to find records for the country's president, and even Julian Assange, who once received political asylum from the small South American country, and was issued a national ID number (cedula)," ZDNet further reports.
Also Read: Wikileaks founder, Julian Assange, arrested for "conspiracy to commit computer intrusion"

Data of 6.77M children under the age of 18 was exposed

Under a database index named "familia" (meaning "family" in Spanish), there was "information about every citizen's family members, such as children and parents, allowing anyone to reconstruct family trees for the entire country's population," ZDNet reports. This index included details of children, some of whom were born as recently as this spring. The researchers found 6.77 million entries for children under the age of 18. These entries contained names, cedulas, places of birth, home addresses, and gender.

Also Read: Google faces multiple scrutinies from the Irish DPC, FTC, and an antitrust probe by US state attorneys over its data collection and advertising practices

The leaked information may pose a huge risk to individuals: using the exposed email addresses and phone numbers, hackers and other malicious parties could target them with phishing emails, scams, and spam. The researchers said these phishing attacks could be tailored to each individual using the exposed details, increasing the chances that people will click on the links.

The Ecuador breach was closed on September 11, 2019, and the database was eventually secured only after vpnMentor reached out to the Ecuador CERT (Computer Emergency Response Team), which served as an intermediary.

A user on Hacker News writes, "There needs to be fines for when stuff like this happens. The bottom line is all that matters to bosses, so unless engineers can credibly point to the economic impact of poor security decisions, these things will keep happening."

https://twitter.com/ElissaBeth/status/1173532184935878658

To know more about the Ecuador breach in detail, read vpnMentor's official report.

Other interesting news in Security

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses
Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report highlights