Tech News

Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server, TechCrunch reports

Fatema Patrawala
05 Sep 2019
4 min read
Yesterday, TechCrunch reported that an exposed server containing more than 419 million records of Facebook users' phone numbers had been discovered online. According to Zack Whittaker, TechCrunch's security reporter, the server was not protected with a password and was accessible to anyone. It held 133 million records on U.S.-based Facebook users, 18 million records on users in the UK, and 50 million records on users in Vietnam.

The records contained each person's unique Facebook ID along with the phone number listed on the account. Facebook IDs are unique numbers that can be associated with an account to discover a person's username. TechCrunch was able to verify multiple records in the database by matching a known Facebook user's phone number against a listed Facebook ID. Other records were verified by matching phone numbers against Facebook's password reset feature, which can be used to partially reveal the phone number linked to an account. Records primarily contained phone numbers, but in some cases also included usernames, genders, and country location.

"This dataset is old and appears to have information obtained before we made changes last year to remove people's ability to find others using their phone numbers," a Facebook spokesperson told TechCrunch. "The dataset has been taken down and we have seen no evidence that Facebook accounts were compromised," they added.

The database was originally discovered by Sanyam Jain, a security researcher and member of the GDI Foundation, who was also able to locate phone numbers associated with several celebrities. It is not clear who owned the database or where it originated from, but it was taken offline after TechCrunch contacted the web host.

Phone number security has become increasingly important over the last few years due to SIM-hacking. This technique involves calling a phone carrier and asking for a SIM transfer for a specific number, thereby giving the attacker access to anything linked to that phone number, such as two-factor authentication codes, password reset info, and more. Leaked phone numbers also expose Facebook users to spam calls, which have become more and more prevalent over the last several years.

Last week, security and privacy researcher Jane Manchun Wong showed, in a series of tweets, a "Global Library Collector" in the Facebook Android app's code. According to Wong, this GLC allows the mobile app to upload data from a user's device to Facebook servers. The tweet went viral and drew widespread comment.

https://twitter.com/wongmjane/status/1167463054709334017

Most responses from mobile app developers said this is well known: Android phones upload system libraries to Facebook's servers so the company can check app stability, and the libraries do not contain any personal data. However, this report by TechCrunch is the latest security lapse involving Facebook and users' personal data after a string of data breach incidents since the Cambridge Analytica scandal.

On Hacker News, the community expressed their distrust of Facebook's statements. One user commented, "Facebook: 'This dataset is old and appears to have information obtained before we made changes last year to remove people's ability to find others using their phone numbers.' Not that 'old.' Some of those 'update' dates are just a few days ago."

Another user commented, "But the data appeared to be loaded into the exposed database at the end of last month — though that doesn't necessarily mean the data is new. Somewhat curious what the Status key represents in this dump, personally."

What's new in security this week?

Over 47K Supermicro servers' BMCs are prone to USBAnywhere, a remote virtual media vulnerability
Cryptographic key of Facebook's Free Basics app has been compromised
Retadup, a malicious worm infecting 850k Windows machines, self-destructs in a joint effort by Avast and the French police

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Bhagyashree R
05 Sep 2019
6 min read
On Tuesday, Mozilla announced the release of Firefox 69. This release brings default blocking of third-party tracking cookies and cryptomining for all users. The team has also worked on a patch to minimize Firefox Nightly's power consumption on macOS, which will likely land in Firefox 70. In a separate announcement, Mozilla shared its plans for implementing Chrome's Manifest V3 changes.

Key updates in Firefox 69

Enhanced Tracking Protection on by default for all

Browser cookies are used to store your login state and website preferences, provide personalized content, and more. However, they also facilitate third-party tracking. In addition to being a threat to user privacy, they can also end up slowing down your browser, consuming your data, and creating user profiles. The tracked information and profiles can also be sold and used for purposes that you did not consent to. To prevent this, the Firefox team came up with the Enhanced Tracking Protection feature. In June this year, they made it available to new users by default. With Firefox 69, it is now on by default and set to the 'Standard' setting for all users. It blocks all known third-party tracking cookies that are listed by Disconnect.

Protection against cryptomining and browser fingerprinting

There are many other ways users are tracked or their resources used without their consent. Unauthorized cryptominers run scripts to generate cryptocurrency, which requires a lot of computing power and can end up slowing down your computer and draining your battery. There are also fingerprinting scripts that store a snapshot of your computer's configuration when you visit a website, which can be used to track your activities across the web. To address these, the team introduced an option to block cryptominers and browser fingerprinting in Firefox Nightly 68 and Beta 67. Firefox 69 includes the option to block cryptominers in Standard Mode, which means it is on by default. To block fingerprinting, users need to turn on Strict Mode. We can expect the team to enable it by default in a future release.

Read also: Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta

A stricter Block Autoplay feature

Starting with Firefox 69, the Block Autoplay feature lets users block any media from playing automatically by default, not just media that autoplays with sound.

Updates for Windows 10 users

Firefox 69 brings support for the Web Authentication HMAC Secret extension via Windows Hello for Windows 10 users. The HMAC Secret extension allows users to sign in to their device even when it is offline or in airplane mode. This release also comes with Windows hints to appropriately set content process priority levels, and a shortcut on the Windows 10 taskbar to help users easily find and launch Firefox.

Improved macOS battery life

Firefox 69 comes with improved battery life and a better download UI. To minimize battery consumption, Firefox will switch back to the low-power GPU on macOS systems that have dual graphics cards. Other updates include JIT support for ARM64, and Finder now shows download progress for files being downloaded.

Beyond the main releases, the team is also working on making Firefox Nightly more power-efficient. On Monday, Henrik Skupin, a senior test engineer at Mozilla, shared that there is roughly a 3x decrease in power usage by Firefox Nightly on macOS. We can expect this change to land in version 70, which is scheduled for October 22.

https://twitter.com/whimboo/status/1168437524357898240

Updates for developers

Debugger updates: With this release, debugging an application that has event handlers is easier. The debugger now includes the ability to automatically break when the code hits an event handler. Also, developers can now save the scripts shown in the debugger's source list pane via the Download file context menu option.

The Resize Observer API: Firefox 69 supports the Resize Observer API by default. This API provides a way to monitor changes to an element's size, notifying the observer each time the size changes (a short usage sketch follows at the end of this article).

Network panel updates: The network panel will now show resources that were blocked because of CSP or Mixed Content. This will "allow developers to best understand the impact of content blocking and ad blocking extensions given our ongoing expansion of Enhanced Tracking Protection to all users with this release," the team writes.

Re-designed about:debugging: In Firefox 69, the team has migrated remote debugging from the old WebIDE into a re-designed about:debugging.

Check out the official release notes to see what else has landed in Firefox 69.

Mozilla on Google's Manifest V3

Chrome has proposed various changes to its extension platform under the name Manifest V3. In a blog post shared on Tuesday, Mozilla talked about its plans for implementing these changes and how they will affect extension developers. One of the most significant updates proposed in Manifest V3 is the deprecation of the blocking webRequest API, which allows extensions to intercept all inbound and outbound traffic from the browser and block, redirect, or modify it. In its place, Chrome plans to introduce the declarativeNetRequest API as the primary content-blocking API in extensions, while restricting the blocking form of webRequest.

Read also: Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument

Explaining the impact of this proposed change if implemented, Mozilla wrote, "This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves."

Mozilla further shared that it has no immediate plans to remove the blocking webRequest API. "We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them," Mozilla wrote in the announcement. However, Mozilla is willing to consider other changes proposed in Manifest V3. It is planning to implement the proposal that requires content scripts to have the same permissions as the pages where they are injected.

Read the official announcement for more detail on Mozilla's plans regarding Manifest V3.

Other news in web

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
#Reactgate forces React leaders to confront community's toxic culture head on
Google Chrome 76 now supports native lazy-loading
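As a quick illustration of the Resize Observer API mentioned above, here is a minimal sketch using the standard web API; the element id is a hypothetical placeholder:

```typescript
// Minimal sketch of the Resize Observer API (a standard web API,
// supported by default in Firefox 69+). The element id is invented.
const box = document.getElementById("demo-box");

const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    // contentRect reports the element's new content-box size.
    const { width, height } = entry.contentRect;
    console.log(`Resized to ${width}x${height}`);
  }
});

if (box) {
  observer.observe(box);      // start watching for size changes
  // observer.unobserve(box); // stop watching a single element
  // observer.disconnect();   // stop watching everything
}
```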

Containous introduces Maesh, a lightweight and simple Service Mesh to ease microservices adoption

Savia Lobo
05 Sep 2019
2 min read
Yesterday, Containous, a cloud-native networking company, announced Maesh, a lightweight and simple service mesh. Maesh is aimed at making service-to-service communication simpler for developers building modern, cloud-native applications. It is easy to use and fully featured, helping developers connect, secure, and monitor traffic to and from their microservices-based applications. Maesh also supports the latest Service Mesh Interface (SMI) specification, a standard specification for service mesh interoperability in Kubernetes.

Maesh aims to ease microservices adoption by offering an easy way to connect, secure, and monitor network traffic in any Kubernetes environment. It helps developers optimize internal traffic, visualize traffic patterns, and secure communication channels, all while improving application performance.

Also Read: Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Maesh is designed to be completely non-invasive, allowing development teams across an organization to incrementally "opt-in" applications over time. It is backed by Traefik's rich feature set, providing OpenTracing; load balancing for HTTP, gRPC, WebSocket, and TCP; rich routing rules; retries and fail-overs; as well as access controls, rate limits, and circuit breakers. Maesh can run in both TCP and HTTP mode. "In HTTP mode, Maesh leverages Traefik's feature set to enable rich routing on virtual-host, path, headers, cookies. Using TCP mode allows seamless and easy integration with SNI routing support," the Containous team reports. It also enables critical features across any Kubernetes environment, including observability, multi-protocol support, traffic management, security, and safety.

Also Read: Mapbox introduces MARTINI, a client-side terrain mesh generation code

In an email statement to us, Emile Vauge, CEO of Containous, said, "With Maesh, Containous continues to innovate with the mission to drastically simplify cloud-native adoption for all enterprises. We've been proud of how popular Traefik has been for developers as a critical open source solution, and we're excited to now bring them Maesh."

https://twitter.com/resouer/status/1169310994490748928

To learn more about Maesh, read Containous' Medium blog post.

Other interesting news in Networking

Amazon announces improved VPC networking for AWS Lambda functions
Pivotal open sources kpack, a Kubernetes-native image build service
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks

Savia Lobo
05 Sep 2019
4 min read
Matheus Eduardo Garbelini, a member of the ASSET (Automated Systems SEcuriTy) Research Group at the Singapore University of Technology and Design, released a proof of concept for three WiFi vulnerabilities in Espressif's ESP32/ESP8266 IoT devices.

Three WiFi vulnerabilities on the ESP32/ESP8266 IoT devices

Zero PMK Installation (CVE-2019-12587)

This WiFi vulnerability hijacks ESP32 and ESP8266 clients connected to enterprise networks. It allows an attacker to take control of the device's EAP session by sending an EAP-Fail message in the final step of the connection between the device and the access point. The researcher discovered that both IoT devices update their Pairwise Master Key (PMK) only when they receive an EAP-Success message. If an EAP-Fail message is received before the EAP-Success, the device skips updating the PMK received during a normal EAP exchange (EAP-PEAP, EAP-TTLS, or EAP-TLS), yet still proceeds with the EAPoL 4-Way handshake. Because the PMK is initialized to zero each time the ESP32/ESP8266 starts, sending an EAP-Fail before the EAP-Success makes the device use a zero PMK, allowing the attacker to hijack the connection between the AP and the device (a sketch of this flawed update logic follows at the end of this article).

ESP32/ESP8266 EAP client crash (CVE-2019-12586)

This WiFi vulnerability is found in the SDKs of the ESP32 and ESP8266 and allows an attacker in radio range to reliably crash any ESP32/ESP8266 connected to an enterprise network. In combination with the zero PMK installation vulnerability, it could increase the damage to any unpatched device. Espressif has fixed the problem and committed patches for the ESP32 SDK; however, the SDK and Arduino board support for the ESP8266 remain unpatched.

ESP8266 Beacon Frame Crash (CVE-2019-12588)

In this WiFi vulnerability, the client 802.11 MAC implementation in the Espressif ESP8266 NONOS SDK 3.0 and earlier does not correctly validate the RSN AuthKey suite list count in beacon frames, probe responses, and association responses, which allows attackers in radio range to cause a denial of service (crash) via a crafted message. Two kinds of malformed beacon frame trigger the problem:

When crafted 802.11 frames are sent with the Auth Key Management Suite Count (AKM) field in the RSN tag too large or incorrect, an ESP8266 in station mode crashes.
When crafted 802.11 frames are sent with the Pairwise Cipher Suite Count field in the RSN tag too large or incorrect, an ESP8266 in station mode crashes.

"The attacker sends a malformed beacon or probe response to an ESP8266 which is already connected to an access point. However, it was found that ESP8266 can crash even when there's no connection to an AP, that is even when ESP8266 is just scanning for the AP," the researcher says.

A user on Hacker News writes, "Due to cheap price ($2—$5 depending on the model) and very low barrier to entry technically, these devices are both very popular as well as very widespread in those two categories. These chips are the first hits for searches such as "Arduino wifi module", "breadboard wifi", "IoT wifi module", and many, many more as they're the downright easiest way to add wifi to something that doesn't have it out of the box. I'm not sure how applicable these attack vectors are in the real world, but they affect a very large number of devices for sure."

To know more about this news in detail, read the proof of concept on GitHub.

Other interesting news in IoT security

Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera
Microsoft reveals Russian hackers "Fancy Bear" are the culprit for IoT network breach in the U.S.
Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
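To illustrate the zero-PMK flaw described above, here is a deliberately simplified sketch in TypeScript pseudocode, not the actual Espressif C firmware; all names are invented. It shows the two interacting mistakes: the PMK starts at zero on every boot and is only overwritten on EAP-Success, yet the handshake proceeds regardless.

```typescript
// Illustrative sketch only -- the real firmware is C; names are invented.
type EapMessage = "EAP-Success" | "EAP-Fail";

class WifiClientState {
  // Flaw 1: the Pairwise Master Key starts as all zeros on every boot.
  pmk: Uint8Array = new Uint8Array(32); // zero-filled

  onEapResult(msg: EapMessage, negotiatedPmk: Uint8Array): void {
    if (msg === "EAP-Success") {
      this.pmk = negotiatedPmk; // the PMK is only updated on success
    }
    // Flaw 2: on EAP-Fail the session is not aborted; the device still
    // runs the EAPoL 4-way handshake with this.pmk, which is all zeros
    // -- a key the attacker also knows.
    this.runFourWayHandshake(this.pmk);
  }

  private runFourWayHandshake(pmk: Uint8Array): void {
    /* derive session keys from pmk ... */
  }
}
```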

Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security

Vincy Davis
04 Sep 2019
6 min read
Yesterday, the Android team announced the official release of Android 10 for Pixel users. It introduces many new features, such as gesture navigation, dark theme, smart reply, live captioning, and new emojis. Android 10 also focuses on privacy improvements and security updates.

https://twitter.com/Android/status/1168935724655218691

In a blog post, Stephanie Cuthbertson, senior director of product management for Android, says, "Android 10 is built around three important themes. First, Android 10 is shaping the leading edge of mobile innovation with advanced machine-learning and support for emerging devices. Next, Android 10 has a central focus on privacy and security, with almost 50 features that give users greater protection, transparency, and control. Finally, Android 10 expands users' digital wellbeing controls so individuals and families can find a better balance with technology."

Android 10 is rolling out immediately to all generations of Pixel phones (Pixel 3, Pixel 3a, Pixel 2, and the original Pixel), while other Pixel devices will get the update over the next week. Many partner devices, including those involved in Android's Beta program, are expected to receive the update by the end of 2019. The Android team has also released the source code for Android 10 to the Android Open Source Project (AOSP), the repository that offers all the information necessary to create a custom variant of the Android OS and accessories for the Android platform. Prior to this stable release, Android 10, previously known as Android Q, had six beta releases; Android Q Beta 6 was released last month.

Read Also: Google confirms and fixes 193 security vulnerabilities in Android Q

Let's have a look at some of the new features in Android 10.

Gesture navigation

Android 10 introduces a full gesture navigation mode with an 'edge to edge' display. It lets users navigate back (left/right edge swipe), go to the home screen (swipe up from the bottom), and trigger the device assistant (swipe in from the bottom corners) with gestures rather than buttons. "By moving to a gesture model for system navigation, we can provide more of the screen to apps to enable a more immersive experience," say Android UI product managers Allen Huang and Rohan Shah. The Android team cites data showing that users rely on the Back button 50% more than the Home button. In a statement to WIRED, Google vice president of Android engineering Dave Burke said he believes the new gesture controls will make it easier to use a smartphone with just one hand.

Dark theme

Following in Apple's footsteps, Android has introduced a dark theme in Android 10. The system-wide dark theme reduces power usage significantly and aims to improve visibility for users with low vision and those who are sensitive to bright light. Google applications like YouTube, Google Fit, Google Keep, and Google Calendar are available in dark mode right away; Gmail and Chrome will support the dark theme by the end of this month. Android 10 also provides a Force Dark feature, which implements dark mode without explicitly setting a DayNight theme: it analyzes each view of a light-themed app and converts it to a dark theme before it is drawn to the screen.

Read Also: Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience

Smart Reply in notifications

Android 10 uses on-device machine learning to suggest actions, called 'smart reply', based on the content of a notification. Suggestions may include opening applications like Maps, Chrome, or YouTube, depending on the context of the message received. The Android team says, "We've built this feature with user privacy in mind, keeping the ML processing completely on the device." The smart reply feature is expected to work across all popular messaging apps, and users can opt out if they are not interested.

Live captioning for audio and video on mobile

The live caption feature will caption real-time videos, podcasts, audio messages, and even manual recordings, without any WiFi or cellular data. It uses a local speech analyzer to identify the voices on the device. Unlike the other features, live caption is not ready yet and will launch later this year, initially for Pixel devices only.

https://youtu.be/YL-8Xfx6S5o

Foldables to extend multitasking

Android 10 supports foldable devices with different folding patterns. The blog post states, "Android 10 extends multitasking across app windows and provides screen continuity to maintain your app state as the device folds or unfolds." Android developers believe that unfolding the device to a larger screen will give users "a more immersive experience." For now, there are no foldable phones on the market.

https://youtu.be/4dIULf4ma_I

New privacy features in Android 10

Giving users more control over location data: users can choose from three options to decide an app's access to location:
All the time: the app can access location at any time
While in use: the app can access location only while the app is in the foreground
Deny: the app cannot access location at all

Protecting location data in network scans: Android 10 tightens protection around the scanning network APIs. Some telephony, Bluetooth, and Wi-Fi APIs will now need the fine location permission (ACCESS_FINE_LOCATION) in order to use several methods within the Wi-Fi, Wi-Fi Aware, or Bluetooth APIs.

Preventing device tracking: Android 10 will not allow applications to access the non-resettable device identifiers that could previously be used for tracking.

Security updates in Android 10

Storage encryption: all compatible devices launching with Android 10 must encrypt user data. This release introduces a new encryption mode called Adiantum, which provides encryption with very little performance overhead.

TLS 1.3 by default: TLS 1.3 is a major revision to the TLS standard with performance benefits and enhanced security.

Platform hardening: Android 10 includes hardening for several security-critical areas of the platform, and updates to the BiometricPrompt framework, which manages the system-provided biometric dialogs.

Read Also: 25 million Android devices infected with 'Agent Smith', a new mobile malware

Users love the new features in Android 10, especially the dark mode and live captioning.

https://twitter.com/CourtMejias/status/1169073325513105408
https://twitter.com/rongbin99/status/1169087701032808448
https://twitter.com/Nooshith4/status/1169075545012756480

A few users found the new gesture navigation in Android 10 similar to the iPhone's.

https://twitter.com/r3dash/status/1169095752271876097

Interested readers can check out the Android 10 website and the Android Developers blog for more information.

Other news in Tech

Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
Over 47K Supermicro servers' BMCs are prone to USBAnywhere, a remote virtual media vulnerability
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

MongoDB Atlas will be available on Microsoft Azure Marketplace and will be a part of Microsoft’s Partner Reported program

Amrata Joshi
04 Sep 2019
2 min read
Yesterday, the team at MongoDB, the general-purpose data platform, announced the availability of MongoDB Atlas on the Microsoft Azure Marketplace. The team further announced that it is set to be a part of Microsoft's strategic Partner Reported ACR co-sell program.

https://twitter.com/MongoDB/status/1168946141200883713

MongoDB Atlas on Azure integrates with Azure services including Azure Databricks, Power BI, and Sitecore on Azure. With the availability of MongoDB Atlas on the Azure Marketplace, it will now be easy for established Azure customers to purchase MongoDB Atlas. The cost of Atlas will also be integrated into a customer's Azure bill, resulting in a single payment. Atlas is now available across 26 Azure regions and serves thousands of customers who depend on MongoDB Atlas to drive their business.

Dev Ittycheria, President and CEO of MongoDB, said, "Microsoft has been a leader in making it easier for customers to consume and pay for cloud services, which are driving transformative innovations across many organizations." Ittycheria further added, "We are excited about the latest step in our strategic go-to-market partnership with Microsoft which will help bring MongoDB Atlas to the growing ecosystem of Azure Marketplace customers."

Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft, said, "Since launching on Azure in 2017, MongoDB Atlas has been a popular service running on Azure. Today's announcement will make it even easier for customers to consume Atlas on Azure through the Azure Marketplace. We are committed to working alongside partners like MongoDB to give our joint customers best of breed choice in technology that meets their unique business demands."

What's new in data this week?

How to learn data science: from data mining to machine learning
LXD releases Dqlite 1.0, a C library to implement an embeddable, persistent SQL database engine with Raft consensus
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Amazon announces improved VPC networking for AWS Lambda functions

Amrata Joshi
04 Sep 2019
3 min read
Yesterday, the team at Amazon announced improved VPC (Virtual Private Cloud) networking for AWS Lambda functions. It is a major improvement in how AWS Lambda functions work with Amazon VPC networks.

If a Lambda function is not configured to connect to your VPCs, the function can access anything available on the public internet, including other AWS services, HTTPS endpoints for APIs, or endpoints and services outside AWS; it has no way to connect to private resources inside your VPC. When a Lambda function is configured to connect to your own VPC, it creates an elastic network interface within the VPC and does a cross-account attachment.

Image Source: Amazon

These Lambda functions run inside the Lambda service's VPC but can only access resources over the network through your VPC. Even so, the user still has no direct network access to the execution environment where the functions run.

What has changed in the new model?

AWS Hyperplane for providing NAT (Network Address Translation) capabilities

The team is using AWS Hyperplane, the Network Function Virtualization platform that underpins Network Load Balancer and NAT Gateway and has supported inter-VPC connectivity for AWS PrivateLink. With the help of Hyperplane, the team now provides NAT capabilities from the Lambda VPC to customer VPCs.

Network interfaces within your VPC are mapped to the Hyperplane ENI

The Hyperplane ENI (Elastic Network Interface), a network resource controlled by the Lambda service, allows multiple execution environments to securely access resources within the VPCs in your account. In the previous model, the network interfaces in your VPC were directly mapped to Lambda execution environments; now, the network interfaces within your VPC are mapped to the Hyperplane ENI.

Image Source: Amazon

How is Hyperplane useful?

To reduce latency: when a function is invoked, the execution environment now uses the pre-created network interface and establishes a network tunnel to it, which reduces latency.

To reuse network interfaces across functions: each unique security group/subnet combination across functions in your account needs a distinct network interface. If such a combination is shared across multiple functions in your account, it is now possible to reuse the same network interface across functions.

What remains unchanged?

AWS Lambda functions still need IAM permissions to create and delete network interfaces in your VPC. Users can still control the subnet and security group configurations of the network interfaces (a sketch of those settings follows after the links below). Users still need a NAT device (for example, a VPC NAT Gateway) to give a function internet access, or VPC endpoints to connect to services outside their VPC. The types of resources that your functions can access within the VPCs remain the same.

The official post reads, "These changes in how we connect with your VPCs improve the performance and scale for your Lambda functions. They enable you to harness the full power of serverless architectures." To know more about this news, check out the official post.

What's new in cloud & networking this week?

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
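As a concrete illustration of the configuration side, which is unchanged by the new model, here is a hedged sketch of attaching a function to a VPC with the AWS SDK for JavaScript (v3 style); the function name, subnet, and security group identifiers are all placeholders:

```typescript
// Sketch using the AWS SDK for JavaScript v3; all identifiers are placeholders.
import {
  LambdaClient,
  UpdateFunctionConfigurationCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" });

async function attachFunctionToVpc(): Promise<void> {
  // Attaching a function to subnets/security groups in your VPC is what
  // triggers creation of the (now Hyperplane-backed) network interfaces.
  await lambda.send(
    new UpdateFunctionConfigurationCommand({
      FunctionName: "my-function",                  // placeholder name
      VpcConfig: {
        SubnetIds: ["subnet-0123456789abcdef0"],    // placeholder subnet
        SecurityGroupIds: ["sg-0123456789abcdef0"], // placeholder group
      },
    })
  );
}

attachFunctionToVpc().catch(console.error);
```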

Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more

Fatema Patrawala
04 Sep 2019
2 min read
Laravel 6.0 releases with improvements over Laravel 5.8: the introduction of semantic versioning, compatibility with Laravel Vapor, improved authorization responses, job middleware, lazy collections, subquery improvements, the extraction of frontend scaffolding to the laravel/ui Composer package, and a variety of other bug fixes and usability improvements.

Key features in Laravel 6.0

Semantic versioning

The Laravel framework package now follows the semantic versioning standard. This makes the framework consistent with the other first-party Laravel packages, which already followed this versioning standard.

Laravel Vapor compatibility

Laravel 6.0 provides compatibility with Laravel Vapor, an auto-scaling serverless deployment platform for Laravel. Vapor abstracts away the complexity of managing Laravel applications on AWS Lambda, as well as interfacing those applications with SQS queues, databases, Redis clusters, networks, CloudFront CDN, and more.

Improved exceptions via Ignition

Laravel 6.0 ships with Ignition, a new open source exception detail page. Ignition offers many benefits over previous releases, such as improved Blade error file and line number handling, runnable solutions for common problems, code editing, exception sharing, and an improved UX.

Improved authorization responses

In previous releases of Laravel, it was difficult to retrieve and expose custom authorization messages to end users, making it hard to explain exactly why a particular request was denied. In Laravel 6.0, this is now easier using authorization response messages and the new Gate::inspect method.

Job middleware

Job middleware allows developers to wrap custom logic around the execution of queued jobs, reducing boilerplate in the jobs themselves.

Lazy collections

Many developers already enjoy Laravel's powerful Collection methods. To supplement the already powerful Collection class, Laravel 6.0 introduces LazyCollection, which leverages PHP's generators to let users work with very large datasets while keeping memory usage low (a generator-based sketch of the idea follows the links below).

Eloquent subquery enhancements

Laravel 6.0 introduces several new enhancements and improvements to database subquery support.

To know more about this release, check out the official Laravel blog.

What's new in web development this week?

Wasmer's first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
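Laravel's LazyCollection is PHP, but the underlying mechanism, generators that yield one item at a time instead of materializing a whole array, can be sketched in TypeScript. Everything below is an illustrative analogue, not Laravel's API:

```typescript
// Illustrative analogue of the lazy-collection idea, not Laravel's PHP API.
// A generator produces values on demand, so only one item lives in memory.
function* numbers(limit: number): Generator<number> {
  for (let i = 0; i < limit; i++) {
    yield i; // produced lazily, never stored as a full array
  }
}

// Lazily transform a source without building intermediate arrays.
function* map<T, U>(src: Iterable<T>, fn: (t: T) => U): Generator<U> {
  for (const item of src) yield fn(item);
}

// Even with a huge "dataset", memory stays flat: items flow through one
// at a time, much as LazyCollection streams rows from a large file or query.
let sum = 0;
for (const n of map(numbers(1_000_000_000), (x) => x % 10)) {
  sum += n;
  if (sum > 100) break; // stop early; the rest is never generated
}
console.log(sum);
```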

Over 47K Supermicro servers’ BMCs are prone to USBAnywhere, a remote virtual media vulnerability

Savia Lobo
04 Sep 2019
5 min read
Update: On September 4, 2019, Supermicro released security updates to address vulnerabilities affecting the Baseboard Management Controller (BMC). Administrators can review Supermicro's Security Advisory and Security Vulnerabilities Table and apply the necessary updates and recommended mitigations.

Cybersecurity firm Eclypsium reported yesterday that over 47K Supermicro servers have been found to carry new vulnerabilities, dubbed 'USBAnywhere', in their baseboard management controllers (BMCs). These vulnerabilities "allow an attacker to easily connect to a server and virtually mount any USB device of their choosing to the server, remotely over any network, including the Internet," Eclypsium mentions in its official report.

Also Read: iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows

Issues with BMCs on various Supermicro platforms

The problem arises from how BMCs on Supermicro X9, X10, and X11 platforms implement virtual media, i.e., remotely connecting a disk image as a virtual USB CD-ROM or floppy drive. When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass. These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user's authentication packet, by using default credentials, or in some cases without any credentials at all.

Once the connection is established, the virtual media service lets the attacker interact with the host system as a raw USB device. This means attackers can attack the server in the same way as if they had physical access to a USB port, such as loading a new operating system image or using a keyboard and mouse to modify the server, implant malware, or even disable the device entirely. The combination of easy access and straightforward attack avenues can allow unsophisticated attackers to remotely attack some of an organization's most valuable assets.

Analysis of the remote USB authentication

A user can access the virtual media service via a small Java application served from the BMC's web interface. The Java application connects to the service, which listens on TCP port 623 on the BMC and uses a custom packet-based format to authenticate the client and transport USB packets between client and server. The Eclypsium team analyzed this authentication process and revealed several issues:

Plaintext authentication: while the Java application uses a unique session ID for authentication, the service also allows the client to use a plaintext username and password.

Unencrypted network traffic: encryption is available but must be requested by the client. The Java application provided with the affected systems uses this encryption for the initial authentication packet but then uses unencrypted packets for all other traffic.

Weak encryption: when encryption is used, the payload is encrypted with RC4 using a fixed key compiled into the BMC firmware. This key is shared across all Supermicro BMCs. RC4 has multiple published cryptographic weaknesses and has been prohibited from use in TLS (RFC 7465).

Authentication bypass (X10 and X11 platforms only): after a client has properly authenticated to the virtual media service and then disconnected, some of the service's internal state about that client is incorrectly left intact. As the internal state is linked to the client's socket file descriptor number, a new client that happens to be assigned the same socket file descriptor number by the BMC's OS inherits this internal state. In practice, this allows the new client to inherit the previous client's authorization even when it attempts to authenticate with incorrect credentials (a sketch of this state-reuse bug follows at the end of this article).

The report highlights, "A scan of TCP port 623 across the Internet revealed 47,339 BMCs from over 90 different countries with the affected virtual media service publicly accessible."

Source: Eclypsium.com

Eclypsium first reported the vulnerability to Supermicro on June 19, with additional findings on July 9. On July 29, Supermicro acknowledged the report and developed a fix. On learning how many systems were affected, Eclypsium notified CERT/CC of the issue twice in August, and on August 23 notified network operators whose networks contain affected, Internet-accessible BMCs. On August 16, Supermicro confirmed its intent to publicly release firmware by September 3.

To secure BMCs, even those "that are not exposed to the Internet should also be carefully monitored for vulnerabilities and threats. While organizations are often fastidious at applying patches for their software and operating systems, the same is often not true for the firmware in their servers," the report suggests. "Just as applying application and OS security updates has become a critical part of maintaining IT infrastructure, keeping abreast of firmware security updates and deploying them regularly is required to defend against casual attacks targeting system firmware," Eclypsium further suggests.

Also Read: What's new in USB4? Transfer speeds of upto 40GB/second with Thunderbolt 3 and more

As mitigation, the company suggests that along with the vendor-supplied updates, organizations should adopt tools to proactively ensure the integrity of their firmware and identify vulnerabilities, missing protections, and any malicious implants.

A user on Hacker News writes, "BMC's (or the equivalent for whatever vendor you are using) should never be exposed to the internet- they shouldn't even be on the same network as the rest of the server. Generally speaking. I put them on a completely separate network that has to be VPN'd into explicitly. Having BMC access is as close to having physical access as you can get without actually touching the machine."

To know more about this news in detail, read Eclypsium's official report on USBAnywhere.

Other news in security attacks

A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes
GitHub now supports two-factor authentication with security keys using the WebAuthn API
New Bluetooth vulnerability, KNOB attack can manipulate the data transferred between two paired devices
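The authentication bypass belongs to a well-known class of state-reuse bugs. Here is a deliberately simplified sketch of that bug class in TypeScript; Supermicro's actual firmware is not public, so every name here is invented:

```typescript
// Illustrative sketch of the state-reuse bug class described above,
// not Supermicro's actual firmware. All names are invented.
interface Session { authenticated: boolean }

// Internal state keyed by the client's socket file descriptor number.
const sessions = new Map<number, Session>();

function onAuthenticate(fd: number, ok: boolean): void {
  sessions.set(fd, { authenticated: ok });
}

function onDisconnect(fd: number): void {
  // BUG: the session entry for this fd is never deleted.
  // A correct implementation would call sessions.delete(fd) here.
}

function isAuthorized(fd: number): boolean {
  // The OS reuses low fd numbers, so a brand-new client can be handed
  // the fd of a previous, authenticated client -- and inherit its session.
  return sessions.get(fd)?.authenticated ?? false;
}
```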

Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Sugandha Lahoti
04 Sep 2019
4 min read
Microsoft yesterday unveiled Static TypeScript as an alternative to embedded interpreters. Static TypeScript (STS) is an implementation of a static compiler for TypeScript that runs in the web browser. It is primarily designed to help school children with their computer science programming projects. STS is supported by a compiler that is itself written in TypeScript and generates machine code that runs efficiently on microcontrollers in the target RAM range of 16-256kB.

Microsoft's plan behind building Static TypeScript

Microcontrollers are typically programmed in C, C++, or assembly, none of which are particularly beginner friendly. MCUs that can run modern languages such as JavaScript and Python usually do so via interpreters like IoT.js, Duktape, or MicroPython. The problem with interpreters is high memory usage, leaving little room on the devices for the programs developers have written. Microsoft therefore decided to build STS as a more efficient alternative to the embedded-interpreter approach. It is also statically typed, which makes for a less surprising programming experience.

Features of Static TypeScript

STS eliminates most of the "bad parts" of JavaScript; following StrongScript, STS uses nominal typing for statically declared classes and supports efficient compilation of classes using classic vtable techniques (see the sketch at the end of this article).
The STS toolchain runs offline, once loaded into a web browser, without the need for a C/C++ compiler.
The STS compiler generates efficient and compact machine code, which unlocks a range of application domains such as game programming for low-resource devices.
Deploying STS user programs to embedded devices requires no app or device driver installation, just access to a web browser.
The relatively simple compilation scheme for STS leads to surprisingly good performance on a collection of small JavaScript benchmarks, often comparable to advanced, state-of-the-art JIT compilers like V8, with orders-of-magnitude smaller memory requirements.

Differences from TypeScript

In contrast to TypeScript, where all object types are bags of properties, STS has four kinds of unrelated object types at runtime:

A dynamic map type with named (string-indexed) properties that can hold values of any type
A function (closure) type
A class type describing instances of a class, which are treated nominally, via an efficient runtime subtype check on each field/method access
An array (collection) type

STS compiler and runtime

The STS compiler and toolchain (linker, etc.) are written solely in TypeScript. The source TypeScript program is processed by the regular TypeScript compiler to perform syntactic and semantic analysis, including type checking. The STS device runtime is mainly written in C++ and includes a bespoke garbage collector. The regular TypeScript compiler, the STS code generators, assembler, and linker are all implemented in TypeScript and run both in the web browser and on the command line. The STS toolchain compiles STS to Thumb machine code and links it against a pre-compiled C++ runtime in the browser, which is often the only available execution environment in schools.

Static TypeScript is used in all MakeCode editors

STS is the core language supported by Microsoft's MakeCode framework. MakeCode provides hands-on computing education for students through projects and enables the creation of custom programming experiences for MCU-based devices. Each MakeCode editor targets programming of a specific device or device class via STS. STS supports the concept of a package: a collection of STS, C++, and assembly files that can also list other packages as dependencies. This capability has been used by third parties to extend the MakeCode editors, mainly to accommodate hardware peripherals for various boards. STS is also used in MakeCode Arcade, which lets developers of all skill levels easily write retro-style pixelated games, designed to run either inside a virtual game console in the browser or on inexpensive microcontroller-based handhelds.

For more in-depth information, please read the research paper.

People were quite interested in this development. A comment on Hacker News reads, "This looks very interesting. If all it takes is dropping "with, eval, and prototype inheritance" to get fast and efficient JS execution, I'm all for it."

Other news in tech

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support and more
Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
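Because STS is a strict subset of TypeScript, ordinary class-based code compiles directly. Here is a minimal sketch of the kind of nominally-typed class the paper describes STS compiling via vtables; the class and names are invented examples, and any MCU-specific APIs are omitted:

```typescript
// Plain TypeScript that is also valid Static TypeScript: no `with`,
// no `eval`, no prototype manipulation. Names are invented examples.
class Sprite {
  constructor(public x: number, public y: number) {}

  // Per the paper, STS dispatches method calls like this through a
  // classic vtable slot, and compiles field reads such as `this.x`
  // into fixed-offset loads rather than property-bag lookups.
  move(dx: number, dy: number): void {
    this.x += dx;
    this.y += dy;
  }
}

const player = new Sprite(0, 0);
player.move(3, 4);
// In STS, `player` is nominally a Sprite: a structurally identical object
// literal would not satisfy the type, unlike in full TypeScript.
console.log(player.x, player.y); // 3 4
```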

LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

Bhagyashree R
04 Sep 2019
3 min read
Last week, the LLVM team announced the release of LLVM 9.0 RC3, which fixes all the known release blockers. LLVM 9.0 missed its planned release date of August 28; however, with the third RC out, we can expect the final release in the coming weeks, along with subprojects like Clang 9.0. LLVM 9.0 will include features such as official RISC-V support and gfx10 support for the AMDGPU compiler backend, among others.

Announcing the release, the team shared on the LLVM mailing list, "There are currently no open release blockers, which means if nothing new comes up, the final release could ship soon and this is what it would look like (except for more release notes, which are still very welcome)."

What's coming in LLVM 9.0

Official support for the RISC-V target

In July this year, Alex Bradbury, CTO and co-founder of the lowRISC project, proposed making the "experimental" RISC-V LLVM backend "official" for LLVM 9.0. This essentially means that starting with this release, the RISC-V backend will be built by default for LLVM, so developers will be able to use it in standard LLVM/Clang builds out of the box. Explaining the reason behind this update, Bradbury wrote in the proposal, "As well as being more convenient for end users, this also makes it significantly easier for e.g. Rust/Julia/Swift and other languages using LLVM for code generation to do so using the system-provided LLVM libraries. This will make life easier for those working on RISC-V ports of Linux distros encountering issues with Rust dependencies."

Updates to the SystemZ target

Starting from LLVM 9.0, the SystemZ target supports the 'arch13' architecture. It includes builtins for the new vector instructions, which can be enabled using the '-mzvector' option. The compiler also supports and automatically generates alignment hints on vector load and store instructions.

Updates to the AMDGPU target

In LLVM 9.0, function call support is enabled by default. Other updates include improved support for 96-bit loads and stores, gfx10 support, and the DPP combiner pass enabled by default.

Updates to LLDB

LLVM 9.0 will be the last release to include 'lldb-mi' as part of LLDB; it will still be available in a downstream GitHub repository. Other changes include color-highlighted backtraces and support for DWARF4 (debug_types) and DWARF5 (debug_info) type units.

To read the entire list of updates in LLVM 9.0, check out the official release notes.

LLVM's Arm stack protection feature turns ineffective when the stack is re-allocated
LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces
LLVM 8.0.0 releases!

LXD releases Dqlite 1.0, a C library to implement an embeddable, persistent SQL database engine with Raft consensus

Bhagyashree R
04 Sep 2019
5 min read
Dqlite (distributed SQLite) is created by the LXD team at Canonical, the company behind Ubuntu. It is a library written in C that implements a "fast, embedded, persistent SQL database" engine offering high availability and automatic failover. Last week, the team released Dqlite 1.0. It is open-sourced under Apache 2.0 and runs on ARM, X86, POWER, and IBM Z architectures.

Dqlite is written in C to provide maximum cross-platform portability. Its first prototype was implemented in Go but was later rewritten in C because of performance problems caused by the way Go interoperates with C. The team explains, "Go considers a function call into C that lasts more than ~20 microseconds as a blocking system call, in that case it will put the goroutine running that C call in waiting queue and resuming it will effectively cause a context switch, degrading performance (since there were a lot of them happening). The added benefit of the rewrite in C is that it's now easy to embed dqlite into project written in effectively any language, since all major languages have provisions to create C bindings."

How Dqlite works

Dqlite extends SQLite with a network protocol that connects various instances of an application and has them act as a highly-available cluster, with no dependency on external databases. To achieve this, it relies on C-Raft, an implementation of the Raft consensus algorithm in C. This not only provides high-performance transactional consensus and fault tolerance but also preserves SQLite's efficiency and tiny footprint.

To reach consensus, Raft uses the concept of an elected leader. In a Raft cluster, a server is either a leader or a follower, and the cluster can have only one elected leader, which is fully responsible for log replication on the followers. In the case of Dqlite, this means that only the leader can write new Write-Ahead Logging (WAL) frames. Any attempt to perform a write transaction on a follower node will fail with an ErrNotLeader error, in which case the client must retry against whoever the new leader is (a sketch of that client-side retry pattern follows at the end of this article).

The team recommends Dqlite for cases where you don't want any dependency on an external database but do want your application to be highly available, for instance on IoT and Edge devices. It is currently used by the LXD system containers manager, which relies on Dqlite to implement high availability when running in cluster mode.

Read also: LXD 3.15 releases with a switch to dqlite 1.0 branch, new hardware VLAN and MAC filtering on SR-IOV and more!

What developers are saying about Dqlite

The release triggered a discussion on Hacker News. One developer recommended D or Rust for Dqlite's implementation: "They could also use D or Rust for this. If borrow-checker is too much, Rust can still do automatic, memory management with other benefits remaining. Both also support letting specific modules be unsafe where performance is critical."

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett

Others compared it with rqlite, a lightweight, distributed relational database that uses SQLite as its storage engine. One main difference many pointed out is that Dqlite is a library, whereas rqlite is a full application. Giving a more in-depth comparison, a developer commented, "rqlite's replication is command based whereas dqlite is/was WAL frame-based -- so basically one ships the command and the other ships WAL frames. This distinction means that non-deterministic commands (ex. `RANDOM()`) will work differently."

Apart from these, Dqlite's team also listed the differences between Dqlite and rqlite. Among the main ones: Dqlite is "embeddable in any language that can interoperate with C," it provides "full support for transactions," and there is "no need for statements to be deterministic."

A major point of discussion was its use cases. One user commented, "So an easy use-case that springs to mind is any sort of distributed IoT device that needs to track state. So any industrial or consumer monitoring system with a centralized controller that would use this for data storage. Specifically, this enables the use of multiple nodes for high throughput imagine many, many, many sensors and a central controller streaming real-time data."

A developer who has used the Dqlite library shared their experience: "I used Dqlite for a side project, which replicates some of the features of LXD. Was relatively easy to use, but Dqlite moves at some pace and trying to keep up is quite 'interesting'. Anyway once I do end up getting time, I'm sure it'll be advantageous to what I'm doing."

To read more about Dqlite, check out its official website.

Other news in database

GraphQL API is now generally available
Amazon Aurora makes PostgreSQL Serverless generally available
The road to Cassandra 4.0 – What does the future have in store?
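The leader-only-writes rule implies a simple client pattern: attempt the write, and on a not-leader error rediscover the leader and retry. Here is a generic sketch of that pattern in TypeScript; the client interfaces are invented for illustration, since Dqlite's real bindings are C (and Go via go-dqlite):

```typescript
// Generic retry-against-new-leader pattern; the interfaces are invented
// illustrations, not Dqlite's actual API.
interface NodeClient {
  exec(sql: string): Promise<void>; // rejects with "ErrNotLeader" on followers
  findLeader(): Promise<NodeClient>;
}

async function writeWithRetry(
  node: NodeClient,
  sql: string,
  maxAttempts = 3
): Promise<void> {
  let current = node;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await current.exec(sql); // only succeeds on the current Raft leader
      return;
    } catch (err) {
      if ((err as Error).message !== "ErrNotLeader") throw err;
      current = await current.findLeader(); // redirect to the new leader
    }
  }
  throw new Error(`write failed after ${maxAttempts} attempts`);
}
```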

After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Vincy Davis
03 Sep 2019
3 min read
In October last year, MongoDB announced that it was switching to the Server Side Public License (SSPL). Since then, Red Hat dropped support for MongoDB in its Red Hat Enterprise Linux and Fedora in January. Now Homebrew, a popular package manager for macOS, has removed MongoDB from the Homebrew core formulas, since MongoDB has migrated to a non-open-source license. Yesterday, FX Coudert, a Homebrew member, announced the news on Twitter. https://twitter.com/fxcoudert/status/1168493202762096643

In a post on GitHub, Coudert states that MongoDB’s migration to a non-open-source license is the reason behind this decision. Since SSPL is not OSI-approved, it cannot be included in homebrew-core. Also, mongodb and mongodb@3.6 do not build from source on any of the 3 supported macOS versions, so they are removed along with the older mongodb@3.0, @3.2, and @3.4 formulas. He adds that it would make little sense to keep older, unmaintained versions. Coudert also noted that percona-server-mongodb, which likewise comes under the SSPL, has been removed from the Homebrew core formulas. Upstream continues to maintain a custom Homebrew “official” tap for the latest versions of MongoDB.

Earlier, Homebrew project leader Mike McQuaid had commented on GitHub that MongoDB was their 45th most popular formula and should not be removed, as removing it would break things for many people. Coudert countered that since MongoDB is not open source anymore, it does not belong in Homebrew core. He added that since upstream provides a tap with their official version, users can have the latest version instead of Homebrew’s old, unmaintained one. “We will have to remove it at some point, because it will bit rot and break. It's just a question of whether we do that now, or keep users with the old version for a bit longer,” he specified.

MongoDB’s past controversies due to SSPL

In January this year, MongoDB received its first major blow when Red Hat dropped MongoDB over concerns related to the SSPL. Tom Callaway, the University Outreach team lead at Red Hat, had said that SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be “Free” or “Open Source” causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk.”

Subsequently, in February, Red Hat Satellite also decided to drop MongoDB and support a PostgreSQL backend only. The Red Hat development team stated that PostgreSQL is a better solution for the types of data and usage that Satellite requires. In March, following all these changes, MongoDB withdrew the SSPL from the Open Source Initiative’s approval process. It was finally decided that the SSPL will only require commercial users to open source their modified code, which means that any other user can still modify and use MongoDB code for free.

Check this space for new announcements and updates regarding Homebrew and MongoDB.

Other related news in Tech

How to ace a data science interview
Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
‘I code in my dreams too’, say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Introducing CUE, an open-source data constraint language that merges types and values into a single concept

Bhagyashree R
03 Sep 2019
4 min read
Inspired by Google’s General Configuration Language (GCL), a team of developers has come up with a new language called CUE. It is an open-source data validation language that aims to simplify tasks involving defining and using data. Its applications include data validation, data templating, configuration, querying, code generation, and even scripting.

Two core aspects of CUE set it apart from other programming and configuration languages: first, it considers types as values; second, these values are ordered into a lattice, a partially ordered set. Explaining the concept behind CUE, the developers write, “CUE merges the notion of schema and data. The same CUE definition can simultaneously be used for validating data and act as a template to reduce boilerplate. Schema definition is enriched with fine-grained value definitions and default values. At the same time, data can be simplified by removing values implied by such detailed definitions. The merging of these two concepts enables many tasks to be handled in a principled way.”

These two properties account for the various advantages CUE provides:

Advantages of using CUE

Improved typing capabilities

Most configuration languages today focus mainly on reducing boilerplate and provide minimal typing support. CUE offers “expressive yet intuitive and compact” typing capabilities by unifying types and values.

Enhanced readability

CUE enhances readability by allowing a single definition in one file to apply to values in many other files, so developers need not open various files to verify validity.

Data validation

The ‘cue’ command-line tool gives you a straightforward way to define and verify schemas. You can also use CUE constraints to verify document-oriented databases such as MongoDB.

Read also: MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Easily validate backward compatibility

With CUE, you can easily verify whether a newer version of a schema is backward compatible with an older one. CUE considers an API backward compatible if it subsumes the older one, or if the old one is an instance of the new one.

Allows combining constraints from different sources

CUE is commutative, which means you can combine constraints from various sources, such as a base template, code, and client policies, in any order.

Allows normalization of data definitions

Combining constraints from many sources can result in a lot of redundancy. CUE’s logical inference engine addresses this by automatically reducing constraints. Its API allows computing and selecting between different normal forms to optimize for a certain representation.

Code generation and extraction

Currently, CUE can extract definitions from Go code and Protobuf definitions. It facilitates the use of existing sources, and a smoother transition to CUE, by allowing the annotation of existing sources with CUE expressions.

Querying data

CUE constraints can be used to find patterns in data. You can perform more elaborate querying using the ‘find’ or ‘query’ subcommands, or query data programmatically through the CUE API.
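As a small illustration of how the same definitions serve as both schema and data, here is a minimal Go sketch using CUE’s Go API (the cue.Runtime type from cuelang.org/go/cue). The CUE source, field names, and constraints are invented for the example.

```go
package main

import (
	"fmt"
	"log"

	"cuelang.org/go/cue"
)

// Both "person" declarations below are unified by CUE: the first acts
// as a schema (types plus constraints), the second as data.
const src = `
person: {
	name: string
	age:  int & >=0 & <150
}
person: {
	name: "Alice"
	age:  42
}
`

func main() {
	var r cue.Runtime
	inst, err := r.Compile("example.cue", src)
	if err != nil {
		log.Fatal(err) // malformed CUE source
	}
	person := inst.Lookup("person")
	// Concrete(true) demands fully specified values, so a missing
	// field or a violated constraint (say, age: -1) fails here.
	if err := person.Validate(cue.Concrete(true)); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("person is valid")
}
```

Changing age to -1, or to a string, makes Validate report the conflicting constraint instead of silently accepting the data.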
On a Hacker News discussion about CUE, many developers compared it with Jsonnet, a data templating language. A user wrote, “It looks like an alternative to Jsonnet which has schema validation & strict types. IMO, Jsonnet syntax is much simpler, it already has integration with IDEs such as VSCode and Intellij and it has enough traction already. Cue seems like an e2e solution so it's not only an alternative to Jsonnet, it also removes the need of JSON Schema, OpenAPI, etc. so given that it's a 5 months old project, still has too much time to evolve and mature.”

Another user added, “CUE improves in Jsonnet in primarily two areas, I think: Making composition better (it's order-independent and therefore consistent), and adding schemas. Both Jsonnet and CUE have their origin in GCL internally at Google. Jsonnet is basically GCL, as I understand it. But CUE is a whole new thing.”

Others also praised its features. “When you consider the use of this language within a distributed system it's pretty freaking brilliant,” a user commented. Another user added, “I feel like that validation feature could theoretically save a lot of people that occasional 1 hour of their time that was wasted because of a typo in a config file leading to a cryptic error message.”

Read more about CUE and its concepts in detail on its official website.

Other news in Programming languages

‘Npm install funding’, an experiment to sustain open-source projects with ads on the CLI terminal faces community backlash
“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett
Kotlin 1.3.50 released with ‘duration and time Measurement’ API preview, Dukat for npm dependencies, and much more!

Datadog releases DDSketch, a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees

Sugandha Lahoti
03 Sep 2019
4 min read
Datadog, the monitoring and analytics platform, released DDSketch (Distributed Distribution Sketch), a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees. It was presented at VLDB 2019 in August.

DDSketch is a fully-mergeable, relative-error quantile sketching algorithm

Per Wikipedia, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers.

Calculating exact quantiles can be expensive for both storage and network bandwidth, so most monitoring systems compress the data into sketches and compute approximate quantiles. However, work on quantile sketches has primarily focused on bounding the rank error of the sketch while using little memory. Unfortunately, for data sets with heavy tails, rank-error guarantees can return values with large relative errors. Also, quantile sketches should be mergeable, which means that several combined sketches must be as accurate as a single sketch of the same data. DDSketch addresses both problems: it comes with formal guarantees, is fully mergeable, and bounds the relative error. The sketch is extremely fast as well as accurate and is currently used by Datadog.

How DDSketch works

As mentioned earlier, DDSketch has relative-error guarantees, meaning it computes quantiles with a controlled relative error. For example, for a DDSketch with a relative accuracy guarantee of 1% and an expected quantile value of 100, the computed quantile value is guaranteed to be between 99 and 101. If the expected quantile value is 1000, the computed quantile value is guaranteed to be between 990 and 1010.

DDSketch works by mapping floating-point input values to bins and counting the number of values for each bin. The mapping to bins is handled by IndexMapping, while the underlying structure that keeps track of bin counts is Store. The memory size of the sketch depends on the range covered by the input values: the larger the range, the more bins are needed to keep track of the input values. As a rough estimate, when working on durations using standard parameters (mapping and store) with a relative accuracy of 2%, about 2.2kB (297 bins) are needed to cover values between 1 millisecond and 1 minute, and about 6.8kB (867 bins) to cover values between 1 nanosecond and 1 day.
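To make the bin mapping concrete, here is a minimal, illustrative Go sketch of the core idea, not Datadog’s production implementation: each positive value lands in bin i = ceil(log_gamma(v)), where gamma = (1 + alpha)/(1 - alpha), and reporting a bin’s geometric midpoint keeps the relative error of any quantile within alpha. Because the bins are fixed by gamma alone, merging two sketches is just adding bin counts, which is what makes the structure fully mergeable.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// sketch is a toy DDSketch-style structure: values go into
// geometrically sized bins so that any quantile read back from the
// bins has a relative error of at most alpha.
type sketch struct {
	gamma  float64       // bin growth factor, (1+alpha)/(1-alpha)
	counts map[int]int64 // bin index -> count of values in that bin
	total  int64
}

func newSketch(alpha float64) *sketch {
	return &sketch{gamma: (1 + alpha) / (1 - alpha), counts: map[int]int64{}}
}

// add maps a positive value v to bin ceil(log_gamma(v)) and counts it.
func (s *sketch) add(v float64) {
	i := int(math.Ceil(math.Log(v) / math.Log(s.gamma)))
	s.counts[i]++
	s.total++
}

// merge folds another sketch with the same gamma into s; since bins are
// fixed by gamma alone, the result matches a single sketch built over
// the combined data (the fully-mergeable property).
func (s *sketch) merge(o *sketch) {
	for i, c := range o.counts {
		s.counts[i] += c
	}
	s.total += o.total
}

// quantile scans bins in order until the target rank is covered, then
// returns that bin's geometric midpoint, 2*gamma^i/(gamma+1), which is
// within a factor of (1 +/- alpha) of the true q-quantile.
func (s *sketch) quantile(q float64) float64 {
	bins := make([]int, 0, len(s.counts))
	for i := range s.counts {
		bins = append(bins, i)
	}
	sort.Ints(bins)
	rank := int64(math.Ceil(q * float64(s.total)))
	var seen int64
	for _, i := range bins {
		seen += s.counts[i]
		if seen >= rank {
			return 2 * math.Pow(s.gamma, float64(i)) / (s.gamma + 1)
		}
	}
	return math.NaN() // empty sketch
}

func main() {
	s := newSketch(0.01) // 1% relative accuracy
	for v := 1.0; v <= 100000; v++ {
		s.add(v)
	}
	// The true p99 of 1..100000 is 99000; the estimate is within 1%.
	fmt.Printf("p99 ~ %.0f\n", s.quantile(0.99))
}
```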
DDSketch implementations and comparisons

Datadog has provided implementations of DDSketch in Java, Go, and Python; the Java implementation provides multiple versions of the sketch. The team also compared DDSketch against the Java implementation of HDR Histogram, the Java implementation of the GKArray version of the GK sketch, and the Java implementation of the Moments sketch.

HDR Histogram

HDR Histogram is the only other relative-error sketch in the literature. It has extremely fast insertion times (requiring only low-level binary operations), as the bucket sizes are optimized for insertion speed instead of size, and it is fully mergeable (though merging is very slow). The main downside, the researchers say, is that it can only handle a bounded (though very large) range, which might not be suitable for certain data sets. It also has no published guarantees, though the researchers note that much of the analysis presented for DDSketch can be made to apply to a version of HDR Histogram that more closely resembles DDSketch, with a slightly worse guarantee.

Moments sketch

The Moments sketch takes an entirely different approach by estimating the moments of the underlying distribution. It has notably fast merging times and is fully mergeable. The guaranteed accuracy, however, covers only the average rank error, unlike other sketches, which have guarantees for the worst-case error (whether rank or relative).

GK sketch

Compared to GK, the relative accuracy of DDSketch is comparable for dense data sets, while for heavy-tailed data sets the improvement in accuracy can be measured in orders of magnitude. The rank error is also comparable to, if not better than, that of GK. Additionally, DDSketch is much faster in both insertion and merge.

For more technical coverage, please read the research paper.

In other related news, in late August Datadog announced that it has filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission relating to a proposed initial public offering of its Class A common stock. The firm listed a $100 million raise in its prospectus, a provisional number that will change when the company sets a price range for its equity.

Other news in Tech

Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Golang 1.13 module mirror, index, and Checksum database are now production-ready
Why Perl 6 is considering a name change?