
Tech News


NordVPN reveals it was affected by a data breach in 2018

Savia Lobo
22 Oct 2019
3 min read
NordVPN, a popular virtual private network provider, revealed that it was subject to a data breach in 2018. The breach came to light a few months ago when an expired internal security key was exposed, allowing anyone outside the company unauthorized access. NordVPN did not inform users at the time because it wanted to be "100 percent sure that each component within our infrastructure is secure."

Details of the breach were traced back to March 2018, when one of the Finland-based data centers from which NordVPN rents servers showed signs of unauthorized access. The attacker gained access to the server by exploiting an unsecured remote management system left by the provider. In a press release statement, NordVPN explained that "only 1 of more than 3000 servers we had at the time was affected" and that the company immediately terminated its contract with the data center provider after it learned of the hack. Even though the company had intrusion detection systems installed to detect data breaches, they could not flag a remote management system left behind by the data center provider; NordVPN says it was unaware that such a system existed.

The company also said, "We are taking all the necessary means to enhance our security. We have undergone an application security audit, are working on a second no-logs audit right now, and are preparing a bug bounty program." It further added, "We will give our all to maximize the security of every aspect of our service, and next year we will launch an independent external audit ... of our infrastructure to make sure we did not miss anything else."

NordVPN said that the attacker did not gain access to activity logs, user credentials, or any other sensitive information. NordVPN maintains what it says is a strict "zero logs" policy. "We don't track, collect, or share your private data," the company says on its website.

In a statement to TechCrunch, NordVPN spokesperson Laura Tyrell said, "The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn't have been intercepted either." She further added, "On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN."

Based on a few records posted online, other VPN providers such as TorGuard and VikingVPN may have also been compromised. A spokesperson for TorGuard told TechCrunch that a "single server" was compromised in 2017 but denied that any VPN traffic was accessed.

Users are furious that NordVPN did not inform them in time.

https://twitter.com/figalmighty/status/1186566775330066432
https://twitter.com/bleepsec/status/1186557192549404672

To know more about this news in detail, you can read NordVPN's complete press release.

DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants
StockX confirms a data breach impacting 6.8 million customers
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator


Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs

Sugandha Lahoti
22 Oct 2019
3 min read
In a new study, security researchers from SRLabs have exposed a serious vulnerability in smart speakers from Amazon and Google: the Smart Spies attack. According to SRLabs, smart speaker voice apps (Skills for Alexa and Actions on Google Home) can be abused to eavesdrop on users or vish (voice-phish) their passwords. The researchers demonstrated that with the Smart Spies attack they can get these smart speakers to silently record users or ask for their Google account passwords, simply by uploading malicious software disguised as an Alexa Skill or Google Action.

The SRLabs team added the "�. " (U+D801, dot, space) character sequence to various locations inside the backend of a normal Alexa/Google Home app. The app tells the user that it has failed, inserts the "�. " sequence to induce a long pause, and then prompts the user with a phishing message after a few minutes. This tricks users into believing the phishing message has nothing to do with the previous app they interacted with. Using this sequence, the voice assistants keep listening for much longer than usual for further commands, and anything the user says is automatically transcribed and can be sent directly to the attacker. (A sketch of this padding trick appears at the end of this article.)

This revelation is unsurprising, considering Alexa and Google Home have been caught phishing and eavesdropping before. In June of this year, two lawsuits were filed in Seattle alleging that Amazon records voiceprints of children using its Alexa devices without their consent. Later, Amazon employees were found listening to Echo audio recordings, followed by Google's language experts doing the same.

SRLabs researchers urge users to be aware of the Smart Spies attack and the potential for malicious voice apps to abuse their smart speakers. They caution users to pay attention to third-party app sources when installing a new voice app on their speakers.

Measures suggested to Google and Amazon to avoid the Smart Spies attack

Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores. The voice app review needs to check explicitly for copies of built-in intents. Unpronounceable characters like "�. " and silent SSML messages should be removed to prevent arbitrarily long pauses in the speakers' output. Suspicious output texts including "password" deserve particular attention or should be disallowed completely.

In a statement provided to Ars Technica, Amazon said it has put new mitigations in place to prevent and detect skills from being able to do this kind of thing in the future, and that it takes down skills whenever such behavior is identified. Google also told Ars Technica that it has review processes to detect this kind of behavior and has removed the Actions created by the security researchers. The company is conducting an internal review of all third-party Actions and has temporarily disabled some while this takes place.

On Twitter, people condemned Google and Amazon and cautioned others not to buy their smart speakers.

https://twitter.com/ClaudeRdCardiff/status/1186577801459187712
https://twitter.com/Jake_Hanrahan/status/1186082128095825920

For more information, read the blog post on the Smart Spies attack by SRLabs.
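To make the padding trick concrete, here is a minimal sketch of a malicious skill response. The payload shape loosely mirrors an Alexa skill's JSON response, but the handler, field values, and phishing text are illustrative assumptions, not SRLabs' actual proof-of-concept code.

```python
# Minimal sketch of the padding trick: an unpronounceable character
# sequence repeated many times makes the speaker fall silent while the
# session stays open. Field names loosely mirror an Alexa skill's JSON
# response; this is an illustration, not SRLabs' proof of concept.
UNSPEAKABLE = "\ud801. "  # U+D801 (lone surrogate), dot, space

def build_smart_spies_response() -> dict:
    fake_error = "There was an error with the requested skill's response. "
    long_pause = UNSPEAKABLE * 40  # speaker stays silent for this stretch
    phish = ("An important security update is available for your device. "
             "Please say start update followed by your password.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": fake_error + long_pause + phish},
            "shouldEndSession": False,  # keep the microphone session open
        },
    }
```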
Google's language experts are listening to some recordings from its AI assistant
Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon is being sued for recording children's voices through Alexa without consent


Elm 0.19.1 releases with improved syntax error messages in the Elm compiler

Sugandha Lahoti
22 Oct 2019
2 min read
Yesterday, Elm 0.19.1 was released with improvements to the compiler, which now presents syntax errors in a new format that points users to their mistakes and suggests additional resources. The goal of 0.19.1 is to clean up rough edges so that Elm has a really solid foundation for newcomers, professionals, and scientists.

Evan Czaplicki, Elm's creator and the author of the improved error messages, explains in a blog post, "the new error messages points to the spot where it got stuck, but more importantly, it tries to help by (1) giving examples and (2) linking to a page that explains how imports work. It tries to help you learn!"

For example, if you miss a curly brace in your JavaScript code, the error message shows up at the very end of the file. With the improved Elm compiler, the error message points to where the mistake occurred while also suggesting a viable fix.

[Image: example of an improved syntax error message. Source: Elm's blog]

Czaplicki hopes that this project can eliminate the survivorship bias present in the programming ecosystem, which he believes is also one of the reasons why it took so long for Elm to prioritize this work. "I hope this work on syntax error messages will help make Elm more friendly and accessible, and I hope it will help make space for other language designers to prioritize this kind of project!", he adds.

Other improvements in Elm 0.19.1

Faster compilation, especially for incremental compiles
File locks, so that cached files are not corrupted when plugins run elm make multiple times on the same project at the same time
More intuitive multiline declarations in the REPL
Various bug fixes (e.g. --debug, x /= 0, type Height = Height Float in --optimize)

Developers on Twitter appreciated Elm's focus on errors.

https://twitter.com/vronnie911/status/1186377031015157762
https://twitter.com/domenkozar/status/1163795173022806016

Read more about the improved error messages in Elm 0.19.1 in this blog post.

What to expect from D programming language in the near future
OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP improvements and more
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release


Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Ververica, the stream processing company behind Apache Flink, announced the launch of Stateful Functions, an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications. It aims to bring together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read More: Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Stephan Ewen, co-founder and CTO at Ververica, says, "Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications." He further adds, "Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to [this] space."

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the need for additional infrastructure for application state management and reduces operational overhead and overall system complexity. The functions are meant to let users define independent units with a small footprint that can interact reliably with each other. Each function has persistent, user-defined state in local variables and can arbitrarily message other functions. (A toy sketch of this programming model follows at the end of this article.)

The framework simplifies use cases such as:

Asynchronous application processes (checkout, payment, logistics)
Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
Real-time context and statistics (ML feature assembly, recommenders)

The Stateful Functions runtime is based on the stream processing capability of Apache Flink and extends Flink's powerful model for state management and fault tolerance. The major advantage of this design is that state and computation are co-located on the same side of the network, which means "you don't need the round trip per record to fetch state from an external storage system nor a specific state management pattern for consistency."

Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink's DataStream API and uses a lightweight version of process functions. "The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG," states the official blog.

[Image: multiplexed Stateful Functions application. Source: Ververica blog]

As shown in the figure above, a Stateful Functions application bundles multiple functions that are multiplexed into a single Flink application, enabling them to interact consistently and reliably with each other and letting many small jobs share the same pool of resources, scaling as needed.

Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website to know more details.
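To give a feel for the programming model, here is a toy sketch in Python of a function with persistent, co-located state that messages other functions asynchronously. Every name here is invented for illustration; this is not Ververica's actual API, which the project documents separately.

```python
# Hypothetical sketch of the stateful-functions model described above:
# a function with persistent, co-located state that sends asynchronous
# messages to other functions. NOT Ververica's actual API.
from dataclasses import dataclass, field

@dataclass
class Context:
    state: dict = field(default_factory=dict)   # persisted per function instance
    outbox: list = field(default_factory=list)  # asynchronous messages to peers

    def send(self, target: str, message: dict) -> None:
        self.outbox.append((target, message))

def checkout(ctx: Context, message: dict) -> None:
    # State lives next to the computation: no round trip per record to
    # an external storage system.
    items = ctx.state.setdefault("items", [])
    items.append(message["item"])
    if message.get("finalize"):
        ctx.send("payment", {"items": items})
```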
OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more
Developers ask for an option to disable Docker Compose from automatically reading the .env file
Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more

Bhagyashree R
18 Oct 2019
3 min read
Yesterday, the team behind OpenBSD, a Unix-like operating system, announced the release of OpenBSD 6.6. This release has the GNU Compiler Collection (GCC) disabled in its base packages for i386 and ARMv7 and expands LLVM Clang platform support. OpenBSD 6.6 also features various SMP improvements, improved Linux compatibility with ACPI interfaces, a number of new hardware drivers, and more. It ships with OpenSSH 8.1, LibreSSL 3.0.2, OpenSMTPD 6.6, and other updated packages.

Read also: OpenSSH code gets an update to protect against side-channel attacks

Key updates in OpenBSD 6.6

Unlocked system calls: OpenBSD 6.6 comes with unlocked getrlimit and setrlimit system calls, which control maximum system resource consumption, as well as unlocked read and write system calls for reading input and writing output. (A short illustration of the rlimit interface follows at the end of this article.)

Improved hardware support: OpenBSD 6.6 comes with Linux-compatible ACPI interfaces, and ACPI support is enabled in the radeon and amdgpu drivers. The Time Stamp Counter (TSC) is re-enabled as the default AMD64 time source, and TSC synchronization is added for multiprocessor machines. This release also supports the cryptographic coprocessor found on newer AMD Ryzen CPUs/APUs.

IEEE 802.11 wireless stack improvements: The ifconfig nwflag handling is now repaired, and a new stayauth nwflag is added, which you can set to ignore deauth frames to protect your system from spoofing attacks. Support for 802.11n Tx aggregation is added to net80211 and the iwn driver. Starting with OpenBSD 6.6, all wireless drivers submit a batch of received packets to the network stack during one interrupt, instead of submitting them individually.

Security improvements: The unveil command is updated to improve application behavior when encountering hidden filesystem paths. OpenBSD 6.6 has improved mitigations against a number of vulnerabilities, including the Spectre side-channel vulnerability in Intel CPUs and Intel's Microarchitectural Data Sampling vulnerability. This release also introduces malloc_conceal and calloc_conceal, which return memory in pages marked MAP_CONCEAL and call freezero on free.

Read also: Seven new Spectre and Meltdown attacks found

In a discussion on Hacker News, many users expressed their excitement. A user commented, "Just keeps getting better and better every release. I wish they would add an easy encryption option in the installer. You can enable full-disk encryption, but you have to mess with the bioctl settings, which potentially scares off new users."

A few users also wondered why this release has U2F support and Bluetooth disabled for security. A user explained, "I'm not sure why U2F would be 'disabled for security'. I guess it's just that nobody has implemented all the required things. For the USB tokens, you need userspace USB HID access and hotplug notifications. I did that in Firefox for FreeBSD."

These were some of the updates in OpenBSD 6.6. Check out the official announcement to know more.
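As a quick illustration of what getrlimit and setrlimit control, here is a small Python example using the standard resource module, which wraps the same system calls on Unix-like systems. The unlocking work itself is internal to the OpenBSD kernel and invisible at this level.

```python
# Illustration of the getrlimit/setrlimit interface that OpenBSD 6.6
# now serves without the kernel lock; Python's standard resource
# module wraps the same system calls on Unix-like systems.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Lower this process's soft limit (it may never exceed the hard limit).
resource.setrlimit(resource.RLIMIT_NOFILE, (min(256, hard), hard))
```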
OpenBSD 6.4 released
OpenSSH code gets an update to protect against side-channel attacks
OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions


Developers ask for an option to disable Docker Compose from automatically reading the .env file

Bhagyashree R
18 Oct 2019
3 min read
In June this year, Jonathan Chan, a software developer, reported that Docker Compose automatically reads from the .env file. Since other systems also use the same file for parsing and processing variables, this was creating confusion and breaking compatibility with other .env utilities.

Docker Compose has a docker-compose.yml config file used for deploying, combining, and configuring multiple multi-container Docker applications. The .env file is used for supplying values to the docker-compose.yml file; in it, default environment variables are specified as key-value pairs.

"With the release of 1.24.0, the feature where Compose will no longer accept whitespace in variable names sourced from environment files (this matches the Docker CLI behavior) breaks compatibility with other .env utilities. Although my setup does not use the variables in .env for docker-compose, docker-compose now fails because the .env file does not meet docker-compose's format," Chan explains.

This is not the first time this issue has been reported. Earlier this year, a user opened an issue on the GitHub repo describing that, after upgrading Compose to 1.24.0-rc1, its automatic parsing of the .env file was failing. "I keep export statements in my .env file so I can easily source it in addition to using it as a standard .env. In previous versions of Compose, this worked fine and didn't give me any issues, however with this new update I instead get an error about spaces inside a value," he explained in his report. (The sketch at the end of this article shows why such shell-style files trip up a stricter parser.)

As a solution, Chan has proposed, "I propose that you can specify an option to ignore the .env file or specify a different .env file (such as .docker.env) in the docker-compose.yml file so that we can work around projects that are already using the .env file for something else."

This sparked a discussion on Hacker News where users suggested a few workarounds. "This is the exact class of problem that docker itself attempts to avoid. This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker," a user commented. Another user recommended, "You can run docker-compose.yml in any folder in the tree but it only reads the .env from cwd. Just CD into someplace and run docker-compose."

Some users also pointed out the lack of a two-factor authentication mechanism on Docker Hub. "Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something / great at this point. As an attacker, I would put a great deal of focus on attacking a company's registries on Docker Hub. They can't have 2FA, so the work/reward ratio is quite high," a user commented. Others recommended setting up a time-based one-time password (TOTP) instead.

Check out the reported issue on the GitHub repository.
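To show why files written for shell sourcing clash with a stricter parser, here is a hypothetical tolerant .env reader in Python. It is not Compose's actual parser, just an illustration of the export-prefix and whitespace issues discussed above.

```python
# Illustrative .env parser that tolerates shell-style files, unlike
# the stricter parsing described above. A hypothetical helper, not
# the parser docker-compose actually uses.
def parse_env(path: str) -> dict:
    env = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue                        # skip blanks and comments
            if line.startswith("export "):      # shell-sourcing style
                line = line[len("export "):]
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env
```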
Amazon EKS Windows Container Support is now generally available
GKE Sandbox: A gVisor based feature to increase security and isolation in containers
6 signs you need containers

Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!

Vincy Davis
18 Oct 2019
4 min read
Yesterday, Canonical announced the release of Ubuntu 19.10, its fastest Ubuntu release yet, with significant performance improvements aimed at accelerating developer productivity in AI/ML. This release brings enhanced edge computing capabilities with the addition of strict confinement to MicroK8s, which guarantees complete isolation and provides a secure, production-grade Kubernetes environment. This allows MicroK8s add-ons like Istio, Knative, CoreDNS, Prometheus, and Jaeger to be deployed securely at the edge with a single command. Ubuntu 19.10 also embeds NVIDIA drivers in the ISO image to improve performance for gamers and AI/ML users.

The CEO of Canonical, Mark Shuttleworth, says, "With the 19.10 release, Ubuntu continues to deliver strong support, security and superior economics to enterprises, developers and the wider community."

The Ubuntu team has notified users that Ubuntu 19.10 will only be supported for 9 months, until July 2020. Users who need long-term support are advised to stay on Ubuntu 18.04 LTS.

What's new in Ubuntu 19.10?

Updated packages

Linux kernel: The new release is based on the Linux 5.3 series and supports AMD Navi GPUs, ARM SoCs, ARM Komeda display, and Intel Speed Select on Xeon servers. To improve boot speed, the default kernel compression algorithm has moved to lz4 on most architectures, and the default initramfs compression algorithm has changed to lz4 on all architectures.

Toolchain upgrades: It also brings new upstream releases of glibc 2.30, OpenJDK 11, Rust 1.37, GCC 9.2, updated Python 3.7.5, Python 3.8.0 (interpreter only), Ruby 2.5.5, PHP 7.3.8, Perl 5.28.1, and Go 1.12.10.

Read More: Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Security improvements

Ubuntu 19.10 enables additional default hardening options in GCC, including support for both stack clash protection and control-flow integrity protection.

Ubuntu Desktop

GNOME 3.34 desktop: Ubuntu 19.10 includes GNOME 3.34, which brings many bug fixes, some new features, and a significant improvement in responsiveness and speed. It allows grouping icons in the Activities overview and has improved wallpaper and Wi-Fi settings.

ZFS on root: This is included as an experimental feature in this release. Users can create the ZFS file system and the partition layout directly from the installer.

Read More: Ubuntu 19.10 will now support experimental ZFS root file-system install option

NVIDIA-specific improvements: The driver is now included in the ISO, with improved startup reliability when the NVIDIA driver is in use. Ubuntu 19.10 also brings improved smoothness and frame rates for NVIDIA.

Other new features in Ubuntu 19.10

A USB drive can be plugged in and accessed directly from the dock.
New themes, Yaru light and dark variants, are now available.
Support for DLNA sharing is now available by default.

Ubuntu Server images: Ubuntu 19.10 promotes the production-ready ppc64el and arm64 live-server ISO images for installing Ubuntu Server on bare metal on those two architectures.

Raspberry Pi: The Raspberry Pi 32-bit and 64-bit preinstalled images (raspi3) are supported in this release. Ubuntu images now support almost all devices in the Raspberry Pi family: Pi 2, Pi 3B, Pi 3B+, CM3, CM3+, and Pi 4.

Users have appreciated the new features in Ubuntu 19.10.

https://twitter.com/dont39350/status/1184902506238926850
https://twitter.com/ImpWarfare/status/1184844081576456193
https://twitter.com/robinjuste/status/1183891524242857986

These are some of the selected updates in Ubuntu 19.10; read the release notes for more information. You can also check out the Ubuntu blog for more details on the release.

Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
What to expect from D programming language in the near future
Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more


Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release

Sugandha Lahoti
18 Oct 2019
3 min read
The team behind Swift shared a new blog post yesterday detailing the new diagnostic architecture being implemented as part of the upcoming Swift 5.2 release (expected early November). This architecture aims to improve the diagnostics produced by the compiler and to make it easier for implementors of new features to improve and port existing diagnostics.

The Swift compiler uses a type checker to ensure the correctness of a program. The type checker enforces rules about how types are used in source code and surfaces violations of those rules. However, it guesses at the exact location of an error, which is not helpful in certain scenarios because the result is neither specific nor actionable. With the new diagnostic infrastructure, the team is implementing a type checker that attempts to "fix" problems right at the point where they occur while remembering the fixes it has applied. This way, the type checker can pinpoint errors in more kinds of programs.

New constraint fix resolves inconsistent situations

The type checker converts source code into a constraint system, which represents relationships among the types in the code. Constraint system processing has two phases: generation and solving. Constraint generation produces a constraint system that relates the types of the various subexpressions within an expression; the constraint solver then takes the given set of constraints and determines the most specific type binding for each of the type variables in the constraint system.

To improve constraint solving, the new diagnostic infrastructure employs a "constraint fix" that resolves inconsistent situations where the solver gets stuck with no other types to attempt. This fix captures all useful information about the error location from the solver and uses it later for diagnostics. While the former approach guesses where the error is located, the new approach has a symbiotic relationship with the solver, which provides all of the error locations to it.

How a constraint fix works

A constraint fix is created whenever a constraint failure is detected. It captures three crucial pieces of information about the failure: the kind of failure that occurred, the location in the source code where the failure came from, and the types and declarations involved. The constraint solver accumulates these fixes; once it arrives at a solution, it looks at the fixes that were part of that solution and produces actionable errors or warnings. (A toy model of this idea is sketched below.)

The Swift team has shared examples of improved diagnostics and SwiftUI examples to better demonstrate the workings of this new diagnostic infrastructure.

People are quite excited about these error improvements in Swift and shared their views on the associated thread in the Swift Forums.

"Omg better error messages! That's so awesome! Right now error messages are the worst part of swift, I'm excited. I hope it will be the end of types called _ in error messages"

"Awesome work @xedin! The blog post is highly informative and I thoroughly enjoyed reading it. I very much look forward to seeing the new diagnostic architecture in action."
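As a back-of-the-envelope model of the record-the-fix idea, here is a toy sketch in Python. All names and structures are invented for illustration and bear no relation to the Swift compiler's actual implementation.

```python
# Toy model of the "constraint fix" idea: when the solver detects a
# failure it records what went wrong, where, and which types were
# involved, then keeps solving. Invented names; the real logic lives
# inside the Swift compiler.
from dataclasses import dataclass

@dataclass
class ConstraintFix:
    kind: str        # e.g. "cannot_convert"
    location: str    # source location the failure came from
    involved: tuple  # types and declarations involved in the failure

def solve(constraints: list) -> list:
    fixes = []
    for c in constraints:
        if not c["satisfiable"]:
            # Record the failure instead of bailing out immediately.
            fixes.append(ConstraintFix(c["kind"], c["loc"], tuple(c["types"])))
    # Once a solution is reached, accumulated fixes become diagnostics.
    return [f"{f.location}: {f.kind} involving {f.involved}" for f in fixes]
```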
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Apple releases the native SwiftUI framework with declarative syntax, live editing, and support of Xcode and more
Swift is improving the UI of its generics model with the "reverse generics" system


What to expect from D programming language in the near future

Fatema Patrawala
17 Oct 2019
3 min read
On Tuesday, Atila Neves, the deputy leader of the D programming language, posted about his vision for D and what he would like to do with the language in the near future.

Make D the default language for web dev and mobile applications

D's static reflection and code generation capabilities make it an ideal candidate for implementing a codebase that needs to be called from several different languages and environments (e.g. Python, Java, R). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL), then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary: one can write the production code in D and have libraries automatically make the code callable from other languages. Since it is easy to write D code that runs as fast as or faster than the alternatives, this would be a win on all fronts.

Memory safety for D

Atila notes that, as a systems programming language with value types and pointers, D isn't memory safe by default. He says that DIP1000 is a step in the right direction, but D still needs to become memory safe unless programmers explicitly opt out via a @trusted block or function. The DIP1000 proposal includes a scope mechanism that knows when the lifetime of a reference is over, by guaranteeing that a reference cannot escape its lexical scope. With it, memory management schemes other than tracing garbage collection can be implemented safely.

Safe and easy concurrency in D

As per Atila, safe and easy concurrency in D is mostly achieved through actor models, but the team still needs to finalize shared semantics and make everything @safe as well.

Centralizing all reflection needs with an API

Instead of disparate ways of getting things done with fragmented APIs (__traits, std.traits, custom code), Atila would like a library that centralizes all reflection needs behind one great API.

Easy interoperability for C++ developers

C++ has been successful so far in making the transition from C virtually seamless. Atila wants current C++ programmers with legacy codebases to be able to start writing D code just as easily.

Faster development times

D needs a fast interpreter so that developers can skip machine code generation and linking. This should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to end users.

String interpolation in D

Code generation is one of D's greatest strengths, and token strings enable visually pleasing blocks of code that are actually "just strings". String interpolation would make this vastly easier to use.

To know more about the D programming language, check out the official post by Atila Neves.

"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
The V programming language is now open source – is it too good to be true?
Rust's original creator, Graydon Hoare on the current state of system programming and safety


.NET Framework API Porting Project concludes with .NET Core 3.0

Savia Lobo
17 Oct 2019
3 min read
Two days ago, Immo Landwerth, a program manager on the .NET team, announced that they are concluding the .NET Framework API porting project. The team also plans to release more of the .NET Framework codebase under the MIT license on GitHub to allow the community to create OSS projects for technologies that will not be brought to .NET Core; for example, there are already community projects for CoreWF and CoreWCF.

https://twitter.com/shanselman/status/1184214679779864576

In May this year, Microsoft announced that the future of .NET will be based on .NET Core. At Build 2019, Scott Hunter stated that AppDomains, remoting, Web Forms, WCF server, and Windows Workflow won't be ported to .NET Core.

"With .NET Core 3.0, we're at the point where we've ported all technologies that are required for modern workloads, be it desktop apps, mobile apps, console apps, web sites, or cloud services," Landwerth writes.

.NET Core 3.0, released last month, added WPF and WinForms, which increased the number of .NET Framework APIs ported to .NET Core to over 120K, more than half of all .NET Framework APIs. The team has also added about 62K APIs to .NET Core that don't exist in the .NET Framework. Comparing total API counts, .NET Core has about 80% of the API surface of .NET Framework.

"Since we generally no longer plan to bring existing technologies from .NET Framework to .NET Core we'll be closing all issues that are labeled with port-to-core," Landwerth further mentions.

Following this announcement on GitHub, many users enquired about the change. A user asked, "Is CSharpCodeProvider ever going to happen in Core? The CodeDom API arrived somewhere in the 2.x timeframe, but it's a runtime fail even under 3.0 - are we going to need to re-write for some kind of explicitly Roslyn scripting?" To this, Landwerth replied, "CSharpCodeProvider is supported on .NET Core, but only the portions that deal with CodeDOM creation. You can't compile because this would require a compiler and the .NET Core runtime doesn't include a compiler (i.e. Roslyn). However, we might be able to do some gymnastics, like reflection, where the app can pull that in. Still tracked under #12180."

https://twitter.com/terrajobst/status/1183832593197744129

To know more about this announcement in detail, read the official GitHub post.

.NET 5 arriving in 2020!
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and EF 6.3

Severity issues raised for Python 2 Debian packages for not supporting Python 3

Fatema Patrawala
16 Oct 2019
5 min read
On Monday, Neil Williams, a software developer from Linux CodeHelp, raised severity issues for Python 2 leaf packages in Debian that do not support Python 3, urging Debian maintainers to remove Python 2 from all Debian packages.

He specifically mentions one package, Calibre, an e-book manager which is completely open source and licensed under the GNU GPL v3. Calibre is written primarily in Python, with some C/C++ code for speed and system interfacing, but it is not yet compatible with Python 3 and requires at least Python 2.7.9. In 2017, a user raised an issue on the Calibre tracker: "Python 2 is retiring in thirty months. Calibre needs to convert to Python 3." Kovid Goyal, Calibre's author, responded, "No, it doesn't. I am perfectly capable of maintaining python 2 myself. Far less work than migrating the entire calibre codebase." Now the latest Calibre version requires Python modules that are no longer available for Python 2.

Gregor Riepl, a systems engineer, responded to Neil, "As of now, calibre is not of sufficient quality to be part of a Debian release and until it drops all Python2 requirements, it must be considered RC buggy." This means that Calibre >= 4.0 will, for the foreseeable future, not be available in Debian; Calibre 3.48 will be the last version that can run on Debian until upstream Calibre switches to Python 3.

Riepl further asked Neil whether his quality argument stems from the Calibre author's resistance to migrating to Python 3. Neil responded, "No, it is based just on the removal of Python2 from Debian and avoiding special cases. Right now, any and every package in Debian testing which requires Python2 and has no Python3 alternative in Debian or ready for upload is of poor quality for no other reason than that. All such packages are of such poor quality that the package should be removed from testing - in an orderly manner, leaf packages first. That is in the best interests of all users, despite what may or may not happen to any particular subset(s) of users. The decision flow is easy - if the answer in each case is 'no', then move on to the next and if you get to the bottom, the bug should be RC.

* Has the package already been removed from testing?
* Is a Python3-only version already in Debian?
* Is a Python3-only version available upstream?
* Does the package have any reverse dependencies?
* If you get here, it is already too late, there have already been enough warnings. Upgrade the bug to RC and get the package auto-removed from testing."

(This decision flow is restated as a small function at the end of this article.)

Neil said he was aware of the history of Calibre and understood what would happen if it were no longer a part of Debian, but that did not matter, as the removal of Python 2 is more important for the next Debian release. He also believes that Calibre has a relatively large user base that doesn't know much or care about the Python 2 deprecation; users will simply perceive dropping Calibre as a bad move on Debian's side and rush towards other packages of significantly lower quality.

He further concluded, "Calibre is nothing special - it's a Python2 leaf package like vland and tftpy and any one of far too many others. Calibre can stay in unstable - it will go FTBFS, of course, but that isn't a problem either, IMHO. It's calibre's problem - not Debian's problem. There's always the option of users installing the old Python2 stuff from Buster to keep calibre hobbling along. Debian is the higher priority here. Calibre would be nice to have but it does not deserve to cause delays on anybody else's voluntary effort. No package has that right."

Community feels Python 2 will result in unmaintained runtime and libraries in packages

On Hacker News, users are discussing how the push to migrate packages to Python 3 will leave Python 2 as an entirely unmaintained runtime and set of libraries in the package repository.

One user comments, "Historically, Debian hasn't particularly objected to packaging obsolete versions of programming languages without upstream support. I doubt anyone is checking for potential security problems in Algol 68 and Fortran 77 implementations that Debian ships, and I don't think the people using those packages are particularly inconvenienced by that. It seems a shame that the social pressure to persuade people to port their code to Python 3 means that Debian is going to have weaker support for 10-year-old Python than 40-year-old Fortran. In particular, there are ongoing efforts to try to make it the normal thing for scientists to make the programs they ran on their data available so that their results can be reproduced; aggressively dropping older programming language implementations rather gets in the way of that."

Another user responded, "This isn't about 'languages'. It's about software! Algol 68 and Fortran 77 may have stale (but maintained) compilers or interpreters in the package repository. Starting very soon - Python 2 will have an entire set of unmaintained runtime and libraries in the package repository. You know - actual, officially, unmaintained software! Unmaintained software that other packages, including Calibre in this example, further build on. Of course they're throwing this out."
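Neil's checklist reads almost like pseudocode. Purely as a restatement of the steps quoted above, here it is as a small Python function; the package attributes are hypothetical, and no such Debian tool exists.

```python
# Neil Williams' decision flow, restated as a function. A restatement
# of the quoted checklist, not an official Debian tool.
def should_raise_to_rc(pkg) -> bool:
    if pkg.already_removed_from_testing:
        return False
    if pkg.python3_version_in_debian:
        return False
    if pkg.python3_version_available_upstream:
        return False
    if pkg.has_reverse_dependencies:
        return False
    # "If you get here, it is already too late": upgrade the bug to RC
    # so the package is auto-removed from testing.
    return True
```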
Python 3.8 is now available with walrus operator, positional-only parameters support for Vectorcall, and more
Core Python team confirms sunsetting Python 2 on January 1, 2020
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3


OpenAI’s AI robot hand learns to solve a Rubik Cube using Reinforcement learning and Automatic Domain Randomization (ADR)

Savia Lobo
16 Oct 2019
5 min read
A team of OpenAI researchers shared their research on training neural networks to solve a Rubik's Cube with a human-like robotic hand. The researchers trained the neural networks only in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). In their research paper, the team demonstrates that a system trained only in simulation can handle situations it never saw during training.

"Solving a Rubik's Cube one-handed is a challenging task even for humans, and it takes children several years to gain the dexterity required to master it. Our robot still hasn't perfected its technique though, as it solves the Rubik's Cube 60% of the time (and only 20% of the time for a maximally difficult scramble)," the researchers mention on their official blog. Kociemba's algorithm was used alongside the RL algorithms for picking the solution steps.

Read Also: DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help

What is Automatic Domain Randomization (ADR)?

Domain randomization enables networks trained solely in simulation to transfer to a real robot. However, it was a challenge for the researchers to capture real-world physics in the simulation environment: factors like friction, elasticity, and dynamics are difficult to measure for complex objects like Rubik's Cubes or robotic hands, and domain randomization alone was not enough.

To overcome this, the OpenAI researchers developed Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation. In ADR, the neural network first learns to solve the cube in a single, non-randomized environment. As the network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically. This makes the task harder, since the network must now learn to generalize to more randomized environments. The network keeps learning until it again exceeds the performance threshold, when more randomization kicks in, and the process repeats. (A minimal sketch of this loop appears at the end of this article.)

"The hypothesis behind ADR is that a memory-augmented network combined with a sufficiently randomized environment leads to emergent meta-learning, where the network implements a learning algorithm that allows itself to rapidly adapt its behavior to the environment it is deployed in," the researchers state.

[Image: ADR overview. Source: OpenAI.com]

OpenAI's robot hand and the Giiker Cube

The researchers used the Shadow Dexterous E Series Hand (E3M5R) as a humanoid robot hand and the PhaseSpace motion capture system to track the Cartesian coordinates of all five fingertips, along with RGB Basler cameras for vision-based pose estimation.

Sensing the state of a Rubik's Cube from vision alone is a challenging task. The team therefore used a "smart" Rubik's Cube with built-in sensors and a Bluetooth module as a stepping stone, and used a Giiker cube for some of the experiments to test the control policy without compounding errors made by the vision model's face angle predictions. The hardware is based on the Xiaomi Giiker cube, which is equipped with a Bluetooth module that senses the state of the cube. However, it is limited to a face angle resolution of 90°, which is not sufficient for state tracking on the robot setup. The team therefore replaced some components of the original Giiker cube with custom ones in order to achieve a tracking accuracy of approximately 5 degrees.

A few challenges faced

OpenAI's method currently solves the Rubik's Cube 20% of the time when applying a maximally difficult scramble that requires 26 face rotations; for simpler scrambles that require 15 rotations to undo, the success rate is 60%. The researchers consider an attempt failed when the Rubik's Cube is dropped or a timeout is reached. However, the network is capable of solving the cube from any initial condition, so if the cube is dropped, it is possible to put it back into the hand and continue solving.

The neural network is much more likely to fail during the first few face rotations and flips. The team says this happens because the network needs to balance solving the cube with adapting to the physical world during those early rotations and flips.

The team also implemented a few perturbations during trials, including:

Resetting the hidden state: During a trial, the hidden state of the policy was reset. This leaves the environment dynamics unchanged but requires the policy to re-learn them, since its memory has been wiped.
Re-sampling environment dynamics: This corresponds to an abrupt change of environment dynamics by resampling the parameters of all randomizations while leaving the simulation state and hidden state intact.
Breaking a random joint: This corresponds to disabling a randomly sampled joint of the robot hand by preventing it from moving. This is a more nuanced experiment, since the overall environment dynamics are the same but the way the robot can interact with the environment has changed.

https://twitter.com/OpenAI/status/1184145789754335232

Here's the complete video of the robot hand solving the Rubik's Cube single-handedly:
https://www.youtube.com/watch?time_continue=84&v=x4O8pojMF0w

To know more about this research in detail, you can read the research paper.
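To make the ADR feedback loop concrete, here is a minimal sketch in Python. The threshold, the two randomized parameters, and the policy/environment interfaces are all invented for illustration; OpenAI's actual implementation randomizes far more parameters, each with its own range.

```python
# Minimal sketch of the ADR loop described above: start from a single
# non-randomized environment and widen the randomization ranges each
# time the policy clears a performance bar. Interfaces are invented.
import random

def adr_training_loop(policy, make_env, threshold=0.8, step=0.05):
    width = 0.0  # current width of the randomization ranges
    while width < 1.0:
        # Sample an environment whose physics are perturbed within the
        # current ranges (only two illustrative parameters shown).
        env = make_env(
            friction=1.0 + random.uniform(-width, width),
            cube_size=1.0 + random.uniform(-width, width),
        )
        success_rate = policy.train_and_evaluate(env)
        if success_rate >= threshold:
            width += step  # task got too easy: increase randomization
    return policy
```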
Open AI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
Build your first Reinforcement learning agent in Keras [Tutorial]


AWS will be sponsoring the Rust Project

Savia Lobo
15 Oct 2019
3 min read
Yesterday, AWS announced that it is sponsoring the Rust project. Rust has seen growing use inside AWS, where it powers performance-sensitive components of popular services such as Lambda, EC2, and S3. Tech giants such as Google, Microsoft, and Mozilla also use Rust for writing and maintaining fast, reliable, and efficient code.

Alex Crichton, Rust Core Team member, says, "We're thrilled that AWS, which the Rust project has used for years, is helping to sponsor Rust's infrastructure. This sponsorship enables Rust to sustainably host infrastructure on AWS to ship compiler artifacts, provide crates.io crate downloads, and house automation required to glue all our processes together. These services span a myriad of AWS offerings from CloudFront to EC2 to S3. Diversifying the sponsorship of the Rust project is also critical to its long-term success, and we're excited that AWS is directly aiding this goal."

Why AWS chose Rust

The Rust project maintainers say AWS chose Rust for its blazingly fast and memory-efficient performance; its rich type system and ownership model, which guarantee memory safety and thread safety; its great documentation, friendly compiler with useful error messages, and top-notch tooling; and many other features. Rust has also been voted the "Most Loved Language" in Stack Overflow's developer survey for the past four years.

Rust also has an inclusive community along with top-notch libraries such as:

Serde, for serializing and deserializing data.
Rayon, for writing parallel and data-race-free code.
Tokio/async-std, for writing non-blocking, low-latency network services.
tracing, for instrumenting Rust programs to collect structured, event-based diagnostic information.

Rust too uses AWS services

The Rust project uses AWS services to:

Store release artifacts such as compilers, libraries, tools, and source code on S3.
Run ecosystem-wide regression tests with Crater on EC2.
Operate docs.rs, a website that hosts documentation for all packages published to the central crates.io package registry.

The AWS community is excited to welcome Rust, and it will be interesting to see the combination of Rust and AWS in future implementations. A few users, however, are unsure about the nature of the sponsorship. A Redditor commented, "It's not clear exactly what this means - is it about how AWS provides S3/EC2 services for free to the Rust project already (which IIRC has been ongoing for some time), or is it an announcement of something new ($$$ or developer time being contributed?)?"

To know more about this announcement in detail, read AWS' official blog post.

Amazon announces improved VPC networking for AWS Lambda functions
Reddit experienced an outage due to an issue in its hosting provider, AWS
Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Ionic React released; Ionic Framework pivots from Angular to a native React version

Sugandha Lahoti
15 Oct 2019
3 min read
Yesterday, the team behind the Ionic Framework announced the general availability of Ionic React, a native React version of the Ionic Framework, pivoting from its traditionally Angular-focused approach. "Ionic React makes it easy to build apps for iOS, Android, Desktop, and the web as a Progressive Web App," the team states in a blog post. It uses TypeScript and combines the core Ionic experience with tooling and APIs tailored to React developers. It is a fully supported, enterprise-ready offering with services, advisory, tooling, and supported native functionality.

@ionic/react projects work like standard React projects, leveraging react-dom and the setup normally found in a Create React App (CRA) app. For routing and navigation, React Router is used under the hood. One difference is the use of TypeScript, which provides a more productive experience; to use plain JavaScript, you can rename files to a .js extension and remove the type annotations from each file.

Explaining the reason behind choosing React, the team says, "With Ionic, we envisioned being able to build rich JavaScript-powered controls and distribute them as simple HTML tags any web developer could assemble into an awesome app. We realized that building a version of the Ionic Framework for React made perfect sense. Combined with the fact that we had several React fans join the Ionic team over the years, there was a strong desire internally to see Ionic Framework support React as well."

How is Ionic React different from React Native?

The team realized that there was a gap in the React ecosystem that Ionic could fill as an easier mobile and Progressive Web App development solution. Developers were also interested in incorporating it into their existing React Native apps, building more screens of their app inside a native WebView frame.

There were two major reasons why the Ionic team built @ionic/react. First, it is DOM-native and uses the standard react-dom library; in contrast, React Native builds an abstraction on top of iOS and Android native UI controls. The team states, "When we looked at installs for react-dom compared to react-native, it was clear to us that vastly more React development was happening in the browser and on top of the DOM than on top of the native iOS or Android UI systems."

Second, Ionic is one of the most popular frameworks for building PWAs, most notably through the Stencil project. React Native, on the other hand, does not officially support Progressive Web Apps; PWAs are, at best, an afterthought in the React Native ecosystem.

@ionic/react has been well received by developers on Twitter.

https://twitter.com/dipakcreation/status/1183974237125693441
https://twitter.com/MichaelW_PWC/status/1183836080170323968
https://twitter.com/planetoftheweb/status/1183809368934043653

You can go through Ionic's blog for additional information and to get started.

Ionic React RC is now out!
Ionic 4.1 named Hydrogen is out!
React Native Vs Ionic: Which one is the better mobile app development framework?


After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension

Bhagyashree R
15 Oct 2019
6 min read
Last week, Raymond Hill, the developer behind uBlock Origin, shared that the extension's dev build 1.22.5rc1 was rejected by Google's Chrome Web Store (CWS). uBlock Origin is a free and open source browser extension widely used for content filtering and ad blocking.

Google stated that the extension did not comply with its extension standards, as it bundles different purposes into a single extension. An email to Hill from Google reads, "Do not create an extension that requires users to accept bundles of unrelated functionality, such as an email notifier and a news headline aggregator."

Hill, on a GitHub issue, called this "stonewalling" and noted that in the future users may have to switch to another browser to use uBlock Origin. He does plan to upload the stable version: "I will upload stable to the Chrome Web Store, but given 1.22.5rc2 is rejected, logic dictates that 1.23.0 will be rejected. Actually, logic dictates that 1.22.5rc0 should also be rejected and yet it's still available in the Chrome Web Store."

Users' reactions to Google rejecting the uBlock Origin dev build

The news sparked a discussion on Hacker News and Reddit. Users speculated that this outcome is a result of the "crippling" update Google has introduced in Chrome (currently in the beta and dev channels): deprecating the blocking ability of the webRequest API. The webRequest API permits extensions to intercept requests and modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the blocking form of this API will be limited, while the non-blocking form, which permits extensions to merely observe network requests, will still be allowed.

In place of the webRequest API, Google has introduced the declarativeNetRequest API, which allows adding up to 30,000 rules, 5,000 dynamic rules, and 100 pages. (A sample rule is sketched below.) Due to its limiting nature, many ad blocker developers and maintainers have said this API will hamper the capabilities of modern content-blocking extensions. Google's reasoning for introducing this change is that the new API is much more performant and provides better privacy guarantees; many developers think otherwise. Hill had previously shared his thoughts on deprecating the blocking ability of the webRequest API: "Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well-crafted extensions. Furthermore, if performance concerns due to the blocking nature of the webRequest API was their real motive, they would just adopt Firefox's approach and give the ability to return a Promise on just the three methods which can be used in a blocking manner."

Many users also mentioned that Chrome is using its dominance in the browser market to dictate what types of extensions are developed and used. A user commented, "As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence." Others suggested avoiding the drama by simply switching to another browser, mainly Firefox: "Or you could cease contributing to the Blink monopoly on the web and join us on Firefox. Microsoft is no longer challenging Google in this space," a user added.
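For reference, a declarativeNetRequest rule is a static declaration rather than code that runs per request. The sketch below shows the general shape of one blocking rule, written as a Python dict purely for illustration; in a real extension it would be an entry in a JSON ruleset file, and the filter pattern and resource types here are made-up examples.

```python
# General shape of a declarativeNetRequest blocking rule, shown as a
# Python dict for illustration; in an extension this would live in a
# JSON ruleset declared in the Manifest V3 manifest.
block_tracker_rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        # Made-up filter: block this host's scripts and XHRs.
        "urlFilter": "||tracker.example^",
        "resourceTypes": ["script", "xmlhttprequest"],
    },
}
```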
Some others supported Google, saying that Hill could have moved some functionality into a separate extension: "It's an older rule. It does technically apply here, but it's not a great look that they're only enforcing it now. If Gorhill needed to, some of that extra functionality could be moved out into a separate extension. uBlock has done this before with uBlock Origin Extra. Most of the extra features (eg. remote font blocking) aren't a huge deal, in my opinion."

How Google reacted to the public outcry

Simeon Vincent, a developer advocate for Chrome extensions, commented on a Reddit discussion that the updated extension was approved and published on the Chrome Web Store: "This morning I heard from the review team; they've approved the current draft so next publish should go through. Unfortunately it's the weekend, so most folks are out, but I'm planning to follow up with u/gorhill4 with more details once I have them. EDIT: uBlock Origin development build was just successfully published. The latest version on the web store is 1.22.5.102."

He further said that this whole confusion arose from a "clunkier" developer communication process. When users asked him about the Manifest V3 changes, he shared, "We've made progress on better supporting ad blockers and content blockers in general in Manifest V3. We've added rule modification at runtime, bumped the rule limits, added redirect support, header modification, etc. And more improvements are on the way." He added, "But Gorhill's core objection is to removing the blocking version of webRequest. We're trying to move the extension platform in a direction that's more respectful of end-user privacy, more secure, and less likely to accidentally expose data – things webRequest simply wasn't designed to do."

Chrome ignores the autocomplete=off property

In other Chrome-related news, it was reported that Chrome continues to autofill forms even if you disable it using the autocomplete="off" attribute. A user commented, "I've had to write enhancements for Web apps several times this year with fields which are intended to be filled by the user with information *about other users*. Not respecting autocomplete="off" is a major oversight which has caused a lot of headache for those enhancements."

Chrome decides which field should be filled with what data based on a combination of form and field signatures; if these do not match, the browser falls back to checking only the field signatures. A developer from the Google Chrome team shared, "This causes some problems, e.g. in input type="text" name="name", the 'name' can refer to different concepts (a person's name or the name of a spare part)." To solve this problem, the team is working on an experimental feature that gives users the choice to "(permanently) hide the autofill suggestions."

Check out the reported issue to know more in detail.

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
GitHub updates to Rails 6.0 with an incremental approach
React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!