
Tech News

LLVM’s Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support, and more

Bhagyashree R
17 Sep 2019
2 min read
The stable release of LLVM 9.0 is expected in the next few weeks, along with subprojects like Clang 9.0. As per the release notes, the upcoming Clang 9.0 release will come with experimental support for C++17 features in OpenCL, initial asm goto support, and much more.

Read also: LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

What’s new coming in Clang 9.0.0

Experimental support for C++17 features in OpenCL

Clang 9.0.0 will have experimental support for C++17 features in OpenCL. The experimental support includes improved address space behavior in the majority of C++ features. There is support for OpenCL-specific types such as images, samplers, events, and pipes. Also, global constructors can be invoked from the host side using a specific, compiler-generated kernel.

C language updates in Clang

Clang 9.0.0 includes the __FILE_NAME__ macro as a Clang-specific extension supported in all C-family languages. It is very similar to the __FILE__ macro, except that it always provides the last path component when possible. Another C language update is initial support for asm goto statements, which allow control to flow from inline assembly to labels. This construct is mainly used by the Linux kernel (CONFIG_JUMP_LABEL=y) and glib.

Building Linux kernels with Clang 9.0

With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The team adds, “The Android and ChromeOS Linux distributions have moved to building their Linux kernels with Clang, and Google is currently testing Clang built kernels for their production Linux kernels.”

Read also: Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Build system changes

Previously, the install-clang-headers target installed Clang’s resource directory headers. With Clang 9.0, this installation is done by the install-clang-resource-headers target. “Users of the old install-clang-headers target should switch to the new install-clang-resource-headers target. The install-clang-headers target now installs clang’s API headers (corresponding to its libraries), which is consistent with the install-llvm-headers target,” the release notes read.

To know what else is coming in Clang 9.0, check out its official release notes.

Other news in programming

Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Oracle releases JDK 13 with switch expressions and text blocks preview features, and more!

Bhagyashree R
17 Sep 2019
3 min read
Yesterday, Oracle announced the general availability of Java SE 13 (JDK 13); its binaries are expected to be available for download today. In addition to improved performance, stability, and security, this release comes with two preview features: switch expressions and text blocks. The announcement coincides with the commencement of Oracle’s co-located OpenWorld and Code One conferences, happening September 16-17, 2019 in San Francisco.

Oracle’s director of Java SE Product Management, Sharat Chander, wrote in the announcement, “Oracle offers Java 13 for enterprises and developers. JDK 13 will receive a minimum of two updates, per the Oracle CPU schedule, before being followed by Oracle JDK 14, which is due out in March 2020, with early access builds already available.”

This release is licensed under the GNU General Public License v2 with the Classpath Exception (GPLv2+CPE). For those using an Oracle JDK release as part of an Oracle product or service, it is available under a commercial license.

Read also: Oracle releases open-source and commercial licenses for Java 11 and later

What’s new in JDK 13

JDK 13 includes implementations of the following Java Enhancement Proposals (JEPs):

Dynamic class-data sharing archives (JEP 350)

JEP 350 improves the usability of application class-data sharing by allowing classes to be archived dynamically once the execution of a Java application is completed. The archived classes will consist of all loaded application classes and library classes that are not present in the default, base-layer CDS archive.

Uncommit unused memory (JEP 351)

Previously, the Z Garbage Collector (ZGC) did not uncommit and return memory to the operating system, even if it was left unused for a long time. With JEP 351 implemented in JDK 13, ZGC returns unused heap memory to the operating system.
Read also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Reimplement the legacy Socket API (JEP 353)

In JDK 13, the underlying implementation used by the java.net.Socket and java.net.ServerSocket APIs is replaced by “a simpler and more modern implementation that is easy to maintain and debug,” as per JEP 353. The new implementation aims to make it much easier to adapt to user-mode threads, or fibers, which are currently being explored in Project Loom.

Switch expressions preview (JEP 354)

The switch expressions feature proposed in JEP 354 allows ‘switch’ to be used as either a statement or an expression. Developers can now use either the traditional ‘case ... :’ labels (with fall through) or the new ‘case ... ->’ labels (with no fall through). This preview feature aims to simplify everyday coding and prepare the way for the use of pattern matching (JEP 305) in switch.

Text blocks preview (JEP 355)

The text blocks preview feature proposed in JEP 355 makes it easy to express strings that span several lines of source code. This preview feature aims to improve both “the readability and the writeability of a broad class of Java programs to have a linguistic mechanism for denoting strings more literally than a string literal.”

Check out the official announcement by Oracle to know what else has landed in JDK 13.

Other news in programming

Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
Golang 1.13 module mirror, index, and checksum database are now production-ready
Why Perl 6 is considering a name change?
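As a quick sketch of the two preview features described above (class and method names here are illustrative, not from the JDK; on JDK 13 these required the --enable-preview flag, and both features became standard in later releases):

```java
// Illustrative sketch of JDK 13's two preview features. Identifiers are
// made up for this example. On JDK 13: javac --release 13 --enable-preview;
// on recent JDKs both features compile without flags.
public class Jdk13Preview {

    // Switch expression with the new "case ... ->" labels: no fall-through,
    // and the switch yields a value directly.
    static String kind(String day) {
        return switch (day) {
            case "SATURDAY", "SUNDAY" -> "weekend";
            default -> "weekday";
        };
    }

    public static void main(String[] args) {
        // Text block: a multi-line string literal without \n escapes
        // and quote clutter.
        String json = """
                {
                  "name": "JDK 13",
                  "preview": true
                }
                """;
        System.out.println(kind("SUNDAY"));                      // weekend
        System.out.println(json.contains("\"preview\": true"));  // true
    }
}
```

Note how the arrow labels remove both the `break` statements and the accidental fall-through bugs of classic switch statements.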

Darklang available in private beta

Fatema Patrawala
17 Sep 2019
4 min read
Yesterday, the team behind the Dark programming language unveiled Darklang’s private beta. Dark is a holistic programming language, editor, and infrastructure for building backends. Developers write in the Dark language, using the Dark editor, and the program is hosted on Dark’s infrastructure. As a result, they can code without thinking about infrastructure and have safe, instant deployment, which the team is calling “deployless” development. According to the team, backends today are too complicated to build, and Dark is designed to reduce that complexity.

Ellen Chisa, CEO of Dark, says, “Today we’re releasing two videos showing how Dark works. And demonstrate how to build a backend application (an office sign-in app) in 10 minutes.” Paul Biggar, the CTO, also talks about Dark’s philosophy and the details of the language, the editor, and the infrastructure. He also shows how they make “deployless” safe with feature flags and versioning, and how Dark allows developers to introspect and debug live requests.

Alpha users of Darklang build backends for web and mobile applications

The Dark team says that during the private alpha, developers built entire backends in Dark. Chase Olivieri built Altitude, a flight deal subscription site. Julius Tarng moved the backend of Tokimeki Unfollow to Dark for scalability. Jessica Greenwalt & Pixelkeet ported Birb, their internal project tracker, into a SaaS for other design studios to use. The team has also seen alpha users build backends for web and mobile applications, internal tools, Slackbots, Alexa skills, and personal projects. They have even started building parts of Dark in Dark, including their presence service and large parts of the signup flow. Additionally, the team will admit developers into the private beta immediately if their project is well-scoped and ready to get started.

Community unhappy with private version, expects open source

On Hacker News, users are discussing that, in this day and age, any new programming language has to be open source. One of them commented, “Is there an open source version of the language? ...bc I'm not touching a programming language with a ten foot pole if it hasn't got at least two implementations, and at least one open source :| Sure, keep the IDEs and deployless infrastructure and all proprietary, but a core programming language in 2019 can only be open-source. Heck, even Microsoft gets it now.”

Another one says, “They are 'allowing' people into a private beta of a programming language? Coupled with the fact it is not open source and has a bunch of fad ad-tech videos on the front page this is so many red flags.”

Others compare Dark with different programming languages, mainly Apex, Rust, and Go. One user comment reads, “I see a lot of Parse comparisons, but for me this is way more like Force.com from Salesforce and the Apex language. Proprietary language (Apex, which is Java 6-ish), complete vertical integration, no open source spec or implementation.”

Another says, “Go - OK, it has one implementation (open-source), but it's backed by one big player (Google) and used by many others... also the simplicity at core design decisions sound like the kind of choices that would make an alternative compiler easier to implement than for other languages Rust - pretty fast growing open-source community despite only one implementation... but yeah I'm sort of worried that Rust is a "hard to implement" kind of language with maybe a not high enough bus factor... similar worries for Julia too But tbh I'm not drawn much to either Go and Rust for other reasons - Go is too verbose for my taste, no way to write denser code that highlights the logic instead of the plumbing, and it has a "dumb" type system, Rust seems a really bad choice for rapid prototyping and iteration which is what I care about now.”

Other interesting news in programming this week

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
TextMate 2.0, the text editor for macOS releases
GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations

Bhagyashree R
17 Sep 2019
4 min read
On Friday last week, the House Judiciary Antitrust Subcommittee sent four separate requests-for-information letters to Amazon, Facebook, Alphabet, and Apple as part of its antitrust investigation into the tech giants. The companies are expected to respond by October 14th.

The antitrust investigation was launched earlier this year to determine whether Big Tech is abusing its market dominance and violating antitrust law. Stating the reason behind the investigation, Judiciary Chairman Jerrold Nadler said in a statement, “The open internet has delivered enormous benefits to Americans, including a surge of economic opportunity, massive investment, and new pathways for education online. But there is growing evidence that a handful of gatekeepers have come to capture control over key arteries of online commerce, content, and communications.”

The House Judiciary Antitrust Subcommittee asks Big Tech for a broad range of documents

The letters issued by the antitrust subcommittee ask the companies to share organization charts, financial reports, and records they have produced for earlier antitrust investigations by the FTC or the Department of Justice. Along with these details, the letters ask a wide range of questions specific to the individual companies.

The letter to Amazon demands details about any provision in its contracts with suppliers or merchants that guarantees its prices are the best. As there has been speculation that Amazon tweaks its search algorithm in favor of its own products, the letter asks detailed questions regarding its ranking and search algorithms.

https://twitter.com/superglaze/status/1173861273014022144

The letter also includes questions regarding the promotion and marketing services Amazon provides to suppliers or merchants, and whether it treats its own products differently from third-party products. Congress has also asked about Amazon’s acquisitions across medicine, home security, and grocery stores.

The letter to Facebook asks for details about its Onavo app, which was reported to have been used for monitoring users’ mobile activity. It asks Facebook to present details of all the product decisions and acquisitions it made based on data collected by Onavo. The letter also focuses on how Facebook plans to keep the promises it made when acquiring WhatsApp in 2014, such as “We are absolutely not going to change plans around WhatsApp and the way it uses user data.”

In the letter addressed to Alphabet, the subcommittee asks detailed questions regarding the algorithm behind Google Search. The committee has also demanded executive emails discussing Google’s acquisitions, including DoubleClick, YouTube, and Android. There are also several questions touching upon the Google Maps Platform, Google AdSense and AdX, the Play Store, YouTube’s ad inventory, and much more.

In the letter to Apple, the subcommittee asks whether Apple restricts its users from using web browsers other than Safari. It has asked for emails about Apple’s crackdown on screen-tracking and parental control apps. There are also questions regarding Apple’s restrictions on third-party repairs. The letter reads, “Isn’t this just a way for Apple to elbow out the competition and extend its monopoly into the market for repairs?”

Read also: Is Apple’s ‘Independent Repair Provider Program’ a bid to avoid the ‘Right To Repair’ bill?

Rep. David N. Cicilline, chairman of the panel’s antitrust subcommittee, believes the requests for information mark an “important milestone in this investigation.” In a statement, he said, “We expect stakeholders to use this opportunity to provide information to the Committee to ensure that the Internet is an engine for opportunity for everyone, not just a select few gatekeepers.”

This step by the antitrust subcommittee adds to the antitrust pressure on Silicon Valley. Last week, more than 40 state attorneys general launched an antitrust investigation targeting Google and its advertising practices. Meanwhile, Facebook is also facing a multistate investigation for possible antitrust violations.

Other news in data

Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case
‘Hire by Google’, the next product killed by Google; services to end in 2020
Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking

$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

Sugandha Lahoti
17 Sep 2019
3 min read
Coil, Mozilla, and Creative Commons are launching a major $100 million ‘Grant for the Web’ to award people who help develop best web monetization practices. The grant will give roughly $20 million per year for five years to content sites, open-source infrastructure developers, and independent creators that contribute to a “privacy-centric, open, and accessible web monetization ecosystem.” It is an initiative to move the internet from an ad-focused business model to a new privacy-focused one.

Grant for the Web is primarily funded by Coil, a content-monetization company, with Mozilla and Creative Commons as founding collaborators. Coil is known for developing Interledger and Web Monetization as the first comprehensive set of open standards for monetizing content on the web. Web Monetization allows users to reward creators on the web without having to rely on one particular company, currency, or payment platform.

Read also:
Mozilla announces a subscription-based service for providing ad-free content to users
Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Coil cited a number of issues in the internet domain, such as privacy abuses related to ads, demonetization to appease advertisers, unethical sponsored content, and large platforms abusing their market power. “All of these issues can be traced back to one simple problem,” says Coil, “browsers don’t pay.” This forces sites to raise funds through workarounds like ads, data trafficking, sponsored content, and site-by-site subscriptions.

To discourage these practices, Coil will now grant money to people interested in experimenting with Web Monetization as a more user-friendly, privacy-preserving way to make money. Award amounts will vary from small to large ($1,000-$100,000), depending on the scope of the project. The majority of the grant money (at least 50%) will go to openly licensed software and content.
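For context, the Web Monetization draft standard mentioned above works by having a page declare a payment pointer in a meta tag; a supporting browser or extension (such as Coil’s) uses it to stream micropayments to the creator. The pointer below is a placeholder, not a real wallet:

```html
<!-- Placeholder payment pointer; a real one resolves to the creator's wallet -->
<meta name="monetization" content="$wallet.example.com/alice">
```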
Special focus will be given to people who promote diversity and inclusion on the internet, and to communities and individuals that have historically been marginalized, disadvantaged, or without access. Awardees will be approved by an Advisory Council initially made up of representatives from Coil, Mozilla, and Creative Commons.

“The business models of the web are broken and toxic, and we need to identify new ways to support creators and to reward creativity,” says Ryan Merkley, CEO of Creative Commons, in a statement. “Creative Commons is unlikely to invent these solutions on its own, but we can partner with good community actors who want to build things that are in line with our values.”

Mark Surman, Mozilla’s executive director, said, “In the current web ecosystem, big platforms and invasive, targeted advertising make the rules and the profit. Consumers lose out, too — they unwittingly relinquish reams of personal data when browsing content. That’s the whole idea behind ‘surveillance capitalism.’ Our goal in joining Grant for the Web is to support a new vision of the future. One where creators and consumers can thrive.”

Coil CEO Stefan Thomas is aware of the hurdles. "The grant is structured to run over five years because we think that's enough time to get to a tipping point where this either becomes a viable ecosystem or not," he said. "If it does happen, one of the nice things about this ecosystem is that it tends to attract more momentum."

Check out grantfortheweb.org and join the Community Forum to ask questions and learn more.

Next up in privacy

Google open sources their differential privacy library to help protect user’s private data
Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals
How Data Privacy awareness is changing how companies do business

A recap of the Linux Plumbers Conference 2019

Vincy Davis
17 Sep 2019
4 min read
This year’s Linux Plumbers Conference concluded on the 11th of September 2019. The invitation-only conference for top Linux kernel developers was held in Lisbon, Portugal this year. The conference brings together developers working on the plumbing of Linux (kernel subsystems, core libraries, windowing systems, etc.) to think about core design problems. Unlike most tech conferences, which generally discuss the future of the Linux operating system, the Linux Plumbers Conference has a distinct motive behind it. In an interview with ZDNet, Linus Torvalds, the Linux creator, said, “The maintainer summit is really different because it doesn't even talk about technical issues. It's all about the process of creating and maintaining the Linux kernel.” In short, the developers attending the conference know confidential and intimate details about some of the Linux kernel subsystems, and maybe this is why the conference has the word ‘Plumbers’ in it.

Read also: Introducing kdevops, a modern DevOps framework for Linux kernel development

The conference is divided into several working sessions focusing on different plumbing topics. This year, the Linux Plumbers Conference had over 18 microconferences, with topics like RISC-V, tracing, distribution kernels, live patching, open printing, toolchains, testing and fuzzing, and more.

Some microconferences covered at the Linux Plumbers Conference 2019

The RISC-V microconference (MC) focused on finding solutions for changing the kernel. In the long run, this discussion is expected to result in active developer participation in code review and patch submissions for a better, more stable kernel for RISC-V. Topics covered in the RISC-V MC included progress on the RISC-V platform specification and fixing the Linux boot process on RISC-V.

The Live Patching MC held an open discussion for all involved stakeholders on live patching issues, with the aim of making live patching of the Linux kernel and the Linux userspace live patching feature complete. This open discussion has been a success at past conferences, as it leads to useful output that helps push the development of live patching forward. Topics included everything that happened in kernel live patching over the last year, an API for state changes made by callbacks, and source-based livepatch creation tooling.

The System Boot and Security MC concentrated on open-source security, including bootloaders, firmware, BMCs, and TPMs. The potential speakers and key participants included everybody interested in GRUB, iPXE, coreboot, LinuxBoot, SeaBIOS, UEFI, OVMF, TianoCore, IPMI, OpenBMC, TPM, and other related projects and technologies.

The main goal of this year’s Remote Direct Memory Access (RDMA) MC was to resolve open issues in RDMA and PCI peer-to-peer for GPU and NVMe applications, including HMM and DMABUF topics, RDMA and DAX, and contiguous system memory allocations for userspace, which has been unresolved since 2017. Other areas of interest included multi-vendor virtualized ‘virtio’ RDMA, non-standard driver features and their impact on the design of the subsystem, and more.

Read also: Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Linux developers who attended the Plumbers 2019 conference were appreciative of it and took to Twitter to share their experiences.

https://twitter.com/russelldotcc/status/1172193214272606209
https://twitter.com/odeke_et/status/1173108722744225792
https://twitter.com/jwboyer19/status/1171351233149448193

The videos of the conference are not out yet; the team behind the conference has tweeted that they will be uploaded soon. Keep checking this space for more details about the Linux Plumbers Conference 2019. Meanwhile, you can check out last year’s talks on YouTube.

Latest news in Linux

Lilocked ransomware (Lilu) affects thousands of Linux-based servers
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation

Google announces two new attribute links, Sponsored and UGC and updates “nofollow”

Amrata Joshi
16 Sep 2019
5 min read
Last week, the team at Google announced two new link attributes that give webmasters additional ways to identify the nature of particular links to Google Search. The team is also evolving the nofollow attribute.

How are the new link attributes useful?

rel="sponsored"

The sponsored attribute identifies links on a site that were created as part of sponsorships, advertisements, or other compensation agreements.

rel="ugc"

The UGC (User Generated Content) attribute value is used for links within user-generated content, such as forum posts and comments.

rel="nofollow"

Spammers often try to improve their websites’ search engine rankings by posting comments like "Visit my discount pharmaceuticals site” on other blogs; this is known as comment spam. Google took steps to solve this problem by introducing the nofollow attribute in 2005 for flagging advertising-related or sponsored links. When Google sees the attribute (rel="nofollow") on a hyperlink, it gives the link no credit toward ranking websites in search results. The attribute was introduced so that spammers get no benefit from abusing public areas like blog comments, referrer lists, and trackbacks.

The nofollow attribute was originally used for combatting blog comment spam. It has since evolved to cover advertising links and user-generated links that aren’t reliable, and it is now also used where webmasters want to link to a page without implying any type of endorsement. The nofollow link attribute will be treated as a hint for crawling and indexing purposes by March 1, 2020.

Web analysis will be easier with these attributes

All of the above attributes will help Google process links for better analysis of the web, as they are now treated as hints that identify which links should be considered and which should be excluded within Search.
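Put together, the three attributes look like this in markup (the URLs are placeholders):

```html
<!-- Paid placement or other compensation agreement -->
<a rel="sponsored" href="https://example.com/product">Sponsored link</a>

<!-- Link left in user-generated content such as a comment or forum post -->
<a rel="ugc" href="https://example.com/user-site">Commenter's site</a>

<!-- Link without endorsement or ranking credit -->
<a rel="nofollow" href="https://example.com/page">Unendorsed link</a>

<!-- Values can also be combined in one rel attribute -->
<a rel="ugc nofollow" href="https://example.com/forum-post">Forum link</a>
```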
It is important to identify links, as they contain valuable information that can be used to improve Search and can help in understanding how the words within links describe the content they point at. Links can also be used to understand unnatural linking patterns. The official post reads, “The link attributes of “ugc” and “nofollow” will continue to be a further deterrent. In most cases, the move to a hint model won’t change the nature of how we treat such links. We’ll generally treat them as we did with nofollow before and not consider them for ranking purposes. We will still continue to carefully assess how to use links within Search, just as we always have and as we’ve had to do for situations where no attributions were provided.”

How will this affect publishers and SEO experts?

Links that were arbitrarily nofollowed might now get counted under the new update, which might encourage spammers and lead to an increase in link spam. Also, if these nofollowed links get counted, many sites would simply adopt a nofollow link policy; Google might count those links, and that would impact rankings. For instance, if a website uses a lot of Wikipedia links and Google counts them, its ranking might improve.

SEO experts will now have to look into which link attributes should be applied to a specific link, and adapt their strategies and CMS (Content Management Systems) to the change.

https://twitter.com/AlanBleiweiss/status/1171475313114533891?s=20

Most users on Hacker News seem sceptical about the new link attributes; according to them, the attributes won’t benefit them. A user commented on Hacker News, “I run large forums and mark my links "nofollow". I see no reason or benefit to me to change to or add "ugc". It's not clear that there's any benefits for me. And it's vague enough that I don't know that there are not downsides. Seems best to do nothing.”

A few others think that the purpose of the nofollow attribute has changed. Another user commented, “This means the meaning of 'nofollow' is changing? That seems a horrible idea. Previously 'nofollow' meant exactly that - "don't follow this link please googlebot", now it will mean "follow this link, but don't grant my site ranking onto the destination." - Thats a VERY different use case, I can't see all the millions of existing 'nofollow' tags being changed by site owners to any of these new tags. Surely a 'nogrant' or somesuch would be a better option, and leave 'nofollow' alone.”

Danny Sullivan, Google’s Search Liaison, responded to the criticism around the newly updated nofollow attribute:

https://twitter.com/dannysullivan/status/1171488611918696449

To know more about this news, check out the official post.

Other interesting news in web development

GitHub updates to Rails 6.0 with an incremental approach
5 pitfalls of React Hooks you should avoid – Kent C. Dodds
The Tor Project on browser fingerprinting and how it is taking a stand against it

Istio 1.3 releases with traffic management, improved security, and more!

Amrata Joshi
16 Sep 2019
3 min read
Last week, the team behind Istio, an open-source service mesh platform, announced Istio 1.3. This release makes the service mesh platform easier to use.

What’s new in Istio 1.3?

Traffic management

In this release, automatic protocol determination of HTTP or TCP has been added for outbound traffic when ports are not named according to Istio’s conventions. The team has added a mode to the Gateway API for mutual TLS operation. The Envoy proxy readiness check has been improved: Istio now checks Envoy’s readiness status. Load balancing has been improved to direct traffic to the same region and zone by default. And the Redis load balancer now defaults to MAGLEV when using the Redis proxy.

Improved security

This release comes with trust domain validation for services that use mutual TLS. By default, the server only authenticates requests from the same trust domain. The team has added SDS (Secret Discovery Service) support for delivering the private key and certificates to each of the Istio control plane services. The team also implemented major security policies, including RBAC, directly in Envoy.

Experimental telemetry

In this release, the team has improved the Istio proxy to emit HTTP metrics directly to Prometheus, without the need for the istio-telemetry service.

Handles inbound traffic securely

Istio 1.3 secures and handles all inbound traffic on any port without the need for containerPort declarations. The team has also eliminated the infinite loops caused in the iptables rules when workload instances send traffic to themselves.

Enhanced EnvoyFilter API

The team has enhanced the EnvoyFilter API so that users can fully customize HTTP/TCP listeners and their filter chains returned by LDS (Listener Discovery Service), the Envoy HTTP route configuration returned by RDS (Route Discovery Service), and much more.
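For illustration, a Gateway server terminating mutual TLS looks roughly like the sketch below. The resource name, host, and certificate paths are placeholders, and this shows the long-standing MUTUAL mode rather than necessarily the new mode the release adds; consult the Istio networking reference for the authoritative fields:

```yaml
# Illustrative only: Gateway requiring client certificates (mutual TLS)
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-mtls-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL                           # verify client certs too
      serverCertificate: /etc/certs/server.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/ca.pem      # CA used to verify clients
    hosts:
    - "example.com"
```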
Improved control plane monitoring

The team has enhanced control plane monitoring by adding new metrics to monitor configuration state, metrics for the sidecar injector, and a new Grafana dashboard for Citadel. Users seem excited about this release.

https://twitter.com/HamzaZ21823474/status/1172235176438575105
https://twitter.com/vijaykodam/status/1172237003506798594

To know more about this news, check out the release notes.

Other interesting news in Cloud & networking

StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Savia Lobo
16 Sep 2019
5 min read
Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC can be used on machines with many cores where GCC cannot otherwise provide enough parallelism, and it can also serve as a reference for designing a parallel compiler from scratch.

GCC is an optimizing compiler that automatically optimizes code during compilation. Its optimization phase involves three steps:

Inter Procedural Analysis (IPA): builds a callgraph and uses it to decide how to perform optimizations.
GIMPLE Intra Procedural Optimizations: performs several hardware-independent optimizations inside each function.
RTL Intra Procedural Optimizations: performs several hardware-dependent optimizations inside each function.

IPA collects information and decides how to optimize all functions, then sends each function to the GIMPLE optimizer, which sends it on to the RTL optimizer, where the final code is generated. This process repeats for every function in the code.

Also Read: Oracle introduces patch series to add eBPF support for GCC

Why a Parallel GCC?

The team designed the parallel architecture to increase parallelism and reduce overhead. Once IPA finishes its analysis, a number of threads equal to the number of logical processors are spawned to avoid scheduling overhead. One of those threads inserts all analyzed functions into a threadsafe producer-consumer queue, which all threads consume. Once a thread has finished processing one function, it queries the queue for the next available function, until it finds an EMPTY token; when that happens, the thread finalizes, as there are no more functions to be processed. This architecture parallelizes the per-function GIMPLE Intra Procedural Optimizations and can easily be extended to also support the RTL Intra Procedural Optimizations. It does not, however, cover the IPA passes or the per-language front-end analysis.
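The sentinel-terminated producer-consumer scheme described above can be sketched in a few lines of Python. This is an illustration of the pattern only, not GCC code; the function names are made up.

```python
import threading
import queue

EMPTY = object()  # sentinel token: no more functions to optimize

def worker(work_queue, optimized):
    # Each thread consumes functions until it sees the sentinel.
    while True:
        fn = work_queue.get()
        if fn is EMPTY:
            work_queue.put(EMPTY)  # re-insert so the other workers also stop
            break
        # Stand-in for the per-function GIMPLE optimization work.
        optimized.append(f"optimized-{fn}")

def optimize_all(functions, nthreads=4):
    work_queue = queue.Queue()
    optimized = []  # list.append is atomic under CPython's GIL
    threads = [threading.Thread(target=worker, args=(work_queue, optimized))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    # The "IPA" stage produces analyzed functions into the queue...
    for fn in functions:
        work_queue.put(fn)
    work_queue.put(EMPTY)  # ...then signals that no more work is coming
    for t in threads:
        t.join()
    return optimized

print(sorted(optimize_all(["main", "foo", "bar"])))
```

The single sentinel that each worker re-inserts on exit is what lets every thread terminate cleanly without a separate shutdown signal per worker.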
Code refactoring to achieve Parallel GCC

The team refactored several parts of the GCC middle-end code in the Parallel GCC project, and says there are still many places where refactoring is necessary for the project to succeed. "The original code required a single function to be optimized and outputted from GIMPLE to RTL without any possible change of what function is being compiled," the researchers wrote in their official blog.

Several structures in GCC were made per-thread or threadsafe, either by being replicated using the C11 thread notation, by allocating the data structure on the thread stack, or simply by inserting locks. "One of the most tedious parts of the job was detecting and making several global variables threadsafe, and they were the cause of most crashes in this project. Tools made for detecting data races, such as Helgrind and DRD, were useful in the beginning but then showed their limitations as the project advanced. Several race conditions had a small window and did not happen when the compiler ran inside these tools. Therefore there is a need for better tools to help find global variables or race conditions," the blog mentions.

Performance results

The team compiled the file gimple-match.c, the biggest file in the GCC project. This file has more than 100,000 lines of code, around 1,700 functions, and almost no loops inside those functions. The computer used in this benchmark had an Intel(R) Core(TM) i5-8250U CPU with 8 GB of RAM: a 4-core CPU with Hyperthreading, resulting in 8 logical cores. The following are the results before and after the Intra Procedural GIMPLE parallelization.

Source: gcc.gnu.org

The figure shows the results before and after the Intra Procedural GIMPLE parallelization: the elapsed time dropped from 7 seconds to around 4 seconds with 2 threads and around 3 seconds with 4 threads, resulting in speedups of 1.72x and 2.52x, respectively.
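The reported speedups are ratios of elapsed times, and the whole-compilation estimates follow the same Amdahl-style reasoning. A quick sketch follows; the elapsed times are the rounded figures from the article, so the ratios only approximate the reported 1.72x and 2.52x, and the parallel fraction in the last line is a made-up example, not the article's measured value.

```python
def speedup(time_before, time_after):
    """Speedup expressed as the ratio of elapsed times."""
    return time_before / time_after

def amdahl(parallel_fraction, local_speedup):
    """Overall speedup when only a fraction of the work is sped up."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / local_speedup)

# GIMPLE optimization of gimple-match.c: ~7s serial, ~4s (2 threads), ~3s (4 threads)
print(round(speedup(7.0, 4.0), 2))  # ~1.75 (article reports 1.72x)
print(round(speedup(7.0, 3.0), 2))  # ~2.33 (article reports 2.52x)

# If, say, 60% of total compilation time got a 2.5x speedup (illustrative numbers):
print(round(amdahl(0.6, 2.5), 2))   # 1.56 overall
```

This is why a 2.5x speedup inside GIMPLE translates into a much smaller improvement on total compilation time: the serial portion dominates.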
The results also show that using Hyperthreading did not impact the outcome. This result was used to estimate the improvement from RTL parallelization.

Source: gcc.gnu.org

Compared with the total compilation time, the results above show a small improvement of about 10% when compiling the file.

Source: gcc.gnu.org

Using the same approach as in the previous graph, one can estimate a speedup of 1.61x in GCC when it is parallelized, based on the speedup information obtained for GIMPLE.

The team has suggested several to-dos for users wanting to work on the parallel GCC:

Find and fix all race conditions in GIMPLE. There are still random crashes when code is compiled using the parallel option.
Make this GCC compile itself.
Make this GCC pass all tests in the testsuite.
Add multithread support to the Garbage Collector.
Parallelize the RTL part. This will improve the current results, as indicated in the Results chapter.
Parallelize the IPA part. This can also improve the time of LTO compilations.
Refactor the remaining thread-local variables by allocating them as soon as threads are started, or at pass execution.

GCC project members say that this project is under development and still has several bugs. A user on Hacker News writes, "I look forward to this. One that will be important for reproducible builds is having tests for non-determinism. Having nondeterministic code gen in a compiler is a source of frustration and despair and sucks to debug."

To know about Parallel GCC in detail, read the official document.

Other interesting news in programming

Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE

TextMate 2.0, the text editor for macOS releases

Amrata Joshi
16 Sep 2019
3 min read
Yesterday, the team behind TextMate released TextMate 2.0, announcing that its code is available via the GitHub repository. The team had open-sourced the alpha version of TextMate 2.0 back in 2012; one of the reasons for doing so was to demonstrate that Apple isn't limiting user and developer freedom on the Mac platform. In this release, the qualifier suffix in the version string has been dropped, and deprecated 32-bit APIs have been replaced. The release also comes with improved accessibility support.

What's new in TextMate 2.0?

Easy swapping: This release allows users to easily swap pieces of code.

Convenient search results: TextMate presents search results in a way that lets users switch between matches, extract matched text, and preview desired replacements.

Version control: Users can see changes in the file browser view and check the changes made to lines of code in the editor view.

Improved commands: TextMate features WebKit as well as a dialog framework for Mac-native or HTML-based interfaces.

Converting pieces of code into snippets: Users can now turn commonly used pieces of text or code into snippets with transformations, placeholders, and more.

Bundles: Users can use bundles to customize TextMate for a number of different languages, workflows, markup systems, and more.

Macros: TextMate features macros, which eliminate repetitive work.

The project was expected to be released years ago, and its arrival has made a lot of users happy. A user commented on GitHub, "Thank you @sorbits. For making TextMate in the first place all those years ago. And thank you to everyone who has and continues to contribute to the ongoing development of TextMate as an open source project.
~13 years later and this is still the only text editor I use… all day every day." Another user commented, "Immense thanks to all those involved over the years!" A user commented on Hacker News, "I have a lot of respect for Allan Odgaard. Something happened, and I don't want to speculate, that caused him to take a break from Textmate (version 2.0 was supposed to come out 9 or so years ago). Instead of abandoning the project he open sourced it and almost a decade later it is being released. Textmate is now my graphical Notepad on Mac, with VS Code being my IDE and vim my text editor. Thanks Allan."

It is still not clear what took TextMate 2.0 this long to be released. According to a few users on Hacker News, Allan Odgaard, the creator of TextMate, wanted to improve on the design of TextMate 1 and realised that doing so would require rewriting nearly everything, which consumed years of work. Another comment reads, "As Allan was getting less feedback about the code he was working on, and less interaction overall from users, he became less motivated. As the TextMate 2 project dragged past its original timeline, both Allan and others in the community started to get discouraged. I would speculate he started to feel like more of the work was a chore rather than a joyful adventure."

To know more about this news, check out the release notes.

Other interesting news in Programming

Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
GitHub Package Registry gets proxy support for the npm registry
Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript

Bhagyashree R
16 Sep 2019
3 min read
Last month, the creator of the Feathers web framework, David Luecke, announced the release of Feathers 4. This release brings built-in TypeScript definitions, a framework-independent authentication mechanism, improved documentation, security updates in the database adapters, and more.

Feathers is a web framework for building real-time applications and REST APIs with JavaScript or TypeScript. It supports various frontend technologies including React, VueJS, and Angular, and works with any backend.

Read also: Getting started with React Hooks by building a counter with useState and useEffect

It essentially serves as an API layer between any backend and frontend:

Source: Feathers

Unlike traditional MVC and low-level HTTP frameworks that rely on routes, controllers, or HTTP request and response handlers, Feathers uses services and hooks. This makes an application easier to understand and test, and lets developers focus on their application logic regardless of how it is being accessed. It also enables developers to add new communication protocols without updating their application code.

Key updates in Feathers 4

Built-in TypeScript definitions

The core libraries and database adapters in Feathers 4 now have built-in TypeScript definitions. With this update, you can create a TypeScript Feathers application with the command-line interface (CLI).

Read also: TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

A new framework-independent authentication mechanism

Feathers 4 comes with a new framework-independent authentication mechanism that is both flexible and easier to use. It provides a collection of tools for managing username/password, JSON web token (JWT) and OAuth authentication, as well as custom authentication mechanisms.
The authentication mechanism includes the following core modules:

A Feathers service named 'AuthenticationService' to register authentication mechanisms and create authentication tokens.
The 'JWTStrategy' authentication strategy for authenticating JSON web token service method calls and HTTP requests.
The 'authenticate' hook to limit service calls to an authentication strategy.

Security updates in database adapters

The database adapters in Feathers 4 have been updated with crucial security and usability features, some of which are:

Querying by id: The database adapters now support additional query parameters for 'get', 'remove', 'update', and 'patch'. In this release, a 'NotFound' error will be thrown if the record does not match the query, even if the id is valid.

Hook-less service methods: Starting from this release, you can call a service method without triggering any of its hooks by adding a '_' in front (for example, '_get' instead of 'get'). This is useful when you need the raw data from the service.

Multi updates: Multiple updates mean you can create, update, or remove several records at once. Though convenient, this can also open your application to queries you never intended to allow. This is why, in Feathers 4, the team has made multiple updates opt-in: they are disabled by default and can be enabled by explicitly setting the 'multi' option.

Along with these updates, the team has also worked on the website and documentation. "The Feathers guide is more concise while still teaching all the important things about Feathers. You get to create your first REST API and real-time web-application in less than 15 minutes and a complete chat application with a REST and websocket API, a web frontend, unit tests, user registration and GitHub login in under two hours," Luecke writes.

Read Luecke's official announcement to know what else has landed in Feathers 4.

Other news in web

5 pitfalls of React Hooks you should avoid – Kent C. Dodds
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
How to integrate a Medium editor in Angular 8

France and Germany reaffirm blocking Facebook’s Libra cryptocurrency

Sugandha Lahoti
16 Sep 2019
4 min read
Update (Oct 14): Following PayPal, Visa, Mastercard, eBay, Stripe, and Mercado Pago have also withdrawn from Facebook's Libra Association. These withdrawals leave Libra with no major US payment processor, denting a big hole in Facebook's plans for a distributed, global cryptocurrency. David Marcus, Libra chief, called this "no great news in the short term".

https://twitter.com/davidmarcus/status/1182775730427572224

Update (Oct 4): PayPal, a corporate backer, is backing away from Facebook's Libra Association, the company announced on October 4. "PayPal has made the decision to forgo further participation in the Libra Association at this time and to continue to focus on advancing our existing mission and business priorities as we strive to democratize access to financial services for underserved populations," PayPal said in a statement.

In a joint statement released last Friday, France and Germany agreed to block Facebook's Libra in Europe. France had been debating banning Libra for quite some time. On Thursday, at the OECD Conference 2019 on virtual currencies, French Finance Minister Bruno Le Maire told attendees that he would do everything in his power to stop Libra. He said, "I want to be absolutely clear: in these conditions, we cannot authorize the development of Libra on European soil." Le Maire was also in favor of the Eurozone issuing its own digital currency, commonly dubbed 'EuroCoin' in the press.

In the joint statement, the two governments of France and Germany wrote, "As already expressed during the meeting of G7 Finance Ministers and Central Bank Governors in Chantilly in July, France and Germany consider that the Libra project, as set out in Facebook's blueprint, fails to convince that risks will be properly addressed. We believe that no private entity can claim monetary power, which is inherent to the sovereignty of Nations".
In June, Facebook had announced its ambitious plans to launch its own cryptocurrency, Libra, in a move to disrupt the digital ecosystem. Libra's launch alarmed experts who foresee a shift of control over the economy from governments and their central banks to privately held tech giants. A co-founder of Chainspace, Facebook's blockchain acquisition, said that he was "concerned about Libra's model for decentralization", adding, "My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks."

The US administration is also worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated. In an interview with Bloomberg, Mu Changchun, deputy director of the People's Bank of China's payments department, said that as a convertible crypto asset or a type of stablecoin, Libra can flow freely across borders, and it "won't be sustainable without the support and supervision of central banks."

People enthusiastically shared this development on Twitter.

"Europe is leading the way to become the blockchain hub"
https://twitter.com/AltcoinSara/status/1172582618971422720

"I always thought China would be first off the blocks on regulating Libra."
https://twitter.com/Frances_Coppola/status/1148420964264370179

"France blocks libra and says not tax for crypto to crypto exchanges. America still clinging on and stifling innovation hurting investors and developers"
https://twitter.com/cryptoMD45/status/1172228992532983808

For now, a working group has been tasked by the G7 Finance Ministers to analyze the challenges posed by cryptocurrencies. Its final report will be presented in October.
More interesting Tech News Google is circumventing GDPR, reveals Brave’s investigation for the Authorized Buyers ad business case Margrethe Vestager, EU’s Competition Commissioner gets another term and expanded power to make “Europe fit for the digital age” Hundreds of millions of Facebook users’ phone numbers found online, thanks to an exposed server

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others 

Vincy Davis
13 Sep 2019
6 min read
Researchers Paul Emmerich et al. have developed a new, simple user-space network driver called ixy. According to the researchers, ixy is an educational user-space network driver for the Intel ixgbe family of 10 Gbit/s NICs. Its goal is to show that writing a super-fast network driver can be surprisingly simple in high-level languages like Rust, Go, Java, and C#, among others. Ixy has no dependencies, high speed, and a simple-to-use interface for applications built on it. The researchers have published their findings in a paper titled "The Case for Writing Network Drivers in High-Level Programming Languages".

Initially, the researchers implemented ixy in C and then successfully implemented the same driver in other high-level languages: Rust, Go, C#, Java, OCaml, Haskell, Swift, JavaScript, and Python. They found that the Rust driver executes 63% more instructions per packet but is only 4% slower than the reference C implementation, and that Go's garbage collector keeps latencies below 100 µs even under heavy load.

Network drivers written in C are vulnerable to security issues

Drivers written in C are usually found in production-grade server, desktop, and mobile operating systems. Though C has the features required for low-level systems programming and fine-grained control over the hardware, C drivers are security-sensitive, as "they are exposed to the external world or serve as a barrier isolating untrusted virtual machines". The paper states that C code "accounts for 66% of the code in Linux, but 39 out of 40 security bugs related to memory safety found in Linux in 2017 are located in drivers. These bugs could have been prevented by using high-level languages for drivers."

Implementing ixy in Rust, Go, and other high-level languages

Rust: A lightweight Rust struct is allocated for each packet; it contains metadata and owns the raw memory. The compiler enforces that the object has a single owner and only the owner can access the object. This prevents use-after-free bugs despite using a completely custom allocator. Rust is the only language evaluated in the case study that protects against use-after-free bugs and data races in memory buffers.

Go: External memory is wrapped in slices to provide bounds checks. The atomic package in Go also indirectly provides memory barriers and volatile semantics, offering stronger guarantees.

C#: The researchers implemented two of the many available ways of handling external memory. C# offers a direct way to work with raw memory through full support for pointers with no bounds checks and volatile memory access semantics.

Java: The researchers targeted OpenJDK 12, which offers a non-standard way to handle external memory via the sun.misc.Unsafe object, providing functions to read and write memory with volatile access semantics.

OCaml: OCaml Bigarrays backed by external memory are used for DMA buffers and PCIe resources; allocation is done via C helper functions. The Cstruct library allowed the researchers to access data in the arrays in a structured way by parsing definitions similar to C struct definitions and generating code for the necessary accessor functions.

Haskell: A compiled functional language with garbage collection. The necessary low-level memory access functions are available via the Foreign package, and memory allocation and mapping are available via System.Posix.Memory.

Swift: Memory is managed via automatic reference counting, i.e., the runtime keeps a reference count for each object and frees it once it is no longer in use. Swift offers all the features necessary to implement drivers.
JavaScript: ArrayBuffers are used to wrap external memory in a safe way; these buffers can then be accessed as different integer types using TypedArrays, circumventing JavaScript's restriction to floating-point numbers. Memory allocation and physical address translation are handled via a Node.js module written in C.

Python: This implementation was not explicitly optimized for performance; it is meant as a simple prototyping environment for PCIe drivers and as an educational tool. The researchers provide primitives for PCIe driver development in Python.

Rust is found to be the prime candidate for safer network drivers

After implementing ixy in all of these high-level languages, the researchers conclude that Rust is the prime candidate for safer drivers. The paper states, "Rust's ownership based memory management provides more safety features than languages based on garbage collection here and it does so without affecting latency." Languages like Go and C# are also suitable if the system can cope with sub-millisecond latency spikes due to garbage collection, while Haskell and OCaml are useful options if performance is less critical than having a safe and correct system. Though Rust performs well, it is 4% slower than the C driver: Rust applies bounds checks while C does not, and C does not require a wrapper object for DMA buffers.

Image source: research paper

Users have found the results of this case study quite interesting.

https://twitter.com/matthewwarren/status/1172094036297048068

A Redditor comments, "Wow, Rust and Go performed quite well. Maybe writing drivers in them isn't that crazy."

Many developers are also surprised by the results, especially the performance of Go and Swift. A comment on Hacker News says, "The fact that Go is slower than C# really amazes me! Not long ago I switched from C# to Go on a project for performance reasons, but maybe I need to go back." Another Redditor says, "Surprise me a bit that Swift implementation is well below expected. Being Swift a compiled native ARC language, I consider the code must be revised."

Interested readers can watch a video presentation by Paul Emmerich on "How to write PCIe drivers in Rust, go, C#, Swift, Haskell, and OCaml", and find more implementation details in the research paper.

Other News in Tech

New memory usage optimizations implemented in V8 Lite can also benefit V8
Google releases Flutter 1.9 at GDD (Google Developer Days) conference
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Mozilla announces final four candidates that will replace its IRC network

Bhagyashree R
13 Sep 2019
4 min read
In April this year, Mozilla announced that it would be shutting down its IRC network, stating that it creates "unnecessary barriers to participation in the Mozilla project." Last week, Mike Hoye, the Engineering Community Manager at Mozilla, shared the four final candidates for Mozilla's community-facing synchronous messaging system: Mattermost, Matrix/Riot.im, Rocket.Chat, and Slack.

Mattermost is a flexible, self-hostable, open-source messaging platform that enables secure team collaboration. Riot.im is an open-source instant messaging client based on the federated Matrix protocol. Rocket.Chat is also a free and open-source team chat collaboration platform. The only proprietary option on the shortlist is Slack, a widely used team collaboration hub.

Read also: Slack stocks surges 49% on the first trading day on the NYSE after direct public offering

Explaining how Mozilla shortlisted these messaging systems, Hoye wrote, "These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost." He said that though there were a whole lot of options to choose from, these were the ones best suited to Mozilla's current institutional needs and organizational goals.

Mozilla will soon be launching official test instances of each of the candidates for open testing. After the one-month trial period, the team will take feedback in dedicated channels on each of those servers. You can also share your feedback in #synchronicity on irc.mozilla.org and in a forum on Mozilla's community Discourse instance that the team will be creating soon.

Mozilla's timeline for transitioning to the finalized messaging system

September 12th to October 9th: Mozilla will run the proof-of-concept trials and accept community feedback.
October 9th to 30th: It will discuss the feedback, draft a proposed post-IRC plan, and get approval from the stakeholders.

December 1st: The new messaging system will be launched.

March 1st, 2020: There will be a transition period for support tooling and developers, from the launch until March 1st, 2020. After this, Mozilla's IRC network will be shut down.

Hoye shared that the internal Slack instance will keep running regardless of the result, to ensure smooth communication. He wrote, "Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place."

In a discussion on Hacker News, many rooted for Matrix. A user commented, "I am hoping they go with Matrix, least then I will be able to have the choice of having a client appropriate to my needs." Another user added, "Man, I sure hope they go the route of Matrix! Between the French government and Mozilla, both potentially using Matrix would send a great and strong signal to the world, that matrix can work for everyone! Fingers crossed!"

Many also appreciated that Mozilla chose three open-source messaging systems. A user commented, "It's great to see 3/4 of the options are open source! Whatever happens, I really hope the community gets behind the open-source options and don't let more things get eaten up by commercial silos cough slack cough."

Some were not happy that Zulip, an open-source group chat application, was not selected: "I'm sad to see Zulip excluded from the list. It solves the #1 issue with large group chats - proper threading. Nothing worse than waking up to a 1000 message backlog you have to sort through to filter out the information relevant to you. Except for Slack, all of their other choices have very poor threading," a user commented.
Check out Hoye's official announcement to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Wasmer's first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!

Amrata Joshi
13 Sep 2019
4 min read
Yesterday, GNOME 3.34 was released as the latest version of GNOME, the open-source desktop environment for Unix-like operating systems. GNOME 3.34 comes six months after the release of GNOME 3.32, with features such as custom folders, tab pinning, an improved Background panel, updates to Boxes, and much more. This release also offers support for more than 34 languages, with at least 80 percent of strings translated.

Fun fact: the GNOME 3.34 release is named "Thessaloniki" in recognition of GUADEC, GNOME's primary annual conference, which was held this year in Thessaloniki, Greece.

What's new in GNOME 3.34?

Visual refreshes

This release includes visual refreshes for a number of applications, including the desktop. The background selection settings have been redesigned, and it is now easy to select custom backgrounds.

Custom folders

This release introduces custom folders in the application overview: users can simply drag an application icon on top of another to create a folder. Once all the icons have been dragged out, folders are automatically removed.

Tab pinning

GNOME 3.34 brings tab pinning, so users can pin their favorite tabs and keep them in the tab list.

Improved ad blocking

In this release, the ad-blocking feature has been updated to use WebKit content filters.

Improved Boxes workflow

GNOME's virtual and remote machine manager, Boxes, has received a number of improvements. Separate dialogs are now used when adding a remote connection or external broker, and existing virtual machines can now be booted from an attached CD/DVD image, so users can simulate dual-booting environments.

Game states can now be saved

GNOME's retro gaming application, Games, now supports multiple save states per game. Users can save as many game state snapshots as they want, and save states can be exported, shared, or moved between devices.
Improved Background panel

The Background panel has been redesigned; it now shows a preview of the selected background as it will appear on both the desktop and the lock screen. Users can add custom backgrounds using the “Add Picture…” button.

Improvements in the Music application

Music can now watch tracked sources, including the Music folder in the home directory, for new or changed files, and will update automatically. This release features gapless playback, and the album, artist, and playlist views have been given an updated layout.

https://youtu.be/qAjPRr5SGoY

Updates for developers and system administrators

Flatpak 1.4 releases in sync with GNOME 3.34

Flatpak 1.4 has been released in sync with GNOME 3.34. Flatpak, a cross-distribution, cross-desktop technology for application building and distribution, is central to GNOME’s developer experience plans.

New updates to Builder

In this release, Builder, the GNOME IDE, has also received a number of new features; it can now run a program in a container via podman. The Git integration has been moved to an out-of-process gnome-builder-git daemon.

Sysprof integrated with core platform libraries

In this release, Sysprof, the GNOME instrumenting and system profiling utility, has been improved; it is now integrated with a number of core platform libraries such as GTK, GJS, and Mutter.

New applications: Icon Library and Icon Preview

Two new applications have been released: Icon Library, which can be used for browsing symbolic icons, and Icon Preview, which helps designers and developers create and test new application icons.

Improved font rendering library

Pango, the font rendering library, now makes rendering text easier, as developers have more advanced control over their text rendering options.

To know more about this news, check out the release notes.
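Since Flatpak applications are described by a declarative manifest, here is a minimal illustrative sketch of one targeting the GNOME 3.34 runtime. The app id, module name, and hello.sh script are hypothetical, used only to show the manifest's shape:

```json
{
  "app-id": "org.example.Hello",
  "runtime": "org.gnome.Platform",
  "runtime-version": "3.34",
  "sdk": "org.gnome.Sdk",
  "command": "hello",
  "modules": [
    {
      "name": "hello",
      "buildsystem": "simple",
      "build-commands": [
        "install -D hello.sh /app/bin/hello"
      ],
      "sources": [
        { "type": "file", "path": "hello.sh" }
      ]
    }
  ]
}
```

A manifest like this would typically be built with flatpak-builder, which fetches the listed sources, runs the build commands inside the SDK sandbox, and produces an installable app.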
Other interesting news in Programming

GitHub Package Registry gets proxy support for the npm registry

Project management platform ClubHouse announces ‘Free Plan’ for up to 10 users and a new documentation tool

The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE