
Tech News - Programming

573 Articles

Elixir 1.9 is now out with built-in ‘releases’, a new streamlined configuration API, and more

Bhagyashree R
25 Jun 2019
4 min read
After releasing Elixir 1.8 in January, the team behind Elixir announced the release of Elixir 1.9 yesterday. It comes with a new 'releases' feature, a Config API for streamlined configuration, and many other enhancements and bug fixes. Elixir is a functional, concurrent, general-purpose programming language that runs on the Erlang VM.

Releases, a single unit for code and the runtime

Releases are the most important feature landing in this version. A release is a "self-contained directory" that encapsulates not only your application code and its dependencies but also the whole Erlang VM and runtime. In essence, it allows you to precompile and package your code and runtime into a single unit. You can then deploy this unit to a target running the same OS distribution and version as the machine on which you ran the 'mix release' command.

Following are some of the benefits 'releases' provide:

- Code preloading: Because releases run in embedded mode, all modules are loaded beforehand, making your system ready to handle requests right after booting.
- Configuration and customization: They give you "fine-grained control" over system configuration and the VM flags used to start the system.
- Multiple releases: You can assemble different releases of the same application with different configurations.
- Management scripts: They provide management scripts to start, restart, connect to the running system remotely, execute RPC calls, run in daemon mode, run in Windows service mode, and more.

Releases are also the last planned feature for Elixir; the team is not planning to add any other user-facing feature in the near future. The Elixir team shared in the announcement, "Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes, and improvements."

A streamlined configuration API

This version comes with a more streamlined configuration API in the form of a new 'Config' module. Previously, the 'Mix.Config' configuration API was part of the Mix build tool. Beginning with Elixir 1.9, runtime configuration is taken care of by releases, and since Mix is no longer included in releases, the API has been ported to Elixir itself. "In other words, 'use Mix.Config' has been soft-deprecated in favor of import Config," the announcement reads.

Another crucial change in configuration is that, starting from this release, the 'mix new' command will not generate a 'config/config.exs' file. The 'mix new --umbrella' command will also not generate a configuration for each child app, as configuration has moved from the individual umbrella applications to the root of the umbrella.

Many developers are excited about the 'releases' support. One user praised the feature saying, "Even without the compilation and configuration stuff, it's easier to put the release bundle in something basic like an alpine image, rather than keep docker image versions and app in sync." However, as many of them currently rely on the Distillery tool for deployment, they have some reservations about using releases, as it lacks some of the features Distillery provides. "Elixir's `mix release` is intended to replace (or remove the need for) third-party packages like Distillery. However, it's not there yet, and Distillery is strictly more powerful at the moment. Notably, Elixir's release implementation does not support hot code upgrades. I use upgrades all the time, and won't be trying out Elixir's releases until this shortcoming is addressed," a Hacker News user commented.

Public opinion on Twitter was also positive:

https://twitter.com/C3rvajz/status/1140351455691444225
https://twitter.com/rrrene/status/1143443465549897733

Why Ruby developers like Elixir
How Change.org uses Flow, Elixir's library to build concurrent data pipelines that can handle a trillion messages
Introducing Mint, a new HTTP client for Elixir

GNU APL 1.8 releases with bug fixes, FFT, GTK, RE and more

Vincy Davis
24 Jun 2019
2 min read
Yesterday, GNU APL version 1.8 was released with bug fixes, FFT, GTK, RE, user-defined APL commands, and more. GNU APL is a free interpreter for the programming language APL.

What's new in GNU APL 1.8?

- Bug fixes
- FFT (fast Fourier transforms: real, complex, and window functions)
- GTK (create GUI windows from APL)
- RE (regular expressions)
- User-defined APL commands
- An interface from Python into GNU APL. With this interface one can use APL's vector capabilities in programs written in Python.

People are excited to use the GNU APL 1.8 version. A user on Hacker News states, "Wow, each of ⎕FFT, ⎕GTK and ⎕RE are substantial and impressive additions! Thank you, and congratulations on the new release!"

Another user says, "APL can do some pretty cool stuff."

Another user comments, "I'd like to play with this as it is a free APL that I could use for work without paying a license (like Dyalog APL requires). J is another free array language, but it doesn't use the APL characters that I enjoy. I've had a little trouble in the past getting it to install (this was version 1.7) on Ubuntu. Granted I've never been an expert at installing from source, but a more in-depth installation guide or YouTube tutorial would help some. Thanks for doing this btw! I hope to eventually get to check this out!"

Introducing Luna, world's first programming language with dual syntax representation, data flow modeling and much more!
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study
Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, Kenny Coleman, the Enhancements Lead for Kubernetes 1.15 at VMware, published a "What's New in Kubernetes 1.15" video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: Dynamic HA Clusters with kubeadm, Volume Cloning, and CustomResourceDefinitions (CRDs), highlighting each feature and its importance to users. Watch the video below for Kenny Coleman's talk on Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements, the most stable features revealed in a single release. In an interview with The New Stack, Claire Laurence, the Kubernetes 1.15 release team lead, said, "We've had a fair amount of features progress to beta. I think what we've been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable." Let's have a brief look at all the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior. The Kubernetes team wants users not to notice whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes will check each schema against a restriction called "structural schema", which enforces non-polymorphic and complete typing of each field in a CustomResource.

Of the five enhancements, 'CustomResourceDefinition Defaulting' is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas.

The other four enhancements are in beta:

- CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on-the-fly, just as users have long been used to with native resources.
- CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.
- CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behaviour of native resources, and it will be available for CRDs as well, starting as beta in Kubernetes 1.15.
- Admission Webhook Reinvocation and Improvements: In earlier versions, mutating webhooks were only called once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.

#2 Cluster lifecycle stability and usability improvements

The cluster lifecycle building block, kubeadm, continues to receive feature and stability work, which is needed for bootstrapping production clusters efficiently. kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has become more robust in 1.15, as kubeadm now seamlessly rotates all certificates before expiry. The kubeadm configuration file API is moving from v1beta1 to v1beta2 in 1.15. kubeadm also has its own new logo.

Continued improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including resizing and inline volumes. SIG Storage also introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, such as volume cloning. Volume cloning enables users to specify another PVC as a "DataSource" when provisioning a new volume. If the underlying storage system supports this functionality and implements the "CLONE_VOLUME" capability in its CSI driver, then the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core
- Continued preparation for cloud provider extraction and code organization; the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and for external consumption
- kubectl get and describe now work with extensions
- Nodes now support third-party monitoring plugins
- A new scheduling framework for scheduler plugins is now alpha
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in the next version, 1.16

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Rust’s original creator, Graydon Hoare on the current state of system programming and safety

Bhagyashree R
20 Jun 2019
4 min read
Back in July 2010, Graydon Hoare showcased the Rust programming language for the very first time at the Mozilla Annual Summit. Rust is an open-source systems programming language created with speed, memory safety, and parallelism in mind. Thanks to Rust's memory and thread safety guarantees, a supportive community, and a quickly evolving toolchain, many major projects are being rewritten in it. One of the major ones is Servo, an HTML rendering engine that will eventually replace Firefox's rendering engine. Mozilla is also using Rust to rewrite many other key parts of Firefox under Project Quantum. Fastly chose Rust to implement Lucet, its native WebAssembly compiler and runtime. More recently, Facebook also chose Rust to implement its controversial Libra blockchain.

As the 9th anniversary of the day Hoare first presented Rust to a large audience approaches, The New Stack published a very interesting interview with him. In it, he talked about the current state of systems programming, how safe he considers our current complex systems to be, how they can be made safer, and more. Here are the key highlights from the interview:

Hoare on a brief history of Rust

Hoare started working on Rust as a side project in 2006. Mozilla, his employer at the time, got interested in the project and provided him a team of engineers to help with the further development of the language. In 2013, he experienced burnout and decided to step down as technical lead. After working on some less time-sensitive projects, he quit Mozilla and worked for the payment network Stellar. In 2016, he got a call from Apple to work on the Swift programming language.

Rust is now developed by the core teams and an active community of volunteer coders. The language Hoare once described as a "spare-time kinda thing" is being used by many developers to create a wide range of new software applications, from operating systems to simulation engines for virtual reality. It was also "the most loved programming language" in the Stack Overflow Developer Survey for four years in a row (2016-2019).

Hoare was very humble about the hard work and dedication he has put into creating the Rust programming language. When asked to summarize Rust's history, he simply said that "we got lucky". He added, "that Mozilla was willing to fund such a project for so long; that Apple, Google, and others had funded so much work on LLVM beforehand that we could leverage; that so many talented people in academia, industry and just milling about on the internet were willing to volunteer to help out."

The current state of systems programming and safety

Hoare considers the state of systems programming "healthy" compared to the first couple of decades of his career. It is now far easier to sell a language focused on performance and correctness, and more good languages are coming to market because of the increasing interaction between academia and industry.

When asked about safety, Hoare believes that though we are slowly taking steps towards better safety, the overall situation is not getting better. He attributes this to the growing number of new, complex computing systems being built. He said, "complexity beyond comprehension means we often can't even define safety, much less build mechanisms that enforce it." Another reason, according to him, is the huge amount of vulnerable software already in the field that can be exploited at any time by a bad actor. For instance, on Tuesday, a zero-day vulnerability was fixed in Firefox that was being "exploited in the wild" by attackers. "Like much of the legacy of the 20th century, there's just a tremendous mess in software that's going to take generations to clean up, assuming humanity even survives that long," he adds.

How systems programming can be made safer

Hoare designed Rust with safety in mind. Its rich type system and ownership model ensure memory and thread safety. However, he suggests that we can do a lot better when it comes to safety in systems programming. He listed a number of improvements we could implement: "information flow control systems, effect systems, refinement types, liquid types, transaction systems, consistency systems, session types, unit checking, verified compilers and linkers, dependent types." Hoare believes many of these features have already been suggested by academia; the main challenge is to implement them "in a balanced, niche-adapted language that's palatable enough to industrial programmers to be adopted and used."

You can read Hoare's full interview on The New Stack.

Rust 1.35.0 released
Rust shares roadmap for 2019
Rust 1.34 releases with alternative cargo registries, stabilized TryFrom and TryInto, and more

Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”

Amrata Joshi
20 Jun 2019
4 min read
Yesterday, Daniel Stenberg, the lead developer of curl, announced that Google is planning to reimplement parts of libcurl on top of its Cronet network stack, in a library to be called "libcrurl" (also referred to as libcurl_on_cronet).

https://twitter.com/bagder/status/1141588339100934149

The official blog post reads, "The Chromium bug states that they will create a library of their own (named libcrurl) that will offer (parts of) the libcurl API and be implemented using Cronet."

Stenberg quotes the stated motivation for the reimplementation: "Implementing libcurl using Cronet would allow developers to take advantage of the utility of the Chrome Network Stack, without having to learn a new interface and its corresponding workflow. This would ideally increase ease of accessibility of Cronet, and overall improve adoption of Cronet by first-party or third-party applications."

According to him, the team might also hope that third-party applications can switch to this library without needing to switch to another API. If this works, there is a possibility that the team will also create a "crurl" tool, their own version of the curl tool using their own library. Stenberg states in the post, "In itself is a pretty strong indication that their API will not be fully compatible, as if it was they could just use the existing curl tool…"

He writes, "As the primary author and developer of the libcurl API and the libcurl code, I assume that Cronet works quite differently than libcurl so there's going to be quite a lot of wrestling of data and code flow to make this API work on that code." The libcurl API is quite versatile and has developed over a period of almost 20 years. There is a lot of functionality, many options, and plenty of subtle behavior that may or may not be easy to mimic. Even if the subset is limited to a number of functions and libcurl options, making them work exactly the way they are documented could be difficult and time-consuming. He writes, "I don't think applications will be able to arbitrarily use either library for a very long time, if ever. libcurl has 80 public functions and curl_easy_setopt alone takes 268 different options!" (An illustration of this option-driven API style appears at the end of this article.)

Read also: Cisco merely blacklisted a curl instead of actually fixing the vulnerable code for RV320 and RV325

According to Stenberg, there is still no clarity on API/ABI stability or on how Google plans to ship or version the library. He writes, "There's this saying about imitation and flattery but getting competition from a giant like Google is a little intimidating. If they just put two paid engineers on their project they already have more dedicated man power than the original libcurl project does…"

On the upside, the Google team finding and fixing issues in the code and API could improve curl, make more users aware of libcurl and its API, and push the curl team to make it easier for users and applications to do safe and solid Internet transfers. On the downside, Stenberg expects confusion: applications will need to be aware of which API they work with. He added, "Since I don't think 'libcrurl' will be able to offer a compatible API without a considerable effort, I think applications will need to be aware of which of the APIs they work with and then we have a 'split world' to deal with for the foreseeable future and that will cause problems, documentation problems and users misunderstanding or just getting things wrong." "Their naming will possibly also be the reason for confusion since 'libcrurl' and 'crurl' look so much like typos of the original names," he said.

To know more about this news, check out the blog post by Daniel Stenberg.

Google Calendar was down for nearly three hours after a major outage
How Genius used embedded hidden Morse code in lyrics to catch plagiarism in Google search results
Google, Facebook and Twitter submit reports to EU Commission on progress to fight disinformation
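To give a feel for the option-driven libcurl API under discussion (80 public functions; curl_easy_setopt alone takes 268 options), here is a small transfer written against pycurl, a thin Python binding whose setopt() calls map one-to-one onto curl_easy_setopt(). This is a hedged sketch, assuming pycurl is installed; it illustrates the API style only, not libcrurl:

```python
from io import BytesIO

import pycurl

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")   # CURLOPT_URL
c.setopt(pycurl.FOLLOWLOCATION, True)          # CURLOPT_FOLLOWLOCATION
c.setopt(pycurl.WRITEDATA, buf)                # CURLOPT_WRITEDATA
c.perform()                                    # curl_easy_perform()
print("status:", c.getinfo(pycurl.RESPONSE_CODE))
c.close()
```

Each setopt() line here corresponds to one of the hundreds of CURLOPT_* options Stenberg mentions, which is exactly the surface area a compatible libcrurl would have to mimic.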

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more

Bhagyashree R
20 Jun 2019
3 min read
Yesterday, the team behind Qt announced the release of Qt 5.13. This release comes with fully-supported Qt for WebAssembly, a Chromium 73-based Qt WebEngine, and many other updates. In this release, the Qt community and the team have focused on improving the tooling to make designing, developing, and deploying software with Qt more efficient.

https://twitter.com/qtproject/status/1141627444933398528

Following are some of the Qt 5.13 highlights:

Fully-supported Qt for WebAssembly

Qt for WebAssembly makes it possible to build Qt applications for web browsers. The team previewed this platform in Qt 5.12, and beginning with this release it is fully supported. The module uses Emscripten, the LLVM-based compiler that targets JavaScript and WebAssembly, to compile Qt applications so they can be served from a web server. This allows developers to run their native applications in any browser, provided it supports WebAssembly.

Updates in the Qt QML module

The Qt QML module enables you to write applications and libraries in the QML language. Qt 5.13 comes with improved support for enums declared in C++. With this release, JavaScript "null" as a binding value is optimized at compile time. Also, QML now generates function tables on 64-bit Windows, making it possible to unwind the stack through JITed functions.

Updates in Qt Quick and Qt Quick Controls 2

Qt Quick is the standard library for writing QML applications, providing all the basic types required for creating user interfaces. With this release, TableView gains support for hiding rows and columns. Qt Quick Controls 2 provides a set of UI controls for creating user interfaces. This release brings a new control named SplitView, with which you can lay out items horizontally or vertically with a draggable splitter between each item. Additionally, the team has added a cache property to icon.

Qt WebEngine

Qt WebEngine provides a web browser engine that makes it easier to embed content from the web into your applications on platforms that do not have a native web engine. The engine uses code from the open-source Chromium project. Qt WebEngine is now based on Chromium 73. This latest version supports PDF viewing via an internal Chromium extension, the Web Notifications API, and thread-safe and page-specific URL request interceptors. It also comes with an application-local client certificate store and client certificate support from QML.

Lars Knoll, Qt's CTO, and Tuukka Turunen, Qt's Head of R&D, will be holding a webinar on July 2 to summarize all the news around Qt 5.13. Read the official announcement on Qt's website to know more in detail.

Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]
Qt Creator 4.9 Beta released with QML support, programming language support and more!

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Vincy Davis
19 Jun 2019
4 min read
Update: Five days after announcing the decision to drop the i386 architecture, Steve Langasek has changed his stance. Yesterday, 23rd June, Langasek apologised to users and clarified that Ubuntu is only dropping updates to the i386 libraries, which will be frozen at their 18.04 LTS versions. He also mentioned that the team plans to keep i386 applications, including games, working on versions of Ubuntu later than 19.10.

This update came after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended the same to its users. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for existing users.

https://twitter.com/Plagman2/status/1142262103106973698

Amid all the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine's Applications Database (AppDB) and Wiki, said in a mailing list post that there are many possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday, the Ubuntu engineering team announced its decision to discontinue i386 (32-bit) as an architecture from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer Mailing List, Canonical's Steve Langasek explained that "i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure." Langasek also mentioned that existing builds, packages, and distributions of 32-bit software, libraries, and tools will no longer work on newer versions of Ubuntu, and that the Ubuntu team will be working on how to handle 32-bit support over the course of the 19.10 development cycle.

The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mails in the archive mentions, "Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended." Earlier this year, Langasek stated in one of his mails that running a 32-bit i386 kernel on recent 64-bit Intel chips carries a risk of weaker security than using a 64-bit kernel. Usage of i386 has also declined broadly across the ecosystem, and hence it is "increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target", he adds.

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This has been done so that i386 users stay on the LTS, which will be supported until 2023, rather than being stranded on a non-LTS release that will be supported only until early 2021.

The general reaction to this news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, "Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?" Another user comments, "I really truly don't get it. I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else. Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again."

On Hacker News, a user commented, "I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path."

A few think this step was needed, for the sake of good riddance. Another Redditor adds, "From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion."

Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users

Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near

Amrata Joshi
18 Jun 2019
5 min read
WSL was a great effort towards emulating a Linux kernel on top of Windows, but due to certain differences between Windows and Linux, it was impossible to run the Docker Engine and Kubernetes directly inside WSL. So the Docker Desktop team developed an alternative solution using Hyper-V VMs and LinuxKit to achieve seamless integration.

On 16th June, Docker wrote about WSL 2, Microsoft's major architecture change that provides a real Linux kernel running inside a lightweight VM instead of an emulation layer. This approach is architecturally similar to the LinuxKit-and-Hyper-V setup, but WSL 2 has the additional benefit of being more lightweight and tightly integrated with Windows. Even the Docker daemon runs properly on it, with great performance. The team further announced that they are working on a new version of Docker Desktop that leverages WSL 2, with a public preview expected in July.

The official blog reads, "We are very excited about this technology, and we are happy to announce that we are working on a new version of Docker Desktop leveraging WSL 2, with a public preview in July. It will make the Docker experience for developing with containers even greater, unlock new capabilities, and because WSL 2 works on Windows 10 Home edition, so will Docker Desktop."

On the collaboration with Microsoft, the blog reads, "As part of our shared effort to make Docker Desktop the best way to use Docker on Windows, Microsoft gave us early builds of WSL 2 so that we could evaluate the technology, see how it fits with our product, and share feedback about what is missing or broken. We started prototyping different approaches and we are now ready to share a little bit about what is coming in the next few months."

The future of Docker Desktop will have WSL 2

The team will replace the Hyper-V VM with a WSL 2 integration package. The package will offer the same features as the current Docker Desktop VM, including automatic updates, transparent HTTP proxy configuration, 1-click Kubernetes setup, access to the daemon from Windows, and more. It will contain both the server-side components required to run Docker and Kubernetes and the CLI tools used to interact with those components from within WSL.

WSL 2 will enable seamless integration with Linux

With the WSL 2 integration, users will experience seamless integration with Windows, and Linux programs running inside WSL will be able to do the same. This has a huge impact on developers working on projects targeting a Linux environment, or with a build process for Linux, as there is no longer a need to maintain both Linux and Windows build scripts. For example, a developer at Docker can now work on the Linux Docker daemon on Windows, using the same set of tools and scripts as a developer on a Linux machine. Bind mounts from WSL will now support inotify events (inotify is a Linux kernel subsystem) and will have almost identical I/O performance to a native Linux machine. This solves one of the major Docker Desktop issues with I/O-heavy toolchains and will benefit NodeJS, PHP, and other web development tools.

Improved performance and reduced memory consumption

The VM has been set up to use dynamic memory allocation and to schedule work on all the host CPUs, while consuming less memory, within the limit of what the host can provide. Docker Desktop will leverage this to improve its resource consumption, using CPU and memory according to its needs. CPU- and memory-intensive tasks such as building a container will also run much faster.

Leveraging WSL 2, Docker Desktop will support bind mounts

One of the major problems users have with Docker Desktop is the reliability of Windows file bind mounts. The current implementation depends on the Samba Windows service, which can be deactivated, blocked by enterprise GPOs, or blocked by third-party firewalls. Docker Desktop with WSL 2 solves these issues by leveraging WSL features to implement bind mounts of Windows files.

A few users seem unhappy with this news; one of them commented on Hacker News, "So, I think the main sticking point here is the lock-in of Hyper-V. By making a new popular feature completely dependent on a technology that explicitly disables the use of competitive hypervisors, they're giving with one hand and taking with the other. If I was on VM-Ware's executive team, I'd be seriously thinking about filing an anti-trust complaint and the open source community should be thinking about whether submarining virtualbox is worth what Microsoft is doing here." Others point out that WSL 2 is a full Linux kernel running in Hyper-V; another comment reads, "WSL 2 is a full Linux kernel running in Hyper-V rather than an emulation layer on top of NT."

To know more about this news, check out the official post by Docker.

How to push Docker images to AWS' Elastic Container Registry (ECR) [Tutorial]
All Docker versions are now vulnerable to a symlink race attack
Docker announces collaboration with Microsoft's .NET at DockerCon 2019

Pull Panda is now a part of GitHub; code review workflows now get better!

Amrata Joshi
18 Jun 2019
4 min read
Yesterday, the team at GitHub announced that they have acquired Pull Panda for an undisclosed amount, to help teams create more efficient and effective code review workflows on GitHub.

https://twitter.com/natfriedman/status/1140666428745342976

Pull Panda helps thousands of teams work together on code and improve their process through a combination of three apps: Pull Reminders, Pull Analytics, and Pull Assigner.

- Pull Reminders: Users get a prompt in Slack whenever a collaborator needs a review. Automatic reminders ensure pull requests aren't missed.
- Pull Analytics: Users get real-time insight and can make data-driven improvements, creating a more transparent and accountable culture.
- Pull Assigner: Users can automatically distribute code review across their team, so that no one gets overloaded and knowledge is spread around.

Pull Panda helps teams ship faster and gain insight into bottlenecks in their process. Abi Noda, the founder of Pull Panda, highlighted the two major pain points that led him to start the company. The first was that on fast-moving teams, pull requests are easily forgotten, causing delays in code reviews and, eventually, in shipping new features to customers. As Noda stated in a video, "I started Pull Panda to solve two major pain points that I had as an engineer and manager at several different companies. The first problem was that on fast moving teams, pull requests easily are forgotten about and often slip through the cracks. This leads to frustrating delays in code reviews and also means it takes longer to actually ship new features to your customers."

https://youtu.be/RtZdbZiPeK8

To solve this problem, the team built Pull Reminders, a GitHub app that automatically notifies the team about their code reviews. The second problem was that it was difficult to measure and understand a team's development process in order to identify bottlenecks. To solve it, the team built Pull Analytics, which provides real-time insight into the software development process and highlights the current code review workload across the team, so the team knows who is overloaded and who might be available. Many customers also discovered that the majority of their code reviews were done by the same few people on the team. To address this, the team built Pull Assigner, which offers two algorithms for automatically assigning reviewers: Load Balance, which equalizes the number of reviews so everyone on the team does the same amount, and round robin, which randomly assigns additional reviewers so knowledge is spread across the team.

Nat Friedman, CEO of GitHub, said, "We'll be integrating everything Abi showed you directly into GitHub over the coming months. But if you're impatient, and you want to get started now, I'm happy to announce that all three of the Pull Panda products are available for free in the GitHub marketplace starting today. So we hope you enjoy using Pull Panda and we look forward to your feedback."

Pull Panda will no longer offer its Enterprise plan, though existing Enterprise customers can continue to use the on-premises offering. All paid subscriptions have been converted to free subscriptions, and new users can install Pull Panda for their organizations for free from Pull Panda's website or the GitHub Marketplace.

The official GitHub blog post reads, "We plan to integrate these features into GitHub but hope you'll start benefiting from them right away. We'd love to hear what you think as we continue to improve how developers work together on GitHub."

To know more about this news, check out GitHub's post.

GitHub introduces 'Template repository' for easy boilerplate code management and distribution
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
GitHub Satellite 2019 focuses on community, security, and enterprise

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna is a data processing and visualization environment that provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling

Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. Luna is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualization

Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how each change affects every step of the computation in real time.

Debugging

Luna can assist in analyzing network service outages and data corruption. If an error occurs, Luna tracks and displays its path through the graph so that users can easily follow it and understand where it comes from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine

Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Since Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust the search results accordingly.

Dual syntax representation

Luna is the world's first programming language to feature two equivalent syntax representations, visual and textual.

Automatic parallelism

Luna features automatic parallelism built on Haskell's state-of-the-art GHC runtime system, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna. A user commented on Hacker News, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so." Others are pleased that Luna's text syntax supports building functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more about this, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications ASP.NET Core and Blazor

Amrata Joshi
13 Jun 2019
4 min read
Yesterday, the team at Microsoft announced that .NET Core 3.0 Preview 6 is now available. It includes updates for compiling assemblies for improved startup, optimizing applications for size with the linker, and EventPipe improvements. The team has also released new Docker images for Alpine on ARM64. Additionally, they have made updates to ASP.NET Core and Blazor: the preview comes with new Razor and Blazor directive attributes, authentication and authorization support for Blazor apps, and much more.

https://twitter.com/dotnet/status/1138862091987800064

What's new in .NET Core 3.0 Preview 6

Docker images

The .NET Core Docker images and repos, including microsoft/dotnet and microsoft/dotnet-samples, have been updated. Docker images are now available for both .NET Core and ASP.NET Core on ARM64.

EventPipe enhancements

With Preview 6, EventPipe now supports multiple sessions: users can consume events with EventListener in-proc and have out-of-process EventPipe clients at the same time.

Assembly linking

The .NET Core 3.0 SDK offers a tool, the IL linker, that can help reduce the size of apps by analyzing IL and trimming unused assemblies.

Improving startup time

Users can improve the startup time of their .NET Core application by compiling their application assemblies in the ReadyToRun (R2R) format. R2R, a form of ahead-of-time (AOT) compilation, is supported with .NET Core 3.0; it cannot be used with earlier versions of .NET Core.

Additional functionality

The Native Hosting sample posted by the team lately demonstrates an approach for hosting .NET Core in a native application. As part of .NET Core 3.0, the team is now exposing general functionality to .NET Core native hosts. The functionality is mostly related to assembly loading, which makes it easier to produce native hosts.

New Razor features

In this release, the team has added support for the following new Razor features:

- @attribute: a new directive that adds the specified attribute to the generated class.
- @code: a new directive used in .razor files to specify a code block whose members are added to the generated class.
- @key: a new directive attribute used in .razor files to specify a value that the Blazor diffing algorithm can use to preserve elements or components in a list.
- @namespace: the @namespace directive already works in pages and views apps and is now also supported with components (.razor).

Blazor directive attributes

In this release, the team has standardized a common syntax for directive attributes in Blazor, which makes the Razor syntax used by Blazor more consistent and predictable.

Event handlers

In Blazor, event handlers now use the new directive attribute syntax instead of the normal HTML syntax. The new syntax is similar to the HTML syntax, but the leading @ character makes C# event handlers distinct from JS event handlers.

Authentication and authorization support

With this release, Blazor has built-in support for handling authentication and authorization. The server-side Blazor template also supports the options used to enable the standard authentication configurations with ASP.NET Core Identity, Azure AD, and Azure AD B2C.

Certificate and Kerberos authentication in ASP.NET Core

Preview 6 brings Certificate and Kerberos authentication to ASP.NET Core. Certificate authentication requires configuring the server to accept certificates, then adding the authentication middleware in Startup.Configure and the certificate authentication service in Startup.ConfigureServices.

Users are happy with this news and think the updates will be useful.

https://twitter.com/gcaughey/status/1138889676192997380
https://twitter.com/dodyg/status/1138897171636531200
https://twitter.com/acemod13/status/1138907195523907584

To know more about this news, check out the official blog post.

.NET Core releases May 2019 updates
An introduction to TypeScript types for ASP.NET core [Tutorial]
What to expect in ASP.NET Core 3.0

Scala 2.13 is here with overhauled collections, improved compiler performance, and more!

Bhagyashree R
12 Jun 2019
2 min read
Last week, the Scala team announced the release of Scala 2.13. This release brings a number of improvements, including overhauled standard library collections, a 5-10% faster compiler, and more.

Overhauled standard library collections

The major highlight of Scala 2.13 is the standard library collections, which are now simpler, faster, and safer than in previous versions. Some of the important changes made to collections include:

Simpler method signatures

The implicit CanBuildFrom parameter was one of the most powerful abstractions in the collections library, but it made method signatures difficult to understand. Beginning with this release, transformation methods no longer take an implicit CanBuildFrom parameter, making the resulting code simpler and easier to understand.

Simpler type hierarchy

The package scala.collection.parallel has been moved out into a separate module, which comes as its own JAR that you can omit from your project if it does not use parallel collections. Additionally, Traversable and TraversableOnce are now deprecated.

New concrete collections

The Stream collection is replaced by LazyList, which evaluates elements in order and only when needed. A new mutable.CollisionProofHashMap collection implements mutable maps using a hashtable with red-black trees in the buckets, providing good performance even in worst-case scenarios of hash collisions. The mutable.ArrayDeque collection has also been added: a double-ended queue that internally uses a resizable circular buffer.

Improved concurrency

In Scala 2.13, Futures have been "internally redesigned" to ensure they provide the expected behavior under a broader set of failures. The updated Futures also provide a foundation for increased performance and support for more robust applications.

Changes in the language

Updates to the language include the introduction of literal-based singleton types, partial unification enabled by default, and by-name method arguments extended to support both implicit and explicit parameters.

Compiler updates

The compiler can now perform deterministic and reproducible compilation, meaning it generates identical output for identical input in more cases. Also, operations on collections and arrays are now optimized, making the compiler 5-10% faster than Scala 2.12.

These were some of the exciting updates in Scala 2.13. For a detailed list, check out the official release notes.

How to set up the Scala Plugin in IntelliJ IDE [Tutorial]
Understanding functional reactive programming in Scala [Tutorial]
Classifying flowers in Iris Dataset using Scala [Tutorial]

Python 3.8 beta 1 is now ready for you to test

Bhagyashree R
11 Jun 2019
2 min read
Last week, the team behind Python announced the release of Python 3.8.0b1, the first of four planned beta previews of Python 3.8. This release marks the beginning of the beta phase, in which you can test new features and make your applications ready for the new release.

https://twitter.com/ThePSF/status/1137797764828553222

These are some of the features you will see in the upcoming Python 3.8 version (hedged sketches of the new syntax and of the audit hook API follow at the end of this article):

Assignment expressions

Assignment expressions were proposed in PEP 572, which was accepted after extensive discussion among the Python developers. This feature introduces a new operator (:=) with which you can assign to variables within an expression.

Positional-only arguments

In Python, you can pass an argument to a function by position, keyword, or both. API designers may sometimes want to restrict certain arguments to being passed by position only. To make this easy to implement, Python 3.8 will come with a new marker (/) to indicate that the arguments to its left are positional-only. This is similar to *, which indicates that the arguments to its right are keyword-only.

Python Initialization Configuration

Python is highly configurable, but its configuration logic is scattered all around the code. This version introduces new functions and structures in the Python Initialization C API to give Python developers a "straightforward and reliable way" to configure Python.

The Vectorcall protocol for CPython

The calling convention impacts the flexibility and performance of your code considerably. To optimize the calling of objects, this release introduces the Vectorcall protocol, a calling convention already used internally for Python and built-in functions.

Runtime audit hooks

Python 3.8 will come with two new APIs, Audit Hook and Verified Open Hook, to give you insight into a running Python application. These will help both application developers and system administrators integrate Python into their existing monitoring systems.

As this is a beta release, developers should refrain from using it in production environments. The next beta release is currently planned for July 1st. To know more about Python 3.8.0b1, check out the official announcement.

Which Python framework is best for building RESTful APIs? Django or Flask?
PyCon 2019 highlights: Python Steering Council discusses the changes in the current Python governance structure
Python 3.8 alpha 2 is now available for testing
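To make the two syntax changes above concrete, here is a minimal sketch, assuming a Python 3.8 interpreter; the names and values are purely illustrative:

```python
# Requires Python 3.8+; this syntax is a SyntaxError on earlier versions.

# Assignment expressions (PEP 572): the walrus operator := binds a name
# inside an expression, avoiding a separate assignment statement.
data = [1, 4, 9, 16, 25]
if (n := len(data)) > 3:
    print(f"data has {n} elements, more than expected")

# Positional-only parameters (PEP 570): parameters before / must be
# passed by position; parameters after * must be passed by keyword.
def clamp(value, /, lo, hi, *, strict=False):
    if strict and not (lo <= value <= hi):
        raise ValueError("value out of range")
    return max(lo, min(value, hi))

print(clamp(12, 0, 10))        # OK -> 10
# clamp(value=12, lo=0, hi=10) # TypeError: 'value' is positional-only
```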
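The runtime audit hooks are described in PEP 578. Below is a minimal sketch of the audit-hook half of that API, assuming the sys.addaudithook() interface from the PEP ships unchanged in the final 3.8 release:

```python
import sys

def audit_hook(event, args):
    # Called synchronously for every audited event in this interpreter.
    # Hooks should be fast and should not raise, since a raising hook
    # aborts the operation being audited.
    if event == "open":
        print(f"audit: open {args!r}")

# Register the hook; per PEP 578, hooks cannot be removed once added.
sys.addaudithook(audit_hook)

open("example.txt", "w").close()  # raises an "open" audit event
```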

GrapheneOS now comes with new device support for Auditor app, Hardened malloc and a new website

Amrata Joshi
11 Jun 2019
4 min read
GrapheneOS, an open source privacy- and security-focused mobile OS, comes with Android app compatibility. GrapheneOS releases are supported by the Auditor app and an attestation service for hardware-based attestation. The GrapheneOS research and engineering project has been in progress for over 5 years; in March, the AndroidHardening project was renamed to GrapheneOS.

Two days ago, GrapheneOS released a new website, grapheneos.org, with additional documentation, tutorials, and coverage of topics related to software, firmware, and hardware, as well as privacy/security features expected in the future. The team has also released a new version, PQ3A.190605.003.2019.06.03.18, with new device support, Auditor app updates, and hardened malloc, among other fixes.

Changes in the GrapheneOS project

Auditor: update to version 12

The Auditor app has added support for verifying CalyxOS on the Pixel 2, Pixel 2 XL, Pixel 3, and Pixel 3 XL, and a verified boot hash display has been added. Auditor uses hardware security features on supported devices to validate the integrity of the operating system from another Android device. The app verifies that the device is running the stock operating system with the bootloader locked, and further checks that no tampering of the operating system has occurred. The list of supported devices for the Auditor app includes the BlackBerry Key2, BQ Aquaris X2 Pro, Google Pixel, Google Pixel 2, Google Pixel 2 XL, Google Pixel 3, Google Pixel 3 XL, Google Pixel 3a, Google Pixel 3a XL, Huawei Honor 7A Pro, Huawei Honor 10, and more. Full list here.

https://twitter.com/GrapheneOS/status/1125928692671057920

Hardened malloc

Hardened malloc is a security-focused general-purpose memory allocator that provides the malloc API along with various extensions. Its security-focused design leads to less metadata overhead and less memory waste from fragmentation than a traditional allocator design.

https://twitter.com/GrapheneOS/status/1113556017768325120

It also offers substantial hardening against heap corruption vulnerabilities and aims to provide decent overall performance, with a focus on long-term performance and memory usage. Hardened malloc currently supports Bionic (Android), musl, and glibc, and it may support other non-Linux operating systems in the future. Custom integration, along with other hardening features, is also planned for musl in the future. In GrapheneOS only, hardened_malloc has additionally been extended with a workaround for Pixel 3 and Pixel 3 XL camera issues.

In the long term, according to the team, GrapheneOS needs to move towards a microkernel-based model with a Linux compatibility layer, adopt virtualization-based isolation, and eventually move into the hardware space.

Restorations of past features since the 2019.05.18.20 release include:

- Exec spawning has been enabled by default, but is disabled when using debugging options
- Verizon visual voicemail support has been enabled
- A toggle for disabling newly added USB devices has been added to the Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3, and Pixel 3 XL
- Properties for controlling deny_new_usb have been added
- A dynamic deny_new_usb toggle mode has been implemented
- The deny_new_usb feature is set to dynamic by default

Many are happy with this latest update. A user commented on Hacker News, "They're making good progress and I can't wait to be able to update my handheld device with mainline pieces for as long as anyone who still uses one cares to update it. Currently my Samsung Android device is at Dec 2018 patchlevel and nothing I can do about it." A few others are skeptical; another user commented, "There is security, and then there is freedom. You can have the most secure system in the world -- but if there are state sponsored, or company back doors it means nothing."

To know more about this news, check out the official website.

AndroidHardening Project renamed to GrapheneOS to reflect progress and expansion of the project
GitHub introduces 'Template repository' for easy boilerplate code management and distribution
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Amrata Joshi
11 Jun 2019
3 min read
Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks for improving machine learning research reproducibility.

Reproducibility is an essential requirement in many fields of research, including those based on machine learning techniques, yet many machine learning research publications are either not reproducible or are too difficult to reproduce. With the growing number of research publications, including tens of thousands of papers hosted on arXiv and submissions to conferences, research reproducibility has become even more important. Though many publications are accompanied by useful code and trained models, users are still left to figure out most of the steps for themselves.

PyTorch Hub consists of a pre-trained model repository designed to facilitate research reproducibility and enable new research. It provides built-in support for Colab, integration with Papers With Code, and a set of models covering classification and segmentation, transformers, generative models, and more. By adding a simple hubconf.py file, owners can publish pre-trained models from a GitHub repository; the file provides a list of supported models and a list of dependencies required for running them. For examples, one can check out the torchvision, huggingface-bert, and gan-model-zoo repositories.

Consider the case of the torchvision hubconf.py: in the torchvision repository, each of the model files can function and be executed independently. These model files require no package other than PyTorch and don't need separate entry-points. (A hedged sketch of a minimal hubconf.py, and of the consumer side, follows below.) A hubconf.py helps users send a pull request based on the template mentioned on the GitHub page. The official blog post reads, "Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore."

PyTorch Hub allows users to explore available models, load a model, and understand the kinds of methods available for any given model. A few examples:

- Explore available entrypoints: With the torch.hub.list() API, users can list all available entrypoints in a repo. PyTorch Hub also allows auxiliary entrypoints apart from pre-trained models, such as bertTokenizer for preprocessing in the BERT models, making the user workflow smoother.
- Load a model: With the torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.

Most users are happy about this news, as they think it will be useful. A user commented on Hacker News, "I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary." Another user commented, "This will also make things easier for people writing algorithms on top of one of the base models."

To know more about this news, check out PyTorch's blog post.
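To make the publishing flow concrete, here is a minimal, hedged sketch of what a hubconf.py might look like, modeled loosely on the torchvision example mentioned above; the dependencies list and the resnet18 entrypoint follow the pattern the article describes, but treat the details as illustrative rather than the exact torchvision file:

```python
# hubconf.py -- placed at the root of a GitHub repo to publish models
# on PyTorch Hub. An illustrative sketch, not the exact torchvision file.

# Packages required to run the entrypoints below (checked at load time).
dependencies = ["torch", "torchvision"]

def resnet18(pretrained=False, **kwargs):
    """ResNet-18 entrypoint.
    pretrained (bool): if True, load weights pre-trained on ImageNet.
    """
    # Delegate to torchvision's implementation here; a repo publishing
    # its own models would construct and return its own torch.nn.Module.
    from torchvision.models import resnet18 as _resnet18
    return _resnet18(pretrained=pretrained, **kwargs)
```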
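And a short sketch of the consumer side, using the torch.hub.list() and torch.hub.load() APIs the article mentions; "pytorch/vision" is the torchvision repo cited above, and resnet18 is an illustrative entrypoint name:

```python
import torch

# List the entrypoints a repo publishes via its hubconf.py
# (format: "repo_owner/repo_name[:branch]").
entrypoints = torch.hub.list("pytorch/vision")
print(entrypoints)

# Load a pre-trained model through its entrypoint; pretrained=True is
# forwarded to the entrypoint function shown in the sketch above.
model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
model.eval()  # inference mode for evaluation/reproduction runs
```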
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet