
How-To Tutorials - Programming

1081 Articles

Node.js 13 releases with an upgraded V8, full ICU support, stable Worker Threads API and more

Fatema Patrawala
23 Oct 2019
4 min read
Yesterday was a super exciting day for Node.js developers, as the Node.js foundation announced the release of Node.js 13 and the transition of Node.js 12 to Long Term Support (LTS). Node.js 12 becomes the newest LTS release, joining versions 10 and 8. This release marks the transition of the 12.x line into LTS under the codename 'Erbium'. The 12.x release line now moves into "Active LTS" and will remain so until October 2020; it will then move into "Maintenance" until its end of life in April 2022.

The new Node.js 13 release delivers faster startup and better default heap limits. It includes updates to V8, TLS, and llhttp, and new features like a diagnostic report, a bundled heap dump capability, and updates to Worker Threads, N-API, and more.

Key features in Node.js 13

Let us take a look at the key features included in Node.js 13.

V8 gets an upgrade to V8 7.8

This release upgrades the V8 JavaScript engine to version 7.8, which brings performance tweaks and improvements that keep Node.js current with the ongoing improvements in the language and runtime.

Full ICU enabled by default in Node.js 13

As of Node.js 13, full-icu is now the default, which means hundreds of other locales are supported out of the box. This simplifies the development and deployment of applications for non-English users.

Stable Worker Threads API

The Worker Threads API is now a stable feature in both Node.js 12 and Node.js 13. While Node.js already performs well with its single-threaded event loop, there are some use cases where additional threads can be leveraged for better results.

New compiler and platform support

Node.js and V8 continue to embrace newer C++ features and take advantage of newer compiler optimizations and security enhancements. With the release of Node.js 13, the codebase now requires a minimum of version 10 of the OS X development tools and version 7.2 of the AIX operating system. In addition to this, there has been progress on supporting Python 3 for building Node.js. Systems that have both Python 2 and Python 3 installed will still use Python 2; however, systems with only Python 3 should now be able to build using Python 3.

Developers discuss pain points in Node.js 13

On Hacker News, users discussed various pain points in Node.js 13 and some of the functionality missing from this release. One of the users commented, "To save you the clicks: Node.js 13 doesn't support top-level await. Node includes V8 7.8, released Sep 27. Top-level await merged into V8 on Sep 24, but didn't make it in time for the 7.8 release." A member of the V8 team responded, "TLA is only in modules. Once node supports modules, it will also have TLA. We're also pushing out a version with 7.9 fairly soonish."

Other users discussed how Node.js performs with TypeScript: "I've been using node with typescript and it's amazing. VERY productive. The key thing is you can do a large refactoring without breaking anything. The biggest challenge I have right now is actually the tooling. Intellij tends to break sometimes. I'm using lerna for a monorepo with sub-modules and it's buggy with regular npm. For example 'npm audit' doesn't work. I might have to migrate to yarn…"

If you are interested in knowing more about this release, check out the official Node.js blog post as well as the GitHub page for the release notes.
The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Google is planning to bring Node.js support to Fuchsia

The new WebSocket Inspector will be released in Firefox 71

Fatema Patrawala
17 Oct 2019
4 min read
On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API can send and receive data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS, and support for more libraries is a work in progress.

Key features included in the Firefox WebSocket Inspector

The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the panel's content for opened WS connections, but now you can also see the actual data transferred through WS frames.

The WS UI now offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection. There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.

The WS inspector currently supports the following WS protocols: plain JSON, Socket.IO, and SockJS. SignalR and WAMP will be supported soon. You can also use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release, for example a binary payload viewer, indicating closed connections, more protocols like SignalR and WAMP, and exporting WS frames.

For developers, this is a major improvement, and the community is really happy with this news. One of them commented on Reddit, "Finally! Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received." Another user commented, "This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what's going on."

Some also feel that with such improvements Firefox may start to chip away at Chromium's dominance. The comment reads, "I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from the Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly because of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant."

To know more about this news, check out the official announcement on the Firefox blog.
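If you want some WS traffic to point the inspector at, a few lines of Go are enough to stand up an echo server whose frames show up in the Messages panel. This is a minimal sketch that assumes the third-party github.com/gorilla/websocket package is available; it is not part of the Firefox announcement:

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/websocket" // third-party package; assumed installed
)

// The zero-value Upgrader uses default buffer sizes and a same-origin check.
var upgrader = websocket.Upgrader{}

func echo(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil) // switch the HTTP connection to WebSocket
    if err != nil {
        log.Println("upgrade:", err)
        return
    }
    defer conn.Close()
    for {
        // Each ReadMessage/WriteMessage pair is one message the inspector can display.
        msgType, msg, err := conn.ReadMessage()
        if err != nil {
            return // client went away or the connection broke
        }
        if err := conn.WriteMessage(msgType, msg); err != nil {
            return
        }
    }
}

func main() {
    http.HandleFunc("/ws", echo)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Connecting to ws://localhost:8080/ws from a page served by the same host and sending a few messages should populate the Network panel with a WS connection whose frames can be inspected.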
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta

Python 3.8 is now available with walrus operator, positional-only parameters, support for Vectorcall, and more

Sugandha Lahoti
15 Oct 2019
6 min read
Yesterday, the latest version of the Python programming language, Python 3.8, was made available with multiple new improvements and features. The features include the new walrus operator, positional-only parameters, runtime audit hooks, Vectorcall (a fast calling protocol for CPython), and more. Earlier this month, the team behind Python announced the release of Python 3.8b2, the second of four planned beta releases.

What's new in Python 3.8

PEP 572: New walrus operator in assignment expressions

Python 3.8 has a new walrus operator := that assigns values to variables as part of a larger expression. It is useful when matching regular expressions where match objects are needed twice. It can also be used with while loops that compute a value to test loop termination and then need that same value again in the body of the loop, and in list comprehensions where a value computed in a filtering condition is also needed in the expression body.

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year. Since then it has been heavily discussed in the Python community, with many questioning whether it is a needed improvement. Others are excited, as the operator does make the code more readable. One user commented on HN, "The "walrus operator" will occasionally be useful, but I doubt I will find many effective uses for it. Same with the forced positional/keyword arguments and the "self-documenting" f-string expressions. Even when they have a use, it's usually just to save one line of code or a few extra characters."

https://twitter.com/reynoldsnlp/status/1183498971425042433
https://twitter.com/jakevdp/status/1140071525870997504

PEP 570: New function parameter syntax for positional-only parameters

Python 3.8 has a new function parameter syntax / to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This notation allows pure Python functions to fully emulate the behaviors of existing C-coded functions. It can be used to preclude keyword arguments when the parameter name is not helpful. It also allows the parameter name to be changed in the future without the risk of breaking client code.

As with PEP 572, this proposal also got mixed reactions from Python developers. In support, one developer said, "Position-only parameters already exist in cpython builtins like range and min. Making their support at the language level would make their existence less confusing and documented." Others think that this will let authors "dictate" how their methods can be used. "Not the biggest fan of this one because it allows library authors to overly dictate how their functions can be used, as in, mark an argument as positional merely because they want to. But cool all the same," a Redditor commented.

PEP 578: Python audit hooks and verified open hook

Python 3.8 now has an audit hook and a verified open hook. These hooks allow applications and frameworks written in pure Python code to take advantage of extra notifications. They also allow embedders or system administrators to deploy builds of Python where auditing is always enabled. They are available from both Python and native code.

PEP 587: New C API to configure the Python initialization

Though Python is highly configurable, its configuration is scattered all around the code. Python 3.8 adds a new C API to configure the Python initialization, providing finer control over the whole configuration and better error reporting.
This PEP also adds _PyRuntimeState.preconfig (PyPreConfig type) and PyInterpreterState.config (PyConfig type) fields to internal structures. PyInterpreterState.config becomes the new reference configuration, replacing global configuration variables and other private variables.

PEP 590: Provisional support for Vectorcall, a fast calling protocol for CPython

A currently provisional Vectorcall protocol is added to the Python/C API. It is meant to formalize existing optimizations that were already done for various classes. Any extension type implementing a callable can use this protocol. It will be made fully public in Python 3.9.

PEP 574: Pickle protocol 5 supports out-of-band data buffers

Pickle protocol 5 introduces support for out-of-band buffers. This means PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.

Parallel filesystem cache for compiled bytecode files

There is a new PYTHONPYCACHEPREFIX setting that configures the implicit bytecode cache to use a separate parallel filesystem tree, rather than the default __pycache__ subdirectories within each source directory.

Python uses the same ABI whether it's built in release or debug mode

With Python 3.8, Python uses the same ABI whether it's built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode as well as C extensions built using the stable ABI. Also on Unix, C extensions are no longer linked to libpython except on Android and Cygwin, and when Python is built in debug mode, import now also looks for C extensions compiled in release mode and for C extensions compiled with the stable ABI.

f-strings now have a = specifier

Formatted strings (f-strings) were introduced in Python 3.6 with PEP 498. They enable you to evaluate an expression as part of a string, along with inserting the results of function calls and so on. Python 3.8 adds a = specifier to f-strings for self-documenting expressions and debugging. An f-string such as f'{expr=}' will expand to the text of the expression, an equal sign, then the representation of the evaluated expression.

One developer expressed their delight on Hacker News, "F strings are pretty awesome. I'm coming from JavaScript and partially java background. JavaScript's String concatenation can become too complex and I have difficulty with large strings." Another developer said, "The expansion of f-strings is a welcome addition. The more I use them, the happier I am that they exist." Someone added to this, "This makes clean string interpolation so much easier to do, especially for print statements. It's almost hard to use python < 3.6 now because of them."

New metadata module

Python 3.8 has a new importlib.metadata module that provides (provisional) support for reading metadata from third-party packages. It can, for instance, extract an installed package's version number, list of entry points, and more.

You can go through the other improved modules, language changes, build and C API changes, and API and feature removals in Python 3.8 on the Python docs. For full details, see the changelog.

Python 3.8b2 new features: the walrus operator, positional-only parameters, and much more
Python 3.8 beta 1 is now ready for you to test
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more

Bhagyashree R
25 Sep 2019
2 min read
Earlier this week, the ReactOS team announced the release of ReactOS 0.4.12. This release comes with a bunch of kernel improvements, Intel e1000 NIC driver support, font improvements, and more.

Key updates in ReactOS 0.4.12

Kernel updates

The filesystem infrastructure of ReactOS 0.4.12 has received quite a few improvements to enable Microsoft filesystem drivers. The team has also worked on the common cache module, which has deep ties to the memory manager. In addition, the team improved device power management, fixed support for PXE booting, and overhauled the write-protection functionality.

Window snapping

ReactOS 0.4.12 comes with support for window snapping, so users will now be able to align windows to the sides of the screen, or maximize and minimize them, by dragging in specific directions. This release also adds the keyboard shortcuts that accompany this feature.

Intel e1000 NIC driver

ReactOS 0.4.12 has a new driver that supports Intel e1000 Network Interface Cards (NICs) out of the box, so end users no longer need to manually find and install a driver.

Font improvements

In ReactOS 0.4.12, font rendering is made more robust and correct. This release fixes a series of problems that badly affected text rendering for buttons in a range of applications, from iTunes to various .NET applications.

User-mode DLLs

In this release, the team has made a range of improvements to user-mode components. The common controls (comctl) library is used by most Windows applications to draw various generic user interface elements, and the team has fixed an issue related to it "reading extremely dryly." Other updates include drivers for MIDI instruments and an animated rotation bar in the startup/shutdown dialog.

Check out the official announcement to learn more about ReactOS 0.4.12 in detail.

ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
Btrfs now boots ReactOS, a free and open-source alternative for Windows NT
ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes
Understanding network port numbers, TCP, UDP, and ICMP on an operating system
Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report

Red Hat announces CentOS Stream, a “developer-forward distribution” jointly with the CentOS Project

Savia Lobo
25 Sep 2019
3 min read
On September 24, just after the much-awaited CentOS 8 was released, the Red Hat community, in agreement with the CentOS Project, announced a new addition to the CentOS Linux community called CentOS Stream.

CentOS Stream is an upstream development platform for ecosystem developers. It is a single, continuous stream of content, updated several times daily, encompassing the latest and greatest from the RHEL codebase. It is like having a view into what the next version of RHEL will look like, available to a much broader community than a beta or "preview" release.

Chris Wright, Red Hat's CTO, says CentOS Stream is "a developer-forward distribution that aims to help community members, Red Hat partners, and others take full advantage of open source innovation within a more stable and predictable Linux ecosystem. It is a parallel distribution to existing CentOS."

With previous CentOS releases, developers would not know beforehand what was coming in RHEL. Because the CentOS Stream project sits between the Fedora Project and RHEL in the RHEL development process, it will provide a "rolling preview" of future RHEL kernels and features. This enables developers to stay one or two steps ahead of what's coming in RHEL.

"CentOS Stream is parallel to existing CentOS builds; this means that nothing changes for current users of CentOS Linux and services, even those that begin to explore the newly-released CentOS 8. We encourage interested users that want to be more tightly involved in driving the future of enterprise Linux, however, to transition to CentOS Stream as the new "pace-setting" distribution," the Red Hat blog states. CentOS Stream is part of Red Hat's broader focus on engaging with communities and developers in a way that better aligns with the modern IT world.

A user on Hacker News commented, "I like it, at least in theory. I develop some industrial software that runs on RHEL so being able to run somewhat similar distribution on my machine would be convenient. I tried running CentOS but it was too frustrating and limiting to deal with all the outdated packages on a dev machine. I suppose it will also be good for devs who just like the RHEL environment but don't need a super stable, outdated packages."

Another user commented, "I wonder what future Fedora will have if this new CentOS Stream will be stable enough for developer daily driver. 6 month release cycle of Fedora always felt awkwardly in-between, not having the stability of lts nor the continuity of rolling. I guess lot depends on details on how the packages flow to CentOS Stream, do they come from released Fedora versions or rawhide etc."

To know more about CentOS Stream in detail, read Red Hat's official blog post.

After RHEL 8 release, users awaiting the release of CentOS 8
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side license adoption
Red Hat announces the general availability of Red Hat OpenShift Service Mesh
Introducing ESPRESSO, an open-source, PyTorch based, end-to-end neural automatic speech recognition (ASR) toolkit for distributed training across GPUs
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3

Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations

Bhagyashree R
24 Sep 2019
5 min read
Weld is an open-source Rust project for improving the performance of data-intensive applications. It is an interface and runtime that can be integrated into existing frameworks, including Spark, TensorFlow, Pandas, and NumPy, without changing their user-facing APIs.

The motivation behind Weld

Data analytics applications today often require developers to combine functions from different libraries and frameworks to accomplish a particular task. For instance, a typical Python ecosystem application selects some data using Spark SQL, transforms it using NumPy and Pandas, and trains a model with TensorFlow. This improves developers' productivity, as they are taking advantage of functions from high-quality libraries. However, these functions are usually optimized in isolation, which is not enough to achieve the best application performance. Weld aims to solve this problem by providing an interface and runtime that can optimize across data-intensive libraries and frameworks while preserving their user-facing APIs.

In an interview with Federico Carrone, a tech lead at LambdaClass, Weld's main contributor Shoumik Palkar shared, "The motivation behind Weld is to provide bare-metal performance for applications that rely on existing high-level APIs such as NumPy and Pandas. The main problem it solves is enabling cross-function and cross-library optimizations that other libraries today don't provide."

How Weld works

Weld serves as a common runtime that allows libraries from different domains, like SQL and machine learning, to represent their computations in a common functional intermediate representation (IR). This IR is then optimized by a compiler optimizer and JIT'd to efficient machine code for diverse parallel hardware. Weld performs a wide range of optimizations on the IR, including loop fusion, loop tiling, and vectorization. "Weld's IR is natively parallel, so programs expressed in it can always be trivially parallelized," said Palkar.

When Weld was first introduced, it was mainly used for cross-library optimizations. Over time, however, people have started to use it for other applications as well. It can be used to build JITs or new physical execution engines for databases or analytics frameworks, to target new kinds of parallel hardware using the IR, and more.

To evaluate Weld's performance, the team integrated it with popular data analytics frameworks, including Spark, NumPy, and TensorFlow. This prototype showed up to 30x improvements over the native framework implementations, while cross-library optimizations between Pandas and NumPy improved performance by up to two orders of magnitude (source: Weld).

Why Rust and LLVM were chosen for its implementation

The first iteration of Weld was implemented in Scala because of its algebraic data types, powerful pattern matching, and large ecosystem. However, Scala did have some shortcomings. Palkar shared in the interview, "We moved away from Scala because it was too difficult to embed a JVM-based language into other runtimes and languages." It had a managed runtime, a clunky build system, and its JIT compilations were quite slow for larger programs. Because of these shortcomings, the team wanted to redesign the JIT compiler, core API, and runtime from the ground up. They were searching for a language that was fast and safe, didn't have a managed runtime, and provided a rich standard library, functional paradigms, a good package manager, and a great community. So they zeroed in on Rust, which happens to meet all these requirements.
Rust provides a very minimal, no-setup-required runtime. It can be easily embedded into other languages such as Java and Python. To make development easier, it has high-quality packages, known as crates, and functional paradigms such as pattern matching. Lastly, it is backed by a great community.

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett

Explaining the reason why they chose LLVM, Palkar said in the interview, "We chose LLVM because it's an open-source compiler framework that has wide use and support; we generate LLVM directly instead of C/C++ so we don't need to rely on the existence of a C compiler, and because it improves compilation times (we don't need to parse C/C++ code)."

In a discussion on Hacker News, many users listed other Weld-like projects that developers may find useful. A user commented, "Also worth checking out OmniSci (formerly MapD), which features an LLVM query compiler to gain large speedups executing SQL on both CPU and GPU." Users also talked about Numba, an open-source JIT compiler that translates Python functions to optimized machine code at runtime with the help of the LLVM compiler library. "Very bizarre there is no discussion of numba here, which has been around and used widely for many years, achieves faster speedups than this, and also emits an LLVM IR that is likely a much better starting point for developing a "universal" scientific computing IR than doing yet another thing that further complicates it with fairly needless involvement of Rust," a user added.

To know more about Weld, check out the full interview on Medium. Also, watch this RustConf 2019 talk by Shoumik Palkar: https://www.youtube.com/watch?v=AZsgdCEQjFo&t

Other news in programming

Darklang available in private beta
GNU community announces 'Parallel GCC' for parallelism in real-world compilers
TextMate 2.0, the text editor for macOS releases

How Quarkus brings Java into the modern world of enterprise tech

Guest Contributor
22 Sep 2019
6 min read
What is old is new again, even - and maybe especially - in the world of technology. To name a few milestones being celebrated this year: Java is roughly 25 years old, it is the 10th anniversary of Minecraft, and Nintendo is back in vogue. None of these three examples shows signs of slowing down anytime soon. I would argue that they continue to lead in innovation for a simple reason: there are still people behind them creatively breathing new life into what could otherwise have been "been there, done that" technologies.

Java, in particular, is so widely used that from an enterprise efficiency perspective it simply does not make sense NOT to have Java be a key language in the development of emerging tech apps. In fact, more and more apps are being developed with a Java-first approach. But how can this be done, especially when apps are being built using new architectures like serverless and microservices? One technology that shows promise is Quarkus, a newly introduced Kubernetes-native platform that addresses many of the barriers hindering Java's ability to shine in the modern world of emerging tech.

Why Java still matters

Even though its continued relevance has been questioned for years, I believe Java still matters and is not likely to go away anytime soon. This is for two reasons. First, there is a whole list of programming languages that are based on Java and the Java Virtual Machine (JVM), such as Kotlin, Groovy, Clojure, and JRuby. Also, Java continues to be one of the most popular programming languages for Android apps, as well as for the development of edge devices and the internet of things. In fact, according to SlashData's State of the Developer Nation Q4 2018 report, there are 7.6 million active Java developers worldwide.

Other factors contributing to Java's continued popularity include network portability, the fact that it is object-oriented, that it compiles to bytecode which can be read and run on any platform with a JVM installed, and, maybe most importantly, a syntax similar to C++, making it a relatively easy language for developers to learn.

Additionally, SlashData's research suggested that newer and niche languages do not seem to be adding many new developers, if any, per year, raising the question of whether it is easy for newer languages to scale beyond their niche and become the next big thing. It also makes it clear that while there is value in newer programming languages that serve narrower purposes, they may not be able to, or need to, overtake languages like Java. In fact, the success of Java relies on the entire ecosystem surrounding it, including the editors, third-party libraries, CI/CD pipelines, and systems. Each aspect of the ecosystem is easy to take for granted in established languages, but these are things that need to be created from scratch in new languages if they want to catch up to or overtake Java.

How Quarkus brings Java into modern enterprise tech

Quarkus is more than just a cool name. It is a Kubernetes-native Java framework tailored for GraalVM and HotSpot, crafted from best-of-breed Java libraries and standards. The overall goal of Quarkus is to make Java one of the leading platforms in Kubernetes and serverless environments, while also enabling developers to work within what they know and in both reactive and imperative programming models.
Put simply, Quarkus works to bring Java into the modern microservices and serverless modes of developing. This is important because Java continues to be a top programming language for back-end enterprise developers. Many organizations have invested both time and money into Java, which has been a dominant force in the development landscape for a number of years. As enterprises increasingly shift toward cloud computing, it is important for Java to carry over into these new programming methods.

Why a "Java first" approach

Java has been a top programming language for enterprises for over a decade. We should not lose sight of that fact, nor of the many developers with excellent Java skills and the existing applications that run on Java. Furthermore, because Java has been around so long, it has matured not only as a language but also as an ecosystem. There are editors, logging systems, debuggers, build systems, unit testing environments, QA testing environments, and more, all tuned for Java, if not also implemented in Java. Therefore, when starting a new Java application, it can be easier to find third-party components or entire systems that help the developer gain productivity advantages over other languages that have not yet grown to the breadth and depth of the Java ecosystem.

Using a full-stack framework such as Quarkus, and taking advantage of libraries that use Java, such as Eclipse MicroProfile and Eclipse Vert.x, makes this easier, and also encourages the use of different combinations of tools and dependencies. Quarkus in particular also includes an extension framework that third-party authors can use to build native executables and expand the functionality of Java in the enterprise. Quarkus not only brings Java into the modern world of containers, it does so quickly, with short start-up times.

Java is not looking like it will go away anytime soon. Between the number of developers who still use Java as their first language and the number of apps that run almost entirely on it, Java's stake in the game is as solid as ever. Through new tools like Quarkus, it can continue to evolve in the modern app dev world.

Author Bio

Mark Little works at Red Hat, where he leads the JBoss technical direction and research & development. Prior to this, he was SOA Technical Development Manager and Director of Standards. He also has experience with two successful startup companies.

Other interesting news in Tech

Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data and Society report
Open AI researchers advance multi-agent competition by training AI agents in a hide and seek environment
France and Germany reaffirm blocking Facebook's Libra cryptocurrency

Is Scala 3.0 a new language altogether? Martin Odersky, its designer, says “yes and no”

Bhagyashree R
10 Sep 2019
6 min read
At Scala Days Lausanne 2019 in July, Martin Odersky, the lead designer of Scala, gave a tour of the upcoming major version, Scala 3.0. He talked about the roadmap to Scala 3.0, its new features, how its situation differs from Python 2 vs 3, and much more.

Roadmap to Scala 3.0

Odersky announced that "Scala 3.0 has almost arrived," since all the features are fleshed out, with implementations in the latest releases of Dotty, the next-generation compiler for Scala. The team plans to go into feature freeze and release Scala 3.0 M1 in fall this year. Following that, the team will focus on stabilization, completing the SIP process, and writing specs and user docs. They will also work on community builds, compatibility, and migration tasks. All these tasks will take about a year, so we can expect the Scala 3.0 release in fall 2020.

Scala 2.13 was released in June this year. It shipped with redesigned collections, an updated futures implementation, and language changes including literal types, partial unification on by default, by-name implicits, and macro annotations, among others. The team is also working simultaneously on its next release, Scala 2.14, whose main focus will be to ease the migration from Scala 2 to 3 by defining migration tools, shim libraries, targeted deprecations, and more.

What's new in this major release

There is a whole lot of improvement coming in Scala 3.0, some of which Odersky discussed in his talk:

Scala will drop the 'new' keyword: Starting with Scala 3.0, you will be able to omit 'new' from almost all instance creations. With this change, developers will no longer have to define a case class just to get nice constructor calls. It also prevents accidental infinite loops in cases where 'apply' has the same arguments as the constructor.

Top-level definitions: In Scala 3.0, top-level definitions will be added as a replacement for package objects. This is because only one package object definition is allowed per package, and a trait or class defined in the package object is different from one defined in the package, which can lead to unexpected behavior.

Redesigned enumeration support: Previously, Scala did not provide a very straightforward way to define enums. With this update, developers will have a simple way to define new types with a finite number of values or constructions. They will also be able to add parameters and define fields and methods.

Union types: In previous versions, union types could be expressed with constructs such as Either or subtyping hierarchies, but these constructs are bulkier. Adding union types to the language fixes Scala's least-upper-bounds problem and provides added modelling power.

Extension methods: With extension methods, you can define methods that can be used infix without any boilerplate. These will essentially replace implicit classes.

Delegates: Implicits are a "bedrock of programming" in Scala. However, they suffer from several limitations. Odersky calls implicit conversions a "recipe for disaster" because they tend to interact very badly with each other and add too much implicitness. Delegates will be their simpler and safer alternative.

Functions everywhere: In Scala, functions and methods are two different things. While methods are members of classes and objects, functions are objects themselves. Until now, methods have been more powerful than functions: they can be dependent, polymorphic, and implicit. With Scala 3.0, these properties will be associated with functions as well.
Recent discussions regarding the updates in Scala 3.0

A minimal alternative for scala-reflect and TypeTag

Scala 3.0 will drop support for 'scala-reflect' and 'TypeTag', and there hasn't been much discussion about an alternative. However, some developers believe it is an important feature currently in use by many projects, including Spark and doobie. Explaining the reason behind dropping the support, a SIP committee member wrote on the discussion forum, "The goal in Scala 3 is what we had before scala-reflect. For use-cases where you only need an 80% solution, you should be able to accomplish that with straight-up Java reflection. If you need more, TASTY can provide you the basics. However, we don't think a 100% solution is something folks need, and it's unclear if there should be a "core" implementation that is not 100%."

Odersky shared that Scala 3.0 has quoted.Type as an alternative to TypeTag. He commented, "Scala 3 has the quoted package, with quoted.Expr as a representation of expressions and quoted.Type as a representation of types. quoted.Type essentially replaces TypeTag. It does not have the same API but has similar functionality. It should be easier to use since it integrates well with quoted terms and pattern matching." Follow the discussion on Scala Contributors.

Python-style indentation (for now experimental)

Last month, Odersky proposed bringing indentation-based syntax to Scala while continuing to support the brace-based syntax. When Scala was first created, most languages used braces; since then, indentation-based syntax has become conventional. Listing the reasons behind this change, Odersky wrote:

The most widely taught language is now (or will be soon, in any case) Python, which is indentation based.
Other popular functional languages are also indentation based (e.g. Haskell, F#, Elm, Agda, Idris).
Documentation and configuration files have shifted from HTML and XML to markdown and yaml, which are both indentation based.

So by now indentation is very natural, even obvious, to developers. There's a chance that anything else will increasingly be considered "crufty".

Odersky on whether Scala 3.0 is a new language

Odersky answers this with both yes and no. Yes, because Scala 3.0 will include several language changes, including feature removals, the new constructs will improve user experience and on-boarding dramatically, and current Scala books will need to be rewritten to reflect the recent developments. No, because it will still be Scala and all core constructs will still be the same. He concludes, "Between yes and no, I think the fairest answer is to say it is really a process. Scala 3 keeps most constructs of Scala 2.13, alongside the new ones. Some constructs like old implicits and so on will be phased out in the 3.x release train. So, that requires some temporary duplication in the language, but the end result should be a more compact and regular language."

Comparing with Python 2 and 3, Odersky believes that Scala's situation is better because of static typing and binary compatibility. The current version of Dotty can be linked with Scala 2.12 or 2.13 files, and he shared that in the future it will be possible to have a Dotty library module that can then be used by both Scala 2 and 3 modules.

Read also: Core Python team confirms sunsetting Python 2 on January 1, 2020

Watch Odersky's talk to know more in detail.
https://www.youtube.com/watch?v=_Rnrx2lo9cw&list=PLLMLOC3WM2r460iOm_Hx1lk6NkZb8Pj6A

Other news in programming

Implementing memory management with Golang's garbage collector
Golang 1.13 module mirror, index, and Checksum database are now production-ready
Why Perl 6 is considering a name change?

Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift

Bhagyashree R
09 Sep 2019
5 min read
After working for over 1.5 years on the Differentiable Programming Mega-Proposal, Richard Wei, a developer at Google Brain, and his team submitted the proposal on the Swift Evolution forum on Thursday last week. This proposal aims to "push Swift's capabilities to the next level in numerics and machine learning" by introducing differentiable programming as a new language feature in Swift. It is part of the Swift for TensorFlow project, under which the team is integrating TensorFlow directly into the language to offer developers a next-generation platform for machine learning.

What is differentiable programming

With the increasing sophistication of deep learning models and the introduction of modern deep learning frameworks, many researchers have started to realize that building neural networks is very similar to programming. Yann LeCun, VP and Chief AI Scientist at Facebook, calls differentiable programming "a little more than a rebranding of the modern collection Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers." He compares it with regular programming, with the only difference being that the resulting programs are "parameterized, automatically differentiated, and trainable/optimizable."

Many also say that differentiable programming is a different name for automatic differentiation, a collection of techniques for numerically evaluating the derivative of a function. It can be seen as a new programming paradigm in which programs can be differentiated throughout. Check out the paper "Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagation" to get a better understanding of differentiable programming.

Why differentiable programming is proposed in Swift

Swift is an expressive, high-performance language, which makes it a perfect candidate for numerical applications. According to the proposal's authors, first-class support for differentiable programming in Swift will allow safe and powerful machine learning development. The authors also believe that this is a "big step towards high-level numerical computing support." With this proposal, they aim to make Swift a "real contender in the numerical computing and machine learning landscape."

Here are some of the advantages of adding first-class support for differentiable programming in Swift:

Better language coverage: First-class differentiable programming support will enable differentiation to work smoothly with other Swift features. This will allow developers to code normally without being restricted to a subset of Swift.

Enable extensibility: This will provide developers with an "extensible differentiable programming system." They will be able to create custom differentiation APIs by leveraging primitive operators defined in the standard library and supported by the type system.

Static warnings and errors: The compiler will be able to statically identify functions that cannot be differentiated or that yield a zero derivative, and then emit a non-differentiability error or warning. This will improve productivity by making common runtime errors in machine learning directly debuggable without library boundaries.

Some of the components that will be added to Swift under this proposal are:

The Differentiable protocol: a standard library protocol that generalizes all data structures that can be a parameter or result of a differentiable function.
The @differentiable declaration attribute: used to mark function-like declarations as differentiable.

The @differentiable function types: a subtype of normal function types with a different runtime representation and calling convention. Differentiable function types have differentiable parameters and results.

Differential operators: the core differentiation APIs that take @differentiable functions as inputs and return derivative functions or compute derivative values.

The @differentiating and @transposing attributes: attributes for declaring custom derivative functions for some other function declaration.

This proposal sparked a discussion on Hacker News. Many developers were excited about bringing differentiable programming support into the Swift core. A user commented, "This is actually huge. I saw a proof of concept of something like this in Haskell a few years back, but it's amazing it see it (probably) making it into the core of a mainstream language. This may let them capture a large chunk of the ML market from Python - and hopefully, greatly improve ML APIs while they're at it."

Some felt that a library could have served the purpose: "I don't see why a well-written library could not serve the same purpose. It seems like a lot of cruft. I doubt, for example, Python would ever consider adding this and it's the de facto language that would benefit the most from something like this - due to the existing tools and communities. It just seems so narrow and not at the same level of abstraction that languages typically sit at. I could see the language supporting higher-level functionality so a library could do this without a bunch of extra work (such as by some reflection)," a user added.

Users also discussed another effort along the same lines: Julia's Zygote, a working prototype for source-to-source automatic differentiation. A user commented, "Yup, work is continuing apace with Julia's next-gen Zygote project. Also, from the GP's thought about applications beyond DL, my favorite examples so far are for model-based RL and Neural ODEs."

To know more in detail, check out the proposal: Differentiable Programming Mega-Proposal.

Other news in programming

Why Perl 6 is considering a name change?
The Julia team shares its finalized release process with the community
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Go 1.13 releases with error wrapping, TLS 1.3 enabled by default, improved number literals, and more

Bhagyashree R
04 Sep 2019
5 min read
Yesterday, the Go team announced the release of Go 1.13. This version comes with uniform and modernized number literals, support for error wrapping, TLS 1.3 on by default, improved modules support, and much more. Also, the go command now uses the module mirror and checksum database by default.

https://twitter.com/golang/status/1168966214108033029

Key updates in Go 1.13

TLS 1.3 enabled by default

Go 1.13 comes with Transport Layer Security (TLS) 1.3 support in the crypto/tls package enabled by default. If you wish to disable it, add tls13=0 to the GODEBUG environment variable. The team shared that the option to opt out will be removed in Go 1.14.

Uniform and modernized number literal prefixes

Go 1.13 introduces optional prefixes for binary and octal integer literals, hexadecimal floating-point literals, and imaginary literals. The 0b or 0B prefix indicates a binary integer literal, for instance 0b1011. The 0o or 0O prefix indicates an octal integer literal such as 0o660; the leading 0 prefix used in previous versions is also still valid. The 0x or 0X prefix can now also represent the mantissa of a floating-point number in hexadecimal format, such as 0x1.0p-1021. The imaginary suffix (i) can be used with any integer or floating-point literal. To improve readability, the digits of any number literal can now be separated using underscores, for instance 1_000_000 or 0b_1010_0110; a separator can also appear between the literal prefix and the first digit. (A short sketch of these literal forms appears at the end of this article.)

Read also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Go 1.13 features a module mirror to download modules

Starting with Go 1.13, the go command downloads and authenticates modules using the module mirror and checksum database by default. A module mirror is a module proxy that fetches modules from origin servers and then caches them for use in future requests. This enables the mirror to serve the source code even when the origin servers are down. The go command will use the module mirror maintained by the Go team, which is served at https://proxy.golang.org. If you are using an earlier version of the go command, you can use this service by setting GOPROXY=https://proxy.golang.org in your local environment.

Checksum database to authenticate modules

The go command maintains two module files called go.mod and go.sum. Introduced in version 1.11, Go modules are an alternative to GOPATH with support for versioning and package distribution. A module is basically a collection of Go packages stored in a file tree with a go.mod file at its root. The go.sum file consists of SHA-256 hashes of the source code that the go command can use to identify any misbehavior by an origin server or proxy. However, the drawback of this method is that it "works entirely by the trust on your first use." When a version of a dependency is added for the first time, the go command fetches the code and adds lines to the go.sum file on the fly. "The problem is that those go.sum lines aren't being checked against anyone else's: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you," the team explains. To solve this, the Go team has come up with a global source of go.sum lines, which they call a checksum database. This ensures that the go command always adds the same lines to everyone's go.sum file.
So, whenever the go command receives new source code, it can verify its hash against the global database, which is served by sum.golang.org.

Read also: Golang 1.13 module mirror, index, and Checksum database are now production-ready

Support for error wrapping

As per the error value proposal, Go 1.13 comes with support for error wrapping, which adds a standard way to wrap errors to the standard library. An error can now wrap another error by defining an Unwrap method that returns the wrapped error. Explaining how it works, the team wrote, "An error e can wrap another error w by providing an Unwrap method that returns w. Both e and w are available to programs, allowing e to provide additional context to w or to reinterpret it while still allowing programs to make decisions based on w."

To support this behavior, the fmt.Errorf function has a new %w verb for creating wrapped errors, and there are three new functions in the errors package, namely errors.Unwrap, errors.Is, and errors.As, to simplify unwrapping and inspecting wrapped errors. (The second sketch at the end of this article shows these helpers in action.)

These were some of the updates in Go 1.13. To read the entire list of features, check out the official release notes.

What's new in programming languages

The Julia team shares its finalized release process with the community
Introducing Nushell: A Rust-based shell
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more
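As noted above, here is a minimal sketch of the new literal syntax. It needs Go 1.13 or later, and every printed value is easy to verify by hand:

package main

import "fmt"

func main() {
    fmt.Println(0b1011)       // binary integer literal: prints 11
    fmt.Println(0o660)        // octal integer literal: prints 432
    fmt.Println(0x1.0p-2)     // hexadecimal floating-point literal, 1.0 * 2^-2: prints 0.25
    fmt.Println(3i)           // imaginary literal: prints (0+3i)
    fmt.Println(1_000_000)    // underscores as digit separators: prints 1000000
    fmt.Println(0b_1010_0110) // a separator may also follow the prefix: prints 166
}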
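And here is a short sketch of error wrapping with the new %w verb and the errors helpers. The config path is made up for illustration; opening a missing file is just a convenient way to produce an error worth wrapping:

package main

import (
    "errors"
    "fmt"
    "os"
)

// readConfig wraps any failure with %w so callers can still inspect the cause.
func readConfig(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return fmt.Errorf("reading config %q: %w", path, err)
    }
    f.Close()
    return nil
}

func main() {
    err := readConfig("/no/such/file.conf") // hypothetical path
    fmt.Println(err)                        // the wrapped message, with added context

    // errors.Is walks the Unwrap chain looking for a sentinel error.
    fmt.Println(errors.Is(err, os.ErrNotExist)) // true

    // errors.As walks the chain looking for a concrete error type.
    var pathErr *os.PathError
    if errors.As(err, &pathErr) {
        fmt.Println("failing path:", pathErr.Path)
    }

    // errors.Unwrap peels off exactly one layer of wrapping.
    fmt.Println(errors.Unwrap(err))
}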

Implementing memory management with Golang's garbage collector

Packt Editorial Staff
03 Sep 2019
10 min read
Did you ever wonder how bulk messages are pushed in real time that fast? How is it possible? A low-latency garbage collector (GC) plays an important role in this. In this article, we present ways to look at certain parameters to implement memory management with the Golang GC. Garbage collection is the process of freeing up memory space that is not being used. In other words, the GC sees which objects are out of scope and cannot be referenced anymore, and frees the memory space they consume. This process happens concurrently while a Go program is running, not before or after the execution of the program. This article is an excerpt from the book Mastering Go - Third Edition by Mihalis Tsoukalos. Mihalis runs through the nuances of Go, with deep guides to types and structures, packages, concurrency, network programming, compiler design, optimization, and more.

Implementing the Golang GC

The Go standard library offers functions that allow you to study the operation of the GC and learn more about what the GC does secretly. These functions are illustrated in the gColl.go utility. The source code of gColl.go is presented here in chunks.

package main

import (
    "fmt"
    "runtime"
    "time"
)

You need the runtime package because it allows you to obtain information about the Go runtime system, which, among other things, includes the operation of the GC.

func printStats(mem runtime.MemStats) {
    runtime.ReadMemStats(&mem)
    fmt.Println("mem.Alloc:", mem.Alloc)
    fmt.Println("mem.TotalAlloc:", mem.TotalAlloc)
    fmt.Println("mem.HeapAlloc:", mem.HeapAlloc)
    fmt.Println("mem.NumGC:", mem.NumGC, "\n")
}

The purpose of the printStats() function is to avoid writing the same Go code all the time. The runtime.ReadMemStats() call gets the latest garbage collection statistics for you.

func main() {
    var mem runtime.MemStats
    printStats(mem)

    for i := 0; i < 10; i++ {
        // Allocating 50,000,000 bytes
        s := make([]byte, 50000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
    }
    printStats(mem)

In this part, we have a for loop that creates ten byte slices of 50,000,000 bytes each. The reason for this is that by allocating large amounts of memory, we can trigger the GC.

    for i := 0; i < 10; i++ {
        // Allocating 100,000,000 bytes
        s := make([]byte, 100000000)
        if s == nil {
            fmt.Println("Operation failed!")
        }
        time.Sleep(5 * time.Second)
    }
    printStats(mem)
}

The last part of the program makes even bigger memory allocations - this time, each byte slice has 100,000,000 bytes. Running gColl.go on a macOS Big Sur machine with 24 GB of RAM produces the following kind of output:

$ go run gColl.go
mem.Alloc: 124616
mem.TotalAlloc: 124616
mem.HeapAlloc: 124616
mem.NumGC: 0

mem.Alloc: 50124368
mem.TotalAlloc: 500175120
mem.HeapAlloc: 50124368
mem.NumGC: 9

mem.Alloc: 122536
mem.TotalAlloc: 1500257968
mem.HeapAlloc: 122536
mem.NumGC: 19

The value of mem.Alloc is the bytes of allocated heap objects; "allocated" covers all the objects that the GC has not yet freed. mem.TotalAlloc shows the cumulative bytes allocated for heap objects; this number does not decrease when objects are freed, which means that it keeps increasing. Therefore, it shows the total number of bytes allocated for heap objects during program execution. mem.HeapAlloc is the same as mem.Alloc. Last, mem.NumGC shows the total number of completed garbage collection cycles.
The bigger that value is, the more you have to consider how you allocate memory in your code and whether there is a way to optimize it.

If you want even more verbose output regarding the operation of the GC, you can combine go run gColl.go with GODEBUG=gctrace=1. Apart from the regular program output, you get some extra metrics — this is illustrated in the following output:

$ GODEBUG=gctrace=1 go run gColl.go
gc 1 @0.021s 0%: 0.020+0.32+0.015 ms clock, 0.16+0.17/0.33/0.22+0.12 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 2 @0.041s 0%: 0.074+0.32+0.003 ms clock, 0.59+0.087/0.37/0.45+0.030 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
. . .
gc 18 @40.152s 0%: 0.065+0.14+0.013 ms clock, 0.52+0/0.12/0.042+0.10 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
gc 19 @45.160s 0%: 0.028+0.12+0.003 ms clock, 0.22+0/0.13/0.081+0.028 ms cpu, 95->95->0 MB, 96 MB goal, 8 P
mem.Alloc: 120672
mem.TotalAlloc: 1500256376
mem.HeapAlloc: 120672
mem.NumGC: 19

Now, let us explain the 95->95->0 MB triplet in the previous line of output. The first value (95) is the heap size when the GC is about to run. The second value (95) is the heap size when the GC ends its operation. The last value (0) is the size of the live heap.

Go garbage collection is based on the tricolor algorithm

The operation of the Go GC is based on the tricolor algorithm, which is the subject of this subsection. Note that the tricolor algorithm is not unique to Go and can be used in other programming languages as well. Strictly speaking, the official name for the algorithm used in Go is the tricolor mark-and-sweep algorithm. It can work concurrently with the program and uses a write barrier. This means that when a Go program runs, the Go scheduler is responsible for the scheduling of both the application and the GC, as if the Go scheduler has to deal with a regular application with multiple goroutines!

The core idea behind this algorithm came from Edsger W. Dijkstra, Leslie Lamport, A. J. Martin, C. S. Scholten, and E. F. M. Steffens and was first illustrated in a paper named On-the-Fly Garbage Collection: An Exercise in Cooperation.

The primary principle behind the tricolor mark-and-sweep algorithm is that it divides the objects of the heap into three different sets according to their color, which is assigned by the algorithm. It is now time to talk about the meaning of each color set. The objects of the black set are guaranteed to have no pointers to any object of the white set. However, an object of the white set can have a pointer to an object of the black set because this has no effect on the operation of the GC. The objects of the gray set might have pointers to some objects of the white set. Finally, the objects of the white set are the candidates for garbage collection.

So, when the garbage collection begins, all objects are white, and the GC visits all the root objects and colors them gray. The roots are the objects that can be directly accessed by the application, which includes global variables and other things on the stack. These objects mostly depend on the Go code of a program. After that, the GC picks a gray object, makes it black, and starts looking at whether that object has pointers to other objects of the white set or not. Therefore, when an object of the gray set is scanned for pointers to other objects, it is colored black. If that scan discovers that this particular object has one or more pointers to a white object, it puts that white object in the gray set. This process keeps going for as long as objects exist in the gray set.
After that, the objects in the white set are unreachable and their memory space can be reused. Therefore, at this point, the elements of the white set are said to be garbage collected.

Please note that no object can go directly from the black set to the white set, which allows the algorithm to operate and be able to clear the objects of the white set. As mentioned before, no object of the black set can directly point to an object of the white set. Additionally, if an object of the gray set becomes unreachable at some point in a garbage collection cycle, it will not be collected in that garbage collection cycle but in the next one! Although this is not an optimal situation, it is not that bad.

During this process, the running application is called the mutator. The mutator runs a small function named the write barrier that is executed each time a pointer in the heap is modified. If the pointer of an object in the heap is modified, which means that this object is now reachable, the write barrier colors it gray and puts it in the gray set. The mutator is responsible for preserving the invariant that no element of the black set has a pointer to an element of the white set. This is accomplished with the help of the write barrier function. Failing to preserve this invariant will ruin the garbage collection process and will most likely crash your program in a pretty bad and undesirable way!

So, there are three different colors: black, white, and gray. When the algorithm begins, all objects are colored white. As the algorithm keeps going, white objects are moved into one of the other two sets. The objects that are left in the white set are the ones that are going to be cleared at some point. The next figure displays the three color sets with objects in them.

Figure 1: The Go GC represents the heap of a program as a graph

In the presented graph, you can see that while object E, which is in the white set, can access object F, it cannot be accessed by any other object because no other object points to object E, which makes it a perfect candidate for garbage collection! Additionally, objects A, B, and C are root objects and are always reachable; therefore, they cannot be garbage collected.

Graph comprehended

Can you guess what will happen next in that graph? Well, it is not that difficult to realize that the algorithm will have to process the remaining elements of the gray set, which means that both objects A and F will go to the black set. Object A will go to the black set because it is a root element, and F will go to the black set because it does not point to any other object while it is in the gray set. After object E is garbage collected, object F will become unreachable and will be garbage collected in the next cycle of the GC, because an unreachable object cannot magically become reachable again in a later iteration of the garbage collection cycle.

Note: Go garbage collection can also be applied to variables such as channels. When the GC finds out that a channel is unreachable, that is, when the channel variable cannot be accessed anymore, it will free its resources even if the channel has not been closed.

Go allows you to manually initiate a garbage collection by putting a runtime.GC() statement in your Go code. However, keep in mind that runtime.GC() will block the caller, and it might block the entire program, especially if you are running a very busy Go program with many objects.
This mainly happens because you cannot perform garbage collection while everything else is rapidly changing, as this would not give the GC the opportunity to clearly identify the members of the white, black, and gray sets. This garbage collection state is also called a garbage collection safe-point.

You can find the long and relatively advanced Go code of the GC at https://github.com/golang/go/blob/master/src/runtime/mgc.go, which you can study if you want to learn even more about the garbage collection operation. You can even make changes to that code if you are brave enough!

Understanding Go Internals: defer, panic() and recover() functions [Tutorial]
Implementing hashing algorithms in Golang [Tutorial]
Is Golang truly community driven and does it really matter?

“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett

Bhagyashree R
27 Aug 2019
10 min read
At Open Source Technology Summit (OSTS) 2019, Josh Triplett, a Principal Engineer at Intel, gave an insight into what Intel is contributing to bring the most loved language, Rust, to full parity with C. In his talk titled Intel and Rust: the Future of Systems Programming, he also spoke about the history of systems programming, how C became the "default" systems programming language, what features give Rust an edge over C, and much more.

Until this year, OSTS was Intel's closed event where the company's business and tech leaders came together to discuss the trends, technologies, and innovations that will help shape the open-source ecosystem. This year was different, as the company welcomed non-Intel attendees including media, partners, and developers for the first time. The event hosts keynotes, more than 50 technical sessions, panels, and demos covering all the open source technologies Intel is involved in. These include integrated software stacks (edge, AI, infrastructure), firmware, embedded and IoT projects, and cloud system software. This year the event took place from May 14-16 at Stevenson, Washington.

What is systems programming

Systems programming is the development and management of software that serves as a platform for other software to be built upon. System software also directly or closely interfaces with computer hardware in order to gain the necessary performance and expose abstractions. Unlike application programming, where software is created to provide services to the user, systems programming aims to produce software that provides services to the computer hardware.

Triplett broadly defines systems programming as "anything that isn't an app." It includes things like BIOS, firmware, boot loaders, operating system kernels, embedded and similar types of low-level code, and virtual machine implementations. Triplett also counts a web browser as system software, as it is more than "just an app"; browsers are actually "platforms for websites and web apps," he says.

How C became the "default" systems programming language

Previously, most system software, including BIOS, boot loaders, and firmware, was written in Assembly. In the 1960s, experiments to bring hardware support to high-level languages began, resulting in the creation of languages such as PL/S, BLISS, BCPL, and extended ALGOL. Then, in the 1970s, Dennis Ritchie created the C programming language for the Unix operating system. Derived from the typeless B programming language, C was packed with powerful high-level functionality and detailed features that were well suited for writing an operating system. Several UNIX components, including its kernel, were eventually rewritten in C. Many other system software products, including the Oracle database, a large portion of the Windows source code, and the Linux operating system, were all written in C. C was seeing huge adoption at this point.

But what exactly made developers comfortable moving to C? Triplett believes that in order to make this move from one language to another, developers have to be comfortable in terms of two things: features and parity. First, the language should offer "sufficiently compelling" features. "It can't just be a little bit better. It has to be substantially better to warrant the effort and engineering time needed to move," he adds. As compared to Assembly, C had a lot to offer. It had some degree of type safety, provided portability, better productivity with high-level constructs, and much more readable code.
Second, the language has to provide parity, which means developers have to be confident that it is no less capable than Assembly. He states, "It can't just be better, it also has to be no worse." In addition to being faster and able to express any type of data that Assembly could, C also had what Triplett calls an "escape hatch": it allowed developers to make the move incrementally and to mix in Assembly where required.

Triplett believes that C is now becoming what Assembly was years ago. "C is the new Assembly," he concludes. Developers are looking for a high-level language that not only addresses the problems in C that can't be fixed, but also offers other exciting features. Such a language, one compelling enough to make developers move away from C, should be memory safe, provide automatic memory management, security, and much more.

"Any language that wants to be better than C has to offer a lot more than just protection from buffer overflows if it's actually going to be a compelling alternative. People care about usability and productivity. They care about writing code that is self-explanatory, which accomplishes more work in less code. It also needs to address security issues. Usability and productivity go hand in hand with security. The less code you need to write to accomplish something, the less chance you have of introducing bugs, security bugs or otherwise," he explains.

Comparing Rust with C

Back in 2006, Graydon Hoare, a Mozilla employee, started writing Rust as a personal project. In 2009, Mozilla started sponsoring the project and also expanded the team to drive further development of the language. One of the reasons why Mozilla got interested is that Firefox was written in more than 4 million lines of C++ code and had quite a few highly critical vulnerabilities. Rust was built with safety and concurrency in mind, making it the perfect choice for rewriting many components of Firefox under Project Quantum. Mozilla is also using Rust to develop Servo, an HTML rendering engine that will eventually replace Firefox's rendering engine. Many other companies have also started using Rust for their projects, including Microsoft, Google, Facebook, Amazon, Dropbox, Fastly, Chef, Baidu, and more.

Rust addresses the memory management problem in C. It offers automatic memory management so that developers do not have to manually call free on every object. What sets it apart from other modern languages is that it does not have a garbage collector or a runtime system of any kind. Rust instead has the concepts of ownership, borrowing, references, and lifetimes.

"Rust has a system of declaring whether any given use of an object is the owner of that object or whether it's just borrowing that object temporarily. If you're just borrowing an object the compiler will keep track of that. It'll make sure that the original sticks around as long as you reference it. Rust makes sure that the owner of the object frees it when it's done and it inserts the call to free at compile time with no extra runtime overhead," Triplett explains.

Not having a runtime is also a plus for Rust. Triplett believes that languages that have a runtime are difficult to use as systems programming languages. He adds, "You have to initialize that runtime before you can call any code, you have to use that runtime to call functions, and the runtime itself might run extra code behind your back at unexpected times."
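To make that ownership model concrete, here is a minimal sketch of ownership and borrowing in Rust (our own illustration, not code from Triplett's talk; the function and variable names are hypothetical):

fn describe(s: &String) -> usize { // borrows s; the caller keeps ownership
    s.len()
}

fn consume(s: String) { // takes ownership; s is freed when this function returns
    println!("consumed: {}", s);
}

fn main() {
    let message = String::from("hello");     // main owns message
    let n = describe(&message);              // lend message out temporarily
    println!("{} has {} bytes", message, n); // still valid after the borrow
    consume(message);                        // ownership moves; the free happens inside consume
    // println!("{}", message);              // would not compile: message was moved
}

The compiler tracks the borrow in describe() and the move into consume() entirely at compile time, which is how Rust frees memory deterministically without a garbage collector or a runtime.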
Rust also aims to provide safe concurrent programming. The same features that make it memory safe keep track of things like which thread owns which object, which objects can be passed between threads, and which objects require acquiring locks. These features make Rust compelling enough for developers to choose for systems programming. However, coming to the second criterion, Rust does not yet have full parity with C. "Achieving parity with C is exactly what got me involved in Rust," says Triplett.

Teaching Rust about C compatible unions

Triplett's first contribution to the Rust programming language was in the form of RFC 1444, which was started in 2015 and accepted in 2016. This RFC proposed to bring native support for C-compatible unions to Rust, defined via a new "contextual keyword", union. Triplett understood the need for this proposal when he wanted to build a virtual machine in Rust and the Linux kernel interface for that, /dev/kvm, required unions. "I worked with the Rust community and with the language team to get unions into Rust and because of that work I'm actually now part of the Rust language governance team helping to evaluate and guide other changes into the language," he adds. He talked about this RFC in much detail at the very first RustConf in 2016: https://www.youtube.com/watch?v=U8Gl3RTXf88
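As a rough illustration of what RFC 1444 enabled (the type and field names below are our own, not from the RFC), a C-compatible union in Rust is declared with #[repr(C)], and reading a field is unsafe because the compiler cannot know which member was written last:

#[repr(C)] // lay the union out exactly as a C compiler would
union Value {
    int_val: u32,
    float_val: f32,
}

fn main() {
    let v = Value { float_val: 1.0 };
    // The programmer, not the compiler, must track which field is active.
    let bits = unsafe { v.int_val };
    println!("1.0f32 is stored as bit pattern {:#010x}", bits); // prints 0x3f800000
}

This matches the memory layout that C programs and kernel interfaces such as /dev/kvm expect, which is exactly why FFI-heavy code needed it.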
Support for unnamed struct and union types

Another feature that Triplett worked on is support for unnamed struct and union types in Rust. This has been a widespread C compiler extension for decades and was also included in the C11 standard. It allows developers to group and lay out fields in arbitrary ways to match the C data structures used in a Foreign Function Interface (FFI). With this proposal implemented, Rust will be able to represent such types using the same names as the C structures, without interposing artificial field names that would confuse users of well-established interfaces from existing platforms.

A stabilized support for inline Assembly in Rust

Systems programming often involves low-level manipulation and requires low-level details of the processor, such as privileged instructions. For this, Rust supports inline Assembly via the 'asm!' macro. However, it is only present in the nightly compiler and is not yet stabilized. Triplett, in collaboration with other Rust developers, is writing a proposal to introduce a more robust syntax for inline Assembly. To know more about support for inline Assembly, check out this pre-RFC.

BFLOAT16 support in Rust

Many Intel processors, including Xeon Scalable 'Cooper Lake-SP', now support BFLOAT16, a new floating-point format. This truncated 16-bit version of the 32-bit IEEE 754 single-precision floating-point format was mainly designed for deep learning. The format is also used in machine learning libraries like Tensorflow that work with huge datasets, and it makes interoperating with existing systems, functions, and storage much easier. This is why Triplett is working on adding support for BFLOAT16 in Rust, so that developers can use the full capabilities of their hardware.

FFI/C Parity Working Group

This was one of the important announcements that Triplett made. He is starting a working group that will focus on achieving full parity with C. Under this group, he aims to collaborate with both the Rust community and other Intel developers to develop the specifications for the remaining features that need to be implemented in Rust for systems programming. This group will also focus on bringing support for systems programming to the stable releases of Rust, not just the experimental nightly releases of the compiler. In last week's Reddit discussion, Triplett shared the current status of the working group: "To pre-answer one question: the FFI / C Parity working group is in the process of being launched, and hasn't quite kicked off yet. I'll be posting about it here and elsewhere when it is, along with the initial goals."

Watch Josh Triplett's full OSTS talk to know more about Intel's contribution to Rust: https://www.youtube.com/watch?v=l9hM0h6IQDo

Update: We have made the following corrections based on feedback from Josh Triplett: This year OSTS was open to Intel's partners and press. Previously, the article read 'escape patch', but it is 'escape hatch.' RFC 1444 wasn't last year, it was started in 2015 and accepted in 2016. 'dev KVM' is now corrected to '/dev/kvm'.

AMD competes with Intel by launching EPYC Rome, world's first 7 nm chip for data centers, luring in Twitter and Google
Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity

Mozilla proposes WebAssembly Interface Types to enable language interoperability

Bhagyashree R
23 Aug 2019
4 min read
WebAssembly will soon be able to use the same high-level types in Python, Rust, and Node, says Lin Clark, a Principal Research Engineer at Mozilla, with the help of a new proposal: WebAssembly Interface Types. This proposal aims to add a new set of interface types to WebAssembly that describe high-level values like strings, sequences, records, and variants.

https://twitter.com/linclark/status/1164206550010884096

Why WebAssembly Interface Types matter

Mozilla and many other companies have been putting effort into bringing WebAssembly outside the browser with projects like WASI and Fastly's Lucet. Developers also want to run WebAssembly from different source languages like Python, Ruby, and Rust. Clark believes there are three reasons why developers want to do that. First, it allows them to easily use native modules and deliver better speed to their application users. Second, they can use WebAssembly to sandbox native code for better security. Third, they can save time and maintenance cost by sharing native code across platforms.

However, this "cross-language integration" is currently very complicated. The problem is that WebAssembly only supports numbers, so things get difficult in cases like passing a string between JS and WebAssembly: you first have to convert the string into an array of numbers and then convert it back into a string. "This means the two languages can call each other's functions. But if a function takes or returns anything besides numbers, things get complicated," Clark explains. So, to get past this hurdle, you either need to write "a really hard-to-use API that only speaks in numbers" or "add glue code for every single environment you want this module to run in."

This is why Clark and her team have come up with WebAssembly Interface Types. They will allow WebAssembly modules to interoperate with modules running in their own native runtimes and with other WebAssembly modules written in different source languages, as well as to talk directly to host systems. They will achieve all of this using rich APIs and complex types.

Source: Mozilla

WebAssembly Interface Types are different from the types we have in WebAssembly today. Also, no new operations will be added to WebAssembly because of them; all operations will be performed on the concrete types on both communicating sides. Explaining how this will work, Clark wrote, "There's one key point that makes this possible: with interface types, the two sides aren't trying to share a representation. Instead, the default is to copy values between one side and the other."
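To see why glue code is needed today, consider this hedged sketch of the Rust side of a Wasm module (our own illustration, not from Clark's article; the function name is hypothetical). Because only numbers cross the boundary, a string has to arrive as a pointer and a length into the module's linear memory, and the host has to encode it there first:

// Compiled for a Wasm target such as wasm32-unknown-unknown.
#[no_mangle]
pub extern "C" fn count_chars(ptr: *const u8, len: usize) -> usize {
    // Reconstruct the string from the raw linear memory the host wrote into.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    match std::str::from_utf8(bytes) {
        Ok(s) => s.chars().count(), // only a number can flow back out
        Err(_) => 0,
    }
}

With interface types, a toolchain could instead declare that such a function takes a string outright and generate the copying at the boundary automatically, rather than every host environment hand-writing this encoding step.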
What WebAssembly developers think about this proposal

The news sparked a discussion on Hacker News. One user commented that this could prevent a lot of rewrites and duplication in the future:

"I'm very happy to see the WebIDL proposal replaced with something generalized. The article brings up an interesting point: WebAssembly really could enable seamless cross-language integration in the future. Writing a project in Rust, but really want to use that popular face detector written in Python? And maybe the niche language tokenizer written in PHP? And sprinkle ffmpeg on top, without the hassle of target-compatible compilation and worrying about use after free vulnerabilities? No problem use one of the many WASM runtimes popping up and combine all those libraries by using their pre-compiled WASM packages distributed on a package repo like WAPM, with auto-generated bindings that provide a decent API from your host language."

Another user added, "Of course, cross-language interfaces will always have tradeoffs. But we see Interface Types extending the space where the tradeoffs are worthwhile, especially in combination with wasm's sandboxing."

Some users are also unsure that this will actually work in practice. Here's what a Reddit user said: "I wonder how well this will work in practice. effectively this is attempting to be universal language interop. that is a bold goal. I suspect this will never work for complicated object graphs. maybe this is for numbers and strings only. I wonder if something like protobuf wouldn't actually be better. it looked from the graphics that memory is still copied anyway (which makes sense, eg going from a cstring to a java string), but this is still marshalling. maybe you can skip this in some cases, but is that important enough to hinge the design there?"

To get a deeper understanding of WebAssembly Interface Types, watch this explainer video by Mozilla: https://www.youtube.com/watch?time_continue=17&v=Qn_4F3foB3Q

Also, check out Lin Clark's article, WebAssembly Interface Types: Interoperate with All the Things.

Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces

Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced that it will no longer support Mercurial repositories after May 2020.

Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its beginning in 2008, and Git since October 2011. Now, after almost ten years of sharing its journey with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial held a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. On top of this, Mercurial usage on Bitbucket has seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos

The Bitbucket team recommends users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in their dedicated Community thread, where users can discuss conversion tools and migration tips, and offer troubleshooting help. For users who prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over decision to discontinue Mercurial support

There is an outrage among Mercurial users, who are extremely unhappy and sad about this decision by Bitbucket. They have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that this decision was influenced by market potential rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments:

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers.
Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for them to use Bitbucket, since GitHub is otherwise miles ahead, and that without it Bitbucket will fade:

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programmers see this as a big change because Bitbucket was the major Mercurial hosting provider, and they feel the announcement came on pretty short notice, leaving too little time for migration.

Apart from the developer community forums, users have expressed their displeasure on the Atlassian community blog as well. A team of scientists commented:

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials leaving behind a ransom note
BitBucket goes down for over an hour

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. In the Sunday keynote, Łukasz Langa, the creator of Black (the Python code formatter) and the Python 3.8 release manager, spoke about where Python could be in 2020 and why Python developers should try new browser- and mobile-friendly versions of Python.

Python is an extremely expressive language, says Łukasz. "When I first started I was amazed how much you can accomplish with just a few lines of code especially compared to Java. But there are still languages that are even more expressive and enable even more compact notation." So what makes Python special? Python is runnable pseudocode: it reads like English, and it is very elegant. "Our responsibility as developers," Łukasz mentions, "is to make Python's runnable pseudocode convenient to use for new programmers."

Python has become much bigger, more stable, and more complex in the last decade. However, the most low-hanging fruit, Łukasz says, has already been picked, and what is left is the maintenance of an increasingly fossilizing interpreter and a stunted library. This maintenance is both tedious and tricky, especially for a dynamic, interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are run either by single corporations or by multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it is truly free from shareholder pressure and market swings. It's a curse because almost the entire core developer team is volunteering their time and effort for free; the Python Software Foundation graciously funds infrastructure and events, but it does not currently employ any core developers. Since there is both "Python" and "software" right in the name of the foundation, Łukasz says he wants this to change.

"If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems developers should focus on," he adds. Python is no longer governed by a BDFL, says Łukasz: "My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go."

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to work on and contribute to the project, and organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions; it restricts the language to a subset of Python in order to enable compilation.

He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments. It is a compatible runtime for microcontrollers with very little memory (16 kilobytes of RAM and 256 kilobytes for code) and minimal computing power. He also talks about the micro:bit.

He then mentioned the many dead, dying, or defunct projects for alternative Python interpreters, including Unladen Swallow, Pyston, and IronPython, and talked about PyPy, the JIT Python compiler written in Python.
Łukasz mentions that since PyPy is written in Python 2, it is one of the most complex Python 2 applications in the industry. "This is at risk at the moment," says Łukasz, "since it's a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3." Trying to replicate CPython quirks and bugs also requires a lot of effort.

Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and the reference implementation is important for Python. He declared, "If Python stays synonymous with CPython for too long, we'll be in big trouble." This is because CPython is not available where developer trends are shifting. For the web, the lingua franca is now JavaScript. For the two biggest mobile operating systems, there are Swift, the modern take on Objective-C, and Kotlin, the modern take on Java. For VR, AR, and 3D games, there is C# provided by Unity. While Python is growing fast, it is not winning ground in two big areas: the browser and mobile. Python is also slowly losing ground in the field of systems orchestration, where Go is gaining traction. He adds, "if there were not the rise of machine learning and artificial intelligence, Python would have not survived the transition between Python 2 and Python 3."

Łukasz mentions that providing a clear, supported, official option for the client-side web is what Python needs in order to satisfy the legion of people who want to use it. He says, "For Python the programming language to reach new heights, we need a new kind of Python. One that caters to where developer trends are shifting - mobile, web, VR, AR, and 3D games." There should be more projects experimenting with Python for these platforms, especially restricted versions of the language, because they are easier to optimize.

We need a Python compiler for Web and Python on Mobile

Łukasz talked about the need to go where developer trends are shifting. He says we need a Python compiler for the web - something that compiles your Python code to the web platform directly. He also adds that to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training.

Similarly, for mobile he wants small Python applications, so that they run fast and respond quickly to user interactions. He gives the example of the Go programming language, stating how "one of Go's claims to fame is the fact that they shipped static binaries so you only have one file. You can choose to still use containers but it's not necessary; you don't have virtual ends, you don't have pip installs, and you don't have environments that you have to orchestrate."

Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting out with a familiar subset of Python that looks like Python to the user would simplify the development of a new runtime or compiler a lot, and would potentially even fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. "I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed, in fact.
However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful."

Łukasz summed up his talk by declaring, "I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. Albeit, it is not possible without multi-million dollar investments over many years."

The talk was well appreciated by Twitter users, who lauded it as 'fantastic' and 'enlightening'.

https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust