Tech News - Web Development

354 Articles
DAV1D 0.2.0 released with SSSE3 support, improved x86 performance and more

Amrata Joshi
05 Mar 2019
2 min read
Yesterday, the team behind dav1d released dav1d 0.2.0, the open-source AV1 video decoder. This release focuses on older desktop CPUs and mobile devices. The initial release, dav1d 0.1.0, which came out three months ago, featured hand-written AVX2 assembly that made it faster than the reference decoder on modern Intel/AMD CPUs. A stable build of dav1d 0.2.0 is yet to be released.

What's new in dav1d 0.2.0

SSSE3 support
The SSSE3 support is aimed at scaling the performance potential for older desktop CPUs. As per the Steam Hardware Survey (Feb. 2019), 97.23% of its user base supports SSSE3.

x86 performance
dav1d 0.1.0 did not support older and lower-end processors, but this release adds support for processors without AVX2. There is also NEON SIMD support for ARM hardware now, and AVX2 performance has improved by a further 1% to 2%.

Mobile: NEON
In the previous release, NEON assembly delivered a speedup of around 80% over the C code; dav1d 0.2.0 doubles that gain.

Arm64 performance
Arm64 performance has improved, with a 38% gain for single-threaded and a 53% gain for multi-threaded decoding.

32-bit Arm (Armv7)
32-bit Arm (Armv7) has also improved, as most of the assembly code could be ported fairly easily.

Major bug fixes
This release rewrites the inverse transforms to avoid overflows, and issues with un-decodable samples have been fixed.

To know more about this news, check out the official post on Medium.

dav1d 0.1.0, the AV1 decoder by VideoLAN, is here
dav1d to release soon with all features of AV1, and better performance than libaom
Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg

The Ember project announces version 3.8 of Ember.js, Ember Data, and Ember CLI

Bhagyashree R
28 Feb 2019
2 min read
Yesterday, the community behind the Ember project released version 3.8 of its three sub-projects: Ember.js, Ember Data, and Ember CLI. Along with a few bug fixes in Ember Data and Ember CLI, this release introduces two new features: an element modifier manager and an array helper.

Updates in the Ember.js web framework

Ember.js 3.8 is a long-term support candidate. The release is incremental and backward compatible.

Element modifier manager
The element modifier manager is a very low-level API that will be responsible for coordinating the lifecycle events triggered when an element modifier is invoked, installed, and updated.

Array helper
You can now create an array in a template with the new {{array}} helper introduced in Ember.js 3.8. It works much like the existing {{hash}} helper.

Deprecations
- Computed property overridability: Computed properties in Ember.js are overridable by default when no setter is defined. As this behavior is bug-prone, it has been deprecated. The 'readOnly()' modifier that prevents this behavior will be deprecated once overridability has been removed.
- @ember/object#aliasMethod: This method, which allows you to add aliases to objects defined with EmberObject, is now deprecated as it is little known and rarely used by developers.
- Component manager factory function: setComponentManager no longer requires a string to associate the custom component class with the component manager. Instead, developers can pass a factory function that produces an instance of the component manager.

Updates in Ember Data

Not many changes have been made to Ember Data in this release. Along with updating the documentation, the team has updated '_scheduleFetch' to use '_fetchRecord' for belongsTo relationships.

Updates in Ember CLI

The {{content-for}} hook has been updated so developers can use it in the same way when different types are specified, for instance, {{content-for 'head'}} {{content-for 'head-footer'}}. With this release, gitignore will ignore Yarn .pnp files.

To read the entire list of updates, visit Ember's official website.

The Ember project announces version 3.7 of Ember.js, Ember Data, and Ember CLI
The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI
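The {{array}} helper added in Ember.js 3.8 can be pictured with a plain JavaScript sketch. This is illustrative only, not Ember's internals; it just shows the values such template helpers evaluate to:

```javascript
// Illustrative sketch only — not Ember internals. The template expressions
// {{array "red" "green"}} and {{hash name="x"}} evaluate to values like these:
const array = (...items) => items;       // {{array "red" "green"}} -> ["red", "green"]
const hash = (named) => ({ ...named });  // {{hash name="x"}}       -> { name: "x" }

// In a template you might then write:
//   {{#each (array "red" "green" "blue") as |color|}} {{color}} {{/each}}
console.log(array("red", "green", "blue")); // ["red", "green", "blue"]
console.log(hash({ name: "x" }));           // { name: "x" }
```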

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Bhagyashree R
27 Feb 2019
2 min read
Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework that aims to simplify modern web development. Zero takes care of the usual project configuration for routing, bundling, and transpiling, making it easier to get started. Zero applications consist of static files and code files. Static files are all non-code files such as images, documents, and media files. Code files are parsed, bundled, and served by a builder specific to that file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features of the Zero server

Autoconfiguration
Zero eliminates the need for any configuration files in your project folder. Developers just place their code, and it is automatically compiled, bundled, and served.

File-system based routing
Routing is based on the file system. For example, if your code is placed in './api/login.js', it will be exposed at 'http://domain.com/api/login'.

Auto-dependency resolution
Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages
Zero supports code written in multiple languages. So, with Zero, you can do things like exposing your TensorFlow model as a Python API and writing user login code in Node.js, all under a single project folder.

Better error handling
Zero isolates endpoints from each other by running each of them in its own process. This ensures that if one endpoint crashes, no other component of the application is affected. For instance, if /api/login crashes, there is no effect on the /chatroom page or the /api/chat API. Crashed endpoints are also automatically restarted when the next user visits them.

To know more about the Zero server, check out its official website.
Introducing Mint, a new HTTP client for Elixir
Symfony leaves PHP-FIG, the framework interoperability group
Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

Introducing Mint, a new HTTP client for Elixir

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the team at Elixir introduced Mint, a new low-level HTTP client that provides a small and functional core. It is connection based: each connection is a single structure with an associated socket belonging to the process that started the connection.

Features of Mint

Connections
Mint's HTTP connections are managed directly in the process that starts the connection; no connection pool is used when a connection is opened. This lets users build a process structure that fits their application. Each connection has a single immutable data structure that the user manages. Mint uses "active mode" sockets, so data and events from the socket are sent as messages to the process that started the connection. The user then passes these messages to the stream/2 function, which returns the updated connection and a list of "responses". These responses are streamed back in partial response chunks.

Process-less
To many users, Mint may seem more cumbersome to use than other HTTP libraries. But by providing a low-level API without a predetermined process architecture, Mint gives flexibility to the user of the library. If a user writes GenStage pipelines, a pool of producers can fetch data from external sources via HTTP. With Mint, it is possible to have a GenStage producer manage its own connection, reducing overhead and simplifying the code.

HTTP/1 and HTTP/2
The Mint.HTTP module has a single interface for both HTTP/1 and HTTP/2 connections and also performs version negotiation on HTTPS connections. Users can also pick an HTTP version by choosing the Mint.HTTP1 or Mint.HTTP2 module directly.

Safe-by-default HTTPS
When connecting over HTTPS, Mint performs certificate verification by default. Mint also has an optional dependency on CAStore, which provides certificates from Mozilla's CA Certificate Store.

A few users are happy about this news, with one user commenting on Hacker News, “I like that Mint keeps dependencies to a minimum.” Another user commented, “I'm liking the trend of designing runtime-behaviour agnostic libraries in Elixir.”

To know more about this news, check out Mint's official blog post.

Elixir 1.8 released with new features and infrastructure improvements
Elixir 1.7, the programming language for Erlang virtual machine, releases
Elixir Basics – Foundational Steps toward Functional Programming

npm Inc. announces npm Enterprise, the first management code registry for organizations

Bhagyashree R
22 Feb 2019
2 min read
Yesterday, npm Inc., the provider of the world's largest software registry, announced npm Enterprise, which will be your company's very own npm registry. The new service is designed for private registry hosting and workflow integrations, and provides compliance features for large companies.

Bryan Bogensberger, CEO of npm Inc., said in a statement, “Approximately 100% of the world's enterprises acquire over 97% of their JavaScript from the npm Public Registry, making the introduction of npm Enterprise essential for the professionalization of JavaScript development. With npm Enterprise, we are giving JavaScript developers the npm tools they love while providing the enterprise with enhanced visibility, security, and control. The result: happiness throughout organizations everywhere.”

The npm Enterprise service comes with the following features and advantages:

- Companies get a "companyname.npme.io" website with support for industry-standard SSO authentication to control developer access and other permissions.
- Easy code discovery and sharing within a company, and secure package deployment.
- Access to unlimited Orgs and scopes. Orgs allow a team of contributors to read, write, and publish public or private scoped packages.
- Users will be able to access all the packages available in the public registry.
- Audit reports containing tables of information about security vulnerabilities in your project's dependencies. With the help of these reports, you can fix a vulnerability or troubleshoot further.
- To avoid conflicts, teams can use the unlimited namespaces npm Enterprise comes with to share and manage code.

The npm Enterprise service provides three roles: billing manager, admin user, and end user. Admin users have the most far-reaching permissions on the Enterprise instance; they manage instance settings, Orgs, users, and packages.
The billing manager is responsible for updating the payment method for your Enterprise instance.

To learn more about npm Enterprise, visit npm's official website.

npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn
npm v6 is out!
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
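A project can be pointed at such a private registry from its .npmrc. A hypothetical example using the hostname pattern from the announcement (the scope name and hostname below are made up):

```ini
; .npmrc — hypothetical: route a company scope to an npm Enterprise instance
@mycompany:registry=https://mycompany.npme.io
; everything else still resolves from the public registry
registry=https://registry.npmjs.org/
```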

Mozilla shares key takeaways from the Design Tools survey

Bhagyashree R
21 Feb 2019
2 min read
Last year in November, Victoria Wang, a UX designer at Mozilla, announced a Design Tools survey. The motivation behind the survey was to get insight into the biggest CSS and web design issues developers and designers face. She shared the results yesterday.

The survey received more than 900 responses, which revealed the following issues:

[Chart of ranked survey issues. Source: Mozilla]

One of the main takeaways was that developers and designers, irrespective of their experience level, want to better understand CSS issues like unexpected scrollbars and sizing. Cross-browser compatibility was also one of the top issues; the Firefox DevTools team is now trying to find ways to ease the pain of debugging browser differences, including auditing, hints, and a more robust responsive design tool.

Some of the mid-ranked issues included building Flexbox layouts, building with CSS Grid Layout, and ensuring accessibility. To address these, the team is planning to improve the Accessibility Panel. The lowest-ranked issues included lack of good visual/WYSIWYG tools, animations, WebGL, and SVG.

[Chart of browser-tool issues. Source: Mozilla]

The top issue developers face when working with browser tools was “No easy way to move CSS changes back to the editor”. To address it, the Mozilla team is planning to add export options to the Changes Panel and to introduce DOM breakpoints.

You can read more about the Design Tools survey on Mozilla's official website.

Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
Mozilla releases Firefox 65 with support for AV1, enhanced tracking protection, and more!
Introducing R-Factor, a refactoring tool for React and Redux

Bhagyashree R
19 Feb 2019
2 min read
Yesterday, Kamil Mielnik, a JavaScript developer who specializes in the React technology stack, introduced R-Factor, a new automated refactoring tool for React and Redux. Just as C# and Java programmers are spoiled with a variety of refactoring tools, JavaScript developers can now use this tool to save time: with R-Factor, React developers no longer have to write very common code manipulations by hand. R-Factor does not break your code, keeps your formatting, and can refactor a file in a reasonable amount of time.

It comes with a set of 20 automated refactorings: 10 for React components, 8 for Redux, and two others. It also provides 16 configuration options with which you can match your code formatting, naming, and other preferences. Though the initial goal of the tool was only to cover React and Redux refactoring, in the future we could see some features going beyond React and Redux.

Following are some of the refactorings introduced:

- Add className: Adds the className prop to a component and applies it to its root JSX element.
- Convert SVG to component: Converts an SVG into a React component.
- Convert to arrow component: Turns a component into a functional component defined as an arrow function.
- Convert to function component: Converts a component into a functional component defined as a function.
- Connect: Connects a component to the Redux store with both mapStateToProps and mapDispatchToProps generated.

R-Factor is supported on Windows, Linux, and macOS, and the supported editors include Atom, Sublime Text 3, and VSCode. To use R-Factor, you need to buy a license key; before adopting it in a project, you can try R-Factor online. To know more in detail, check out the official website of R-Factor.
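The "Convert to arrow component" refactoring is the kind of transformation sketched below. Illustrative only: template strings stand in for JSX so the snippet runs without React, and the component name is made up:

```javascript
// Illustrative sketch of the shape of a "Convert to arrow component"
// refactoring. Template strings stand in for JSX so this runs without React.

// Before: a functional component defined as a function declaration
function Greeting(props) {
  return `<h1>Hello, ${props.name}!</h1>`;
}

// After: the same component defined as an arrow function
const GreetingArrow = (props) => `<h1>Hello, ${props.name}!</h1>`;

console.log(Greeting({ name: "Ada" }) === GreetingArrow({ name: "Ada" })); // true
```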
React Native 0.59 RC0 is now out with React Hooks, and more
Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
React 16.8 releases with the stable implementation of Hooks

Google to launch the early access program of the .dev domain, a dedicated space for developers

Bhagyashree R
18 Feb 2019
3 min read
Last year in November, at the Chrome Dev Summit keynote, Google introduced .dev, a domain dedicated to developers and technology. The registration process started on Feb 16, and the team is now set to launch its Early Access Program. According to the timeline shared at the Chrome Dev Summit, the Early Access Program will run from February 19 at 8:00 am PST to February 28 at 7:59 am PST. Under this program, users can register available .dev domains for an extra fee, which decreases as the General Availability phase, starting February 28, approaches. After registering, users pay a $12/year fee for a .dev domain.

In addition to offering a dedicated space for developers, the domain provides built-in security, as it is included on the HSTS (HTTP Strict Transport Security) preload list. This essentially means that all connections to .dev websites and pages will be made over HTTPS.

Looking at Google's track record of killing its products over time, some Hacker News users were a little skeptical about the service. One user commented, “I wouldn't lease the domain through Google domains. Use a different registrar --- if possible, one that you'll be able to trust. That registrar will work with the registry of the TLD, which would be google in this case, and has a much better chance of actually resolving issues than if you were a direct customer of Google Domains.”

Another user said, “They have a well-established track record of enthusiastically backing exciting new projects way outside of their core competency just to dump them like hot garbage several years later...It doesn't seem like a smart move to lease a domain from a politically active mega-monopoly that might decide to randomly become your competitor in 2 years.”

Countering this argument, one of the Google developers on the team launching .dev said, “You'll be glad to know that TLDs can't simply be discontinued like other products might be. ICANN doesn't allow it. The procedures in place preventing a live TLD from shutting down are called EBERO.”

Read more about the .dev domain on its official website.

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report

Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

Bhagyashree R
18 Feb 2019
4 min read
On Friday, a study analyzing the performance of the most commonly used ad blockers was published on WhoTracks.me. The study was motivated by the recent Manifest V3 controversy, in which Google developers were revealed to be planning an update that could cripple all ad blockers.

What update are Chrome developers introducing?

The developers are planning to introduce an alternative to the webRequest API named the declarativeNetRequest API, which limits the blocking version of the webRequest API. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. The Chrome developers listed two reasons for the change: performance and better privacy guarantees for users. The API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension; this lets Chrome handle a request synchronously.

One ad blocker maintainer has reported an issue on the Chromium bug tracker for this feature: “If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin (“uBO”) and uMatrix, can no longer exist.”

What did the study by Ghostery reveal?

The study addresses the performance argument made by the developers. For it, the Ghostery team analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. It revealed that these content blockers, except DuckDuckGo's, have only sub-millisecond median decision time per request, an overhead too small for users to notice. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches or with the help of technologies like WebAssembly.
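Where a blocking webRequest listener lets the extension run arbitrary code per request, the proposed declarativeNetRequest API has the extension declare static rules up front that Chrome matches itself. A sketch of roughly what one such rule looks like, based on the draft API (the URL filter here is made up):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```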
How did Google developers react to the study and the feedback surrounding Manifest V3?

Following the publication of the study and after reviewing the feedback, Devlin Cronin, a Software Engineer at Google, clarified that these changes are not really meant to prevent content blocking. Cronin added that the changes listed in Manifest V3 are still in the draft and design stage. In the Google group Manifest V3: Web Request Changes, Cronin said, “We are committed to preserving that ecosystem and ensuring that users can continue to customize the Chrome browser to meet their needs. This includes continuing to support extensions, including content blockers, developer tools, accessibility features, and many others. It is not, nor has it ever been, our goal to prevent or break content blocking.”

The team is not planning to remove the webRequest API. Cronin added, “In particular, there are currently no planned changes to the observational capabilities of webRequest (i.e., anything that does not modify the request).” Based on the feedback and concerns shared, the Chrome team did make some revisions, including adding support for dynamic rules to the declarativeNetRequest API. They are also planning to increase the ruleset size limit, which was 30k earlier.

Users are, however, not convinced by this clarification. One user commented on Hacker News, “Keep in mind that their story about performance has been shown to be a complete lie. There is no performance hit from using webRequest like this. This is about removing sophisticated ad blockers in order to defend Google's revenue stream, plain and simple.” Coincidentally, a Chrome 72 upgrade seems to break ad blockers in a way that they can't see or block analytics anymore if the web page uses a service worker.
https://twitter.com/jviide/status/1096947294920949760

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report
Google announces the general availability of a new API for Google Docs

How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared how they migrated a Tier 1 service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why did Deliveroo part ways with Ruby for the Dispatcher service?

The logistics team at Deliveroo uses a service called Dispatcher. This service offers an order to the optimal rider with the help of a timeline for each rider, which predicts where riders will be at a given point in time. Knowing this allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation: each computation is quick, but there are a lot of them. The Dispatcher service was first written in Ruby, the company's preferred language in the beginning. It performed fine at first, because the business was not as big as it is now; as Deliveroo grew and the number of orders increased, the Dispatcher service started taking much longer than before.

Why did they choose Rust as the replacement for Ruby?

Instead of rewriting the whole thing in Rust, the team decided to identify the bottlenecks slowing down the Dispatcher service and rewrite only those in a different programming language. They concluded it would be easier to build a native extension written in Rust and make it work with the current Ruby code. The team chose Rust because it provides high performance, like C, while being memory safe. Rust also allowed them to build dynamic libraries that can be loaded into Ruby. Additionally, some team members already had experience with Rust, and one part of the Dispatcher was already written in it.

How did they migrate from Ruby to Rust?

There are two options for calling Rust from Ruby. One is writing a dynamic library in Rust with an extern "C" interface and calling it using FFI. The second is writing a dynamic library, but using the Ruby API to register methods, so that they can be called from Ruby directly, just like any other Ruby code. The Deliveroo team chose the second approach, as there are many libraries available to make it easier, for instance ruru, rutie, and Helix. They decided to use Rutie, a recent fork of Ruru under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began the migration by replacing classes which had no dependencies on other parts of the Dispatcher with Rust implementations, adding feature flags along the way. As the APIs of the Ruby and Rust implementations were quite similar, they were able to reuse the same tests. With Rust, the overall dispatch time dropped significantly: in one of their larger zones, it went from ~4 seconds to 0.8 seconds, of which the Rust part consumed only 0.2 seconds.

Read the post shared by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.

Introducing RustPython, a Python 3 interpreter written in Rust
Rust 1.32 released with a print debugger and other changes
How has Rust and WebAssembly evolved in 2018
How you can replace a hot path in JavaScript with WebAssembly

Bhagyashree R
15 Feb 2019
5 min read
Yesterday, Das Surma, a Web Advocate at Google, shared how he and his team replaced a JavaScript hot path in the Squoosh app with WebAssembly. Squoosh is an image compression web app which allows you to compress images with a variety of codecs that have been compiled from C++ to WebAssembly. Hot paths are basically code execution paths where most of the execution time is spent. With this update, they aimed to achieve predictable performance across all browsers. Its strict typing and low-level architecture enable more optimizations during compilation. Though JavaScript can also achieve similar performance to WebAssembly, it is often difficult to stay on the fast path. What is WebAssembly? WebAssembly, also known as Wasm, provides you with a way to execute code written in different languages at near-native speed on the way. It is a low-level language with a compact binary format, which provides C/C++/Rust as the compilation target so that they can run on the web. When you compile a C or Rust code to WebAssembly, you get a .wasm file. This file contains something called “module declaration”. In addition to the binary instructions for the functions contained within, it contains all the imports the module needs from its environment and a list of exports this module provides to the host. Comparing the file size generated To narrow down the language, Surma gave an example of a JavaScript function that rotates an image by multiples of 90 degrees. This function basically iterates over every pixel of an image and copies it to a different location. This function was written in three different languages, C/C++, Rust, AssemblyScript, and was compiled to WebAssembly. C and Emscripten Emscripten is a C compiler that allows you to easily compile your C code to WebAssembly. After porting the entire JavaScript code to C and compiling it with emcc, Emscripten creates a glue code file called c.js and wasm module called c.wasm. 
The wasm module gzipped to almost 260 bytes and the c.js file was of the size 3.5 KB. Rust Rust is a programming language syntactically similar to C++. It is designed to provide better memory and thread-safety. The Rust team has introduced various tooling to the WebAssembly ecosystem, and one of them is wasm-pack. With the help of wasm-pack, developers can turn their code into modules that work out-of-the-box with bundlers like Webpack. After compiling the Rust code using wasm-pack, a 7.6 KB wasm module was generated with about 100 bytes of glue code. AssemblyScript AssemblyScript compiles a strictly-typed subset of TypeScript to WebAssembly ahead of time. It uses the same syntax as TypeScript but switches the standard library with its own. This essentially means that you can’t just compile any TypeScript to WebAssembly, but you don’t have to learn a new programming language to write WebAssembly. After installing the AssemblyScript file, with the help of the AssemblyScript/assemblyscript npm package, AssemblyScript provides with a wasm module of at least 300 bytes and no glue code. The module can directly work with vanilla WebAssembly APIs. Comparing the size of files generated by compiling the above three languages, Rust gave the biggest file. Comparing the performance To analyze the performance, the team did speed comparison per language and speed comparison per browser. They shared the results in the following two graphs: Source: Google Developers The graphs show that all the WebAssembly modules were executed in ~500ms or less, which proves that WebAssembly gives a predictable performance. Regardless of which language you choose, the variance between browsers and languages is minimal. The standard deviation of JavaScript across all browsers is ~400ms. And, the standard deviation of all our WebAssembly modules across all browsers is ~80ms. Which language you should choose if you have a JS hot path and want to make it faster with WebAssembly? 
Looking at the above results, the best choice seems to be C or AssemblyScript, but the team decided to go with Rust. They chose Rust because all the codecs shipped in Squoosh so far are compiled using Emscripten, and the team wanted to broaden their knowledge of the WebAssembly ecosystem by using a different language. They did not choose AssemblyScript because it is relatively new and its compiler is not as mature as Rust's. While the file size difference between Rust and the other languages was quite big, in reality this is not a big deal. Going by runtime performance, Rust showed a faster average across browsers than AssemblyScript, and it is more likely to produce fast code without requiring any manual optimizations.

To read more in detail, check out Surma's post on Google Developers.

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography
Creating and loading a WebAssembly module with Emscripten's glue code [Tutorial]
The elements of WebAssembly – Wat and Wasm, explained [Tutorial]
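The vanilla WebAssembly APIs that such modules plug into can be exercised without any glue code. Below is a minimal sketch using a tiny hand-assembled module that exports an add function — a stand-in for a real compiler-generated .wasm file (the module and its export name are illustrative, not Squoosh's actual code):

```javascript
// A minimal, hand-assembled wasm binary: a module declaration with no
// imports and a single export, "add", taking two i32s and returning an i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,        // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                 // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,   // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // body: i32.add
]);

// Compile and instantiate synchronously (fine for tiny modules; larger
// ones should use WebAssembly.instantiateStreaming in the browser).
const wasmModule = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(wasmModule, {});

const sum = instance.exports.add(2, 3); // 5
```

The second argument to WebAssembly.Instance is the imports object; it is empty here because this toy module declares no imports.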

Bhagyashree R
13 Feb 2019
2 min read

Bootstrap 5 to replace jQuery with vanilla JavaScript

The upcoming major version of Bootstrap, version 5, will no longer have jQuery as a dependency; it will be replaced with vanilla JavaScript. In 2017, the Bootstrap team opened a pull request with the aim of removing jQuery entirely from the Bootstrap source, and that work is now nearing completion. Under this pull request, the team has removed jQuery from 11 plugins including Util, Alert, Button, and Carousel. Using 'Data' and 'EventHandler' in unit tests is no longer supported, and Internet Explorer will not be compatible with this version. Despite these updates, developers will be able to use this version both with and without jQuery. Since this will be a major release, users can expect a few breaking changes.

Bootstrap is not alone; many other projects have been decoupling from jQuery. Last year, for example, GitHub incrementally removed jQuery from their frontend, mainly because of the rapid evolution of web standards and jQuery losing its relevance over time.

This news triggered a discussion on Hacker News, where many users were happy about the development. One user commented, "I think the reason is that many of the problems jQuery was designed to solve (DOM manipulation, cross-browser compatibility issues, AJAX, cool effects) have now been implemented as standards, either in Javascript or CSS and many developers consider the 55k minified download not worth it." Another user added, "The general argument now is that 95%+ of jQuery is now native in browsers (with arguably the remaining 5% being odd overly backward compatible quirks worth ignoring), so adding a JS dependency for them is "silly" and/or a waste of bandwidth."

To read more in detail, check out Bootstrap's GitHub repository.

jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai's SIRT researcher
GitHub parts ways with JQuery, adopts Vanilla JS for its frontend
Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
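For a sense of what dropping jQuery looks like in practice, here is a sketch of common jQuery calls next to their now-native equivalents (the selectors, class names, and endpoint below are made up for illustration):

```javascript
// $('.alert').addClass('show') -- DOM manipulation via native APIs.
function showAlerts(root) {
  root.querySelectorAll('.alert').forEach(el => el.classList.add('show'));
}

// $(document).ready(fn) -- run a callback once the DOM is parsed.
function ready(fn) {
  if (document.readyState !== 'loading') fn();
  else document.addEventListener('DOMContentLoaded', fn);
}

// $.getJSON(url, cb) -- AJAX via the native fetch API.
function getJSON(url) {
  return fetch(url).then(res => res.json());
}
```

Each of these one-liners covers a case that once required jQuery for cross-browser safety but is now standardized.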

Bhagyashree R
12 Feb 2019
3 min read

Next.js 8 releases with a serverless mode, better build-time memory usage, and more

After releasing Next.js 7 in September last year, the team behind Next.js released the production-ready Next.js 8 yesterday. This release comes with a serverless mode, reduced build-time memory usage, prefetch performance improvements, security improvements, and more. As with previous releases, all the updates are backward compatible.

The following are some of the updates Next.js 8 comes with:

Serverless mode

Serverless deployment comes with various benefits, including greater reliability, scalability, and separation of concerns by splitting an application into smaller parts, also called lambdas. To bring these benefits to Next.js users, this version comes with a serverless mode in which each page in the 'pages' directory is treated as a lambda. It also comes with low-level APIs for implementing serverless deployment.

Better build-time memory usage

The Next.js team, together with the Webpack team, has worked on improving the build performance and resource utilization of Next.js and Webpack. This collaboration has resulted in up to 16 times better memory usage with no degradation in performance. The improvement ensures that memory is released much more quickly and that no processes crash under stress.

Prefetch performance improvements

Next.js supports prefetching pages for faster navigation. Earlier, users were required to inject a 'script' tag into the document 'body', which caused an overhead while opening pages. In Next.js 8, the 'prefetch' attribute uses link rel="preload" instead of a 'script' tag, and prefetching starts after onload to allow the browser to manage resources. In addition to removing this overhead, this version also disables prefetch on slower network connections by detecting 2G internet and navigator.connection.saveData mode.

Security improvements

In this version, a new 'crossOrigin' config option is introduced to ensure that all 'script' tags have the 'cross-origin' attribute set.
With this new config option, you no longer need 'pages/_document.js' to set up cross-origin in your application. Another security improvement is the removal of inline JavaScript. In previous versions, users were required to include script-src 'unsafe-inline' in their policy to enable a Content Security Policy, because Next.js was creating an inline 'script' tag to pass data. In this version, the inline script tag is changed to a JSON tag for safe transfer to the client, which means Next.js no longer includes any inline scripts.

To read about other updates introduced in Next.js 8, check out the official announcement.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and Webassembly
16 JavaScript frameworks developers should learn in 2019
Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
Bhagyashree R
07 Feb 2019
3 min read

What to expect in Webpack 5?

Yesterday, the team behind Webpack shared the updates we will see in its upcoming version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk id algorithm, and more. For Webpack 5, the minimum supported Node.js version has been updated from 6 to 8. As this is a major release, it will come with breaking changes, and users may find that some plugins no longer work.

Expected features in Webpack 5

Removed Webpack 4 deprecated features

All the features that were deprecated in Webpack 4 have been removed in this version, so when migrating to Webpack 5, ensure that your Webpack build doesn't show any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed

All versions before Webpack 4 provided polyfills for most of the Node.js core modules, which were automatically applied once a module used any of the core modules. Polyfills make it easy to use modules written for Node.js, but they also increase the bundle size as huge modules get added to it. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithm for deterministic chunk and module IDs

Webpack 5 comes with new algorithms for long-term caching. These are enabled by default in production mode with the following configuration lines: chunkIds: "deterministic", moduleIds: "deterministic". These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose to use the old defaults, chunkIds: "size", moduleIds: "size", which generate smaller bundles but invalidate them more often for caching.

Named Chunk IDs algorithm

A named chunk id algorithm is introduced, which is enabled by default in development mode.
It gives chunks and filenames human-readable names instead of the old numeric names. The algorithm determines the chunk ID from the chunk's content, so users no longer need to use import(/* webpackChunkName: "name" */ "module") for debugging. To opt out of this feature, you can change the configuration to chunkIds: "natural".

Compiler idle and close

Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once a compiler is closed, all remaining work should be finished as fast as possible; a callback will then signal that the closing has completed.

You can read the entire changelog in the Webpack repository.

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Bhagyashree R
07 Feb 2019
2 min read

React 16.8 releases with the stable implementation of Hooks

Yesterday, Dan Abramov, one of the React developers, announced the release of React 16.8, which comes with the feature everyone was waiting for: Hooks. The feature first landed in React 16.7-alpha last year and is now available in this stable release. This stable implementation of React Hooks is available for React DOM, React DOM Server, React Test Renderer, and React Shallow Renderer. Hooks are also supported by React DevTools and the latest versions of Flow and TypeScript. Developers are recommended to enable a new lint rule called eslint-plugin-react-hooks that enforces best practices with Hooks; it will also be included in the Create React App tool by default.

What are Hooks?

At React Conf 2018, Sophie Alpert and Dan Abramov explained the current limitations in React and how they can be solved using Hooks. React Hooks are functions that allow you to "hook into" React state and other lifecycle features from function components. Hooks come with various advantages, such as making it easier to reuse stateful logic between components, split related logic, and use React without classes.

What's new in React 16.8?

Currently, Hooks do not cover all use cases for classes, but soon they will. Only two methods, getSnapshotBeforeUpdate() and componentDidCatch(), don't have Hooks API counterparts yet. A new API named ReactTestUtils.act() is introduced in this stable release; it ensures that the behavior in your tests matches what happens in the browser more closely. In a post, Dan Abramov recommended wrapping code that renders and triggers updates to components in act() calls.
Other changes include:

The useReducer Hook lazy initialization API is improved
Support for synchronous thenables is added to React.lazy()
Components are rendered twice with Hooks in Strict Mode (DEV-only), similar to class behavior
A warning is shown when returning different hooks on subsequent renders
The useImperativeMethods Hook is renamed to useImperativeHandle
The Object.is algorithm is used for comparing useState and useReducer values

To use Hooks, you need to update all the React packages to 16.8 or higher. On a side note, React Native will support Hooks starting from the React Native 0.59 release.

Read all the updates in React 16.8 on the official website.

React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React 16.x roadmap released with expected timeline for features like "Hooks", "Suspense", and "Concurrent Rendering"
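To build intuition for how a hook can "hook into" state from a plain function component, here is a toy closure-and-cursor sketch. This is emphatically not React's real implementation — just an illustration of the idea that state lives outside the function and is matched up by call order on each render:

```javascript
// Toy model of a useState-style hook: state persists in a shared slot
// array, and a cursor matches each hook call to its slot by call order.
const slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (slots[i] === undefined) slots[i] = initial;
  const setState = (value) => { slots[i] = value; };
  return [slots[i], setState];
}

// A "function component" that uses the hook.
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

// Simulate a render: the hook cursor is reset before each one.
function render(component) {
  cursor = 0;
  return component();
}

const first = render(Counter);   // count starts at 0
first.increment();               // stores count = 1 in the slot
const second = render(Counter);  // next render reads count = 1
```

This call-order dependence is also why real Hooks must not be called conditionally, and why React 16.8 warns when different hooks are returned on subsequent renders.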