
Tech News - Web Development

354 Articles

Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

Bhagyashree R
14 Nov 2018
3 min read
Yesterday, the W3C GPU for the Web Community Group introduced a new graphics shading language for the WebGPU API called Web High Level Shading Language (WHLSL, pronounced "whistle"). The language extends HLSL to provide better security and safety.

Last year, the W3C GPU for the Web Community Group was formed by engineers from Apple, Mozilla, Microsoft, Google, and others. This group is working towards bringing a low-level 3D graphics API to the Web called WebGPU. WebGPU, just like other modern 3D graphics APIs, uses shaders. Shaders are programs that take advantage of the specialized architecture of GPUs. For instance, apps designed for Metal use the Metal Shading Language, apps designed for Direct3D 12 use HLSL, and apps designed for Vulkan use SPIR-V or GLSL. That's why the WebKit team introduced WHLSL for the WebGPU API.

Here are some of the requirements WHLSL aims to fulfill:

Need for a safe shader language
Irrespective of what an application does, a shader should only be allowed to read or write data from the Web page's domain. Without this safety assurance, malicious websites could run a shader that reads pixels out of other parts of the screen, even from native apps.

Well-specified language
To ensure interoperability between browsers, a shading language for the Web must be precisely specified. Also, rendering teams often write shaders in their own custom in-house language, which are later cross-compiled to whichever language is necessary. That is why the shader language should have a reasonably small set of unambiguous grammar and type-checking rules that compiler writers can reference when emitting this language.

Translatable to other languages
As WebGPU is designed to work on top of Metal, Direct3D 12, and Vulkan, the shader should be translatable to the Metal Shading Language, HLSL (or DXIL), and SPIR-V. There should be a provision to represent the shaders in a form that is acceptable to APIs other than WebGPU.

Performant language
To provide overall improved performance, the compiler needs to run quickly, and programs produced by the compiler need to run efficiently on real GPUs.

Easy to read and write
The shader language should be easy to read and write for a developer. It should be familiar to both GPU and CPU programmers. GPU programmers are important clients as they have experience in writing shaders. As GPUs are now popularly used in fields other than rendering, including machine learning, computer vision, and neural networks, CPU programmers are also important clients.

To learn more in detail about WHLSL, check out WebKit's post.

Working with shaders in C++ to create 3D games
Torch AR, a 3D design platform for prototyping mobile AR
Bokeh 1.0 released with a new scatter, patches with holes, and testing improvements


Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs

Amrata Joshi
13 Nov 2018
2 min read
Test Pilot is an important part of Mozilla Firefox; it allows Mozilla to test out new features and tools that aim to improve the experience of Firefox users. Yesterday, the organization launched two new Test Pilot projects: Price Wise and Email Tabs. Price Wise allows users to track the price of items online, while Email Tabs makes it easier for people to share links via email.

How Price Wise works
Essentially, Price Wise is a price-tracking tool. It allows users to add certain products to a watch list; Price Wise will send notifications when there are changes in price. The extension only works for eBay, Best Buy, Amazon, Walmart, and Home Depot, but there are apparently plans to extend its usage to other retailers and eCommerce sites. As the holiday season is approaching, it makes sense for Mozilla to push it out to users. You can try it out here.

How Email Tabs works
Email Tabs is a tool which helps users send links via email. Typically, you'd need to copy and paste links into your email, but with Email Tabs, you can share from a whole list of tabs. But that's not all. Users can also choose how the content should be presented in the email. So, it could be a simple link, a screenshot, or even the full text. At the moment this only works with Gmail, but like Price Wise, Mozilla is looking to extend the roll out. You can try Email Tabs here.

Both experiments are available for anybody who is signed up to the Test Pilot program.

https://youtu.be/UpRLjTQmkW4

Mozilla previews Send, Color and Side View
Mozilla also previewed other experiments that are due for release this year. Send allows you to encrypt and share large files up to 1GB, Color allows users to customize the look of Firefox, while Side View makes comparison shopping easier, as one can look at two products without having to switch back and forth between two separate web pages.

To learn more, visit the Firefox website.

Mozilla shares how AV1, the new open source royalty-free video codec, works
Mozilla announces WebRender, the experimental renderer for Servo, is now in beta
Mozilla funds winners of the 2018 Creative Media Awards for highlighting unintended consequences of AI in society


Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native

Sugandha Lahoti
13 Nov 2018
4 min read
The 6th Chrome Dev Summit 2018 is being hosted on the 12th and 13th of this month in San Francisco. Yesterday, Day 1 of the summit was opened by Ben Galbraith, the director of Chrome, to talk about "the web platform's latest advancements and the evolving landscape." Leading web developers described their modern web experiences as well.

Major Chrome Dev Summit 2018 announcements included web.dev, a new developer resource website, and a demonstration of VisBug, a browser-based visual development tool. The summit also included a demo of a new web tool called Squoosh that can downsize, compress, and reformat images. The Chrome Dev Summit 2018 also highlighted some of the browser APIs currently in development, including Web Share Target, Wake Lock, WebHID and more. It also featured a Writable Files API currently under development, which would allow web apps to edit local files.

New web-based tools and resources

web.dev
The web.dev resource website provides an aggregation of information for modern Web APIs. It helps users monitor their sites over time to ensure that they can keep their site fast, resilient and accessible. web.dev is created in partnership with Glitch, and has a deep integration with Google's Lighthouse tool.

VisBug
Another developer tool, VisBug, helps developers easily edit a web page using a simple point-and-click and drag-and-drop interface. This is an improvement over tools like Firebug, which worked at the level of the website's source code. VisBug is currently available as a Chrome extension that can be installed from the main Chrome Web Store.

Squoosh
The Squoosh tool allows you to encode images using best-in-class codecs like MozJPEG, WebP, and OptiPNG. It works cross-browser and offline, and all codecs are supported even in a browser with no native support, using WASM. The app is able to do a 1:1 visual comparison of the original image and its compressed counterpart, to help users understand the pros and cons of each format.

Closing the gap between web and native
Google is also taking initiatives to close the gap between the web and native and make it easy for developers to build great experiences on the open web. Regarding this, Chrome will work with other browser vendors to ensure interoperability and get early developer feedback. Proposals will be submitted to the W3C Web Incubator Community Group for feedback. According to Google, this open development process will be "no different than how we develop every other web platform feature." The first initiative in this respect is the Writable Files API.

The Writable Files API
Currently under development, the Writable Files API is designed to increase the interoperability of web applications with native applications. Users can choose files or directories that a web app can interact with on the native file system, and developers don't have to use a native wrapper like Electron to ship their web app. With the Writable Files API, developers can create a simple, single-file editor that opens a file, allows the user to edit it, and saves the changes back to the same file.

People were surprised that it was Google who jumped on this process rather than Mozilla, which has already implemented versions of a lot of these APIs. A Hacker News user said, "I guess maybe not having that skin in the game anymore prevented those APIs from becoming standardized? But these are also very useful for desktop applications. Anyways, this is a great initiative, it's about time a real effort was made to close that gap."

Here's a video playlist of all the Chrome Dev Summit sessions so far. Tune into Google's livestream to follow the rest of the sessions of the day and watch this space for more exciting announcements.

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
#GoogleWalkout demanded a 'truly equitable culture for everyone'; Pichai shares a "comprehensive" plan for employees to safely report sexual harassment
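To make the single-file-editor scenario above concrete, here is a rough JavaScript sketch of what such an editor could look like. The API was only a proposal at the time, so the names used here (showOpenFilePicker, getFile, createWritable) are assumptions borrowed from later drafts of what became the File System Access API, not necessarily the exact surface discussed at the summit:

    // Hypothetical single-file text editor built on a writable-files style API.
    async function openAndEdit() {
      // Ask the user to pick a file; access is mediated by the browser.
      const [handle] = await window.showOpenFilePicker();
      const file = await handle.getFile();

      const editor = document.querySelector('textarea');
      editor.value = await file.text();

      // Write the edited contents back to the same file on demand.
      document.querySelector('#save').addEventListener('click', async () => {
        const writable = await handle.createWritable();
        await writable.write(editor.value);
        await writable.close();
      });
    }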


Basecamp 3 faces a read-only outage of nearly 5 hours

Bhagyashree R
13 Nov 2018
3 min read
Yesterday, Basecamp shared the cause behind the outage Basecamp 3 faced on November 8. The outage continued for nearly five hours, from 7:21 am CST to 12:11 pm. During this time, users were only able to access existing messages, to-do lists, and files; they were prevented from entering any new information and altering any existing information.

David Heinemeier Hansson, the creator of Ruby on Rails and founder & CTO at Basecamp, said in his post that this was the worst outage Basecamp has faced in probably 10 years: "It's bad enough that we had the worst outage at Basecamp in probably 10 years, but to know that it was avoidable is hard to swallow. And I cannot express my apologies clearly or deeply enough."

https://twitter.com/basecamp/status/1060554610241224705

Key causes behind the Basecamp 3 outage
Every activity that a user does is tracked in Basecamp's events table, whether it is posting a message, updating a to-do list, or applauding a comment. The root cause behind Basecamp going into read-only mode was its database hitting the ceiling of 2,147,483,647 on this very busy events table; 2,147,483,647 is the maximum value of a signed 32-bit integer (2^31 - 1).

Secondly, Ruby on Rails, the programming framework that Basecamp uses, updated its default for database tables in version 5.1, released in 2017. This update lifted the headroom for records from 2,147,483,647 to 9,223,372,036,854,775,807 (the 64-bit maximum, 2^63 - 1) on all tables. But the column in Basecamp's events table was still configured as an integer rather than a big integer.

The complete timeline of the outage
7:21 am CST: Basecamp ran out of ID numbers on the events table in the database, because the column was configured as an integer rather than a big integer. A regular integer runs out of numbers at 2,147,483,647, while a big integer can grow up to 9,223,372,036,854,775,807.
7:29 am CST: The team started working on a database migration to update the column type from regular integer to big integer. They then tested this fix on a staging database to make sure it was safe.
7:52 am CST: The test done on the staging database verified that the fix was correct, so they moved on to make the changes to the production database table. Due to the huge size of the production database, the migration was estimated to take about one hour and forty minutes.
10:56 am CST - 11:52 am CST: The upgrade to the database was completed, but verification of all the data and configuration updates was still required to ensure no other problems would appear once it was back online.
12:22 pm CST: After successful verification, Basecamp came back online.
12:33 pm CST: Basecamp went down again because the intense load once the application was back online overwhelmed the caching server.
12:41 pm CST: Basecamp came back online after switching over to the backup caching servers.

To read the entire update on Basecamp's outage, check out David Heinemeier Hansson's post on Medium.

GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units


HTTP-over-QUIC will be officially renamed to HTTP/3

Savia Lobo
12 Nov 2018
2 min read
The protocol called HTTP-over-QUIC will be officially renamed to HTTP/3. In a discussion on an IETF mail archive thread, Mark Nottingham, Chairman of the IETF HTTPBIS Working Group and W3C Web Services Addressing Working Group, raised the confusion between QUIC-the-transport-protocol and QUIC-the-HTTP-binding, and proposed the rename.

QUIC, a TCP replacement done over UDP, was started as an effort by Google and was then more of an "HTTP/2-encrypted-over-UDP" protocol. The QUIC Working Group in the IETF works on creating the QUIC transport protocol. According to Daniel Stenberg, lead developer of curl at Mozilla, "When the work took off in the IETF to standardize the protocol, it was split up in two layers: the transport and the HTTP parts. The idea being that this transport protocol can be used to transfer other data too and it's not just done explicitly for HTTP or HTTP-like protocols. But the name was still QUIC."

People in the community have referred to different versions of the protocol using informal names such as iQUIC and gQUIC to separate the QUIC protocols from the IETF and Google. The protocol that sends HTTP over "iQUIC" was called "hq" (HTTP-over-QUIC) for a long time.

Last week, on November 7, 2018, Dmitri Tikhonov, a programmer at LiteSpeed, announced that his company and Facebook had successfully done the first-ever interop between two HTTP/3 implementations. Here's Mike Bishop's follow-up presentation at the HTTPbis session on the topic.

https://www.youtube.com/watch?v=uVf_yyMfIPQ&feature=youtu.be&t=4956

Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Phoenix 1.4.0 is out with 'Presence javascript API', HTTP2 support, and more!
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]


Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack

Bhagyashree R
12 Nov 2018
2 min read
Last week, two security issues were reported in the nginx HTTP/2 implementation which can result in excessive memory consumption and CPU usage. Along with these, an issue was found in ngx_http_mp4_module, which can be exploited by an attacker to cause a DoS attack.

The issues in the HTTP/2 implementation occur if nginx is compiled with the ngx_http_v2_module and the http2 option of the listen directive is used in a configuration file. To exploit these two issues, attackers can send specially crafted HTTP/2 requests that lead to excessive CPU usage and memory usage, eventually triggering a DoS state. These issues affected nginx 1.9.5 - 1.15.5 and are now fixed in nginx 1.15.6 and 1.14.1.

In addition to these, a security issue was also identified in the ngx_http_mp4_module, which might allow an attacker to cause an infinite loop in a worker process, crash the worker process, or disclose its memory by using a specially crafted mp4 file. This issue only affects nginx if it is built with the ngx_http_mp4_module and the mp4 directive is used in the configuration file. The attack is only possible if an attacker is able to trigger processing of a specially crafted mp4 file with the ngx_http_mp4_module. This issue affects nginx 1.1.3+ and 1.0.7+ and is now fixed in 1.15.6 and 1.14.1.

You can read more about these security issues in nginx on its official website.

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Introducing Howler.js, a Javascript audio library with full cross-browser support

Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation

Bhagyashree R
09 Nov 2018
3 min read
On Tuesday, The Linux Foundation announced that Facebook's GraphQL project has been moved to a newly-established GraphQL Foundation, which will be hosted by the non-profit Linux Foundation. This foundation will be dedicated to enabling widespread adoption and helping accelerate the development of GraphQL and the surrounding ecosystem.

GraphQL was developed by Facebook in 2012 and was later open-sourced in 2015. It has been adopted by many companies in production, including Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest, and Yelp.

Why has the GraphQL Foundation been created?
The foundation will provide a neutral home for the community to collaborate and will encourage more participation and contribution. The community will be able to spread responsibilities and costs for infrastructure, which will help increase the overall investment. This neutral governance will also ensure equal treatment in the community.

The co-creator of GraphQL, Lee Byron, said: "As one of GraphQL's co-creators, I've been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support."

The foundation will also provide more resources for the GraphQL community, which will benefit all contributors. It will help in organizing events and working groups, formalizing governance structures, providing marketing support to the project, and handling IP and other legal issues as they arise.

The Executive Director of The Linux Foundation, Jim Zemlin, believes that this new foundation will ensure long-term support for GraphQL: "We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language."

In the next few months, The Linux Foundation, with Facebook and the GraphQL community, will be finalizing the founding members of the GraphQL Foundation. Read the full announcement on The Linux Foundation's website and also check out the GraphQL Foundation's website.

Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right
7 reasons to choose GraphQL APIs over REST for building your APIs
Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'


Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right

Bhagyashree R
08 Nov 2018
3 min read
Yesterday, Apollo introduced its Apollo GraphQL Platform for product engineering teams. It is built on Apollo's core open source GraphQL client and server and comes with additional open source devtools and cloud services. The platform is a combination of open source components, commercial extensions, and cloud services. (An architecture diagram is included in the original announcement. Source: Apollo GraphQL)

The Apollo GraphQL Platform consists of the following components:

Core open source components
Apollo Server: A JavaScript GraphQL server used to define a schema and a set of resolvers that implement each part of that schema. It supports AWS Lambda and other serverless environments.
Apollo Client: A GraphQL client that manages data and state in an application. It comes with integrations for React, React Native, Vue, Angular, and other view layers.
iOS and Android clients: These clients allow you to query a GraphQL API from native iOS and Android applications.
Apollo CLI: A command-line client that provides access to Apollo cloud services.

Cloud services
Schema registry: A registry that acts as a central source of truth for a schema. It propagates all changes and details of your data, allowing multiple teams to collaborate with full visibility and security on a single data graph.
Client registry: A registry that enables you to track each known consumer of a schema, which can include both pre-registered and ad-hoc clients.
Operation registry: A registry of all the known operations against the schema, which similarly can include both pre-registered and ad-hoc operations.
Trace warehouse: A data pipeline and storage layer that captures structured information about each GraphQL operation processed by an Apollo Server.

Apollo Gateway
The GraphQL gateway is the commercial plugin for Apollo Server. It allows multiple teams to collaborate on a single, organization-wide schema without mixing everyone's code together in a monolithic single point of failure. To do that, the gateway deploys "micro-schemas" that reference each other into a single master schema. This master schema then looks to a client just like any regular GraphQL schema.

Workflows
In addition to these components, Apollo also implements some useful workflows for managing a GraphQL API. Some of these workflows are:
Schema change validation: It checks the compatibility of a given schema against a set of previously-observed operations using the trace warehouse, operation registry, and (typically) the client registry.
Safelisting: Apollo provides an end-to-end mechanism for safelisting known clients and queries, a recommended best practice that limits production use of a GraphQL API to specific pre-arranged operations.

To read the full announcement, check out Apollo's official announcement.

Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'
7 reasons to choose GraphQL APIs over REST for building your APIs
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
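For a concrete sense of the open source core described above, here is a minimal Apollo Server sketch in the style of the apollo-server package current at the time; the schema and resolver data are made up for illustration:

    // Minimal Apollo Server: a schema plus resolvers implementing it.
    const { ApolloServer, gql } = require('apollo-server');

    // The schema describes the shape of the data graph.
    const typeDefs = gql`
      type Book {
        title: String
        author: String
      }
      type Query {
        books: [Book]
      }
    `;

    // Resolvers supply the data for each field in the schema.
    const resolvers = {
      Query: {
        books: () => [{ title: 'Example Book', author: 'Jane Doe' }],
      },
    };

    new ApolloServer({ typeDefs, resolvers })
      .listen()
      .then(({ url }) => console.log(`Server ready at ${url}`));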


Phoenix 1.4.0 is out with ‘Presence javascript API', HTTP2 support, and more!

Savia Lobo
08 Nov 2018
2 min read
Yesterday, the Phoenix web framework announced the release of its latest version, Phoenix 1.4. This release includes new features such as HTTP2 support, an improved development experience with faster compile times, new error pages, and local SSL certificate generation. The community also shipped a new and improved Presence JavaScript API.

Features in Phoenix 1.4.0

phx_new archive via hex
The mix phx.new archive can now be installed via hex, for a simpler, versioned installation experience. Existing Phoenix applications will continue to work on Elixir 1.4; however, the new phx.new archive requires Elixir 1.5+.

Support HTTP2 by making a small change
Thanks to the release of Cowboy 2, Phoenix 1.4 supports HTTP2 with a single-line change to mix.exs. One simply needs to add {:plug_cowboy, "~> 2.0"} to their deps and Phoenix will run with the Cowboy 2 adapter.

New phx.gen.cert to aid local SSL development
Most browsers require connections over SSL for HTTP2 requests; without it, they fall back to HTTP 1.1 requests. To aid local development over SSL, Phoenix now includes a new phx.gen.cert task which generates a self-signed certificate for HTTPS testing in development.

Faster development compilation
Compilation speeds have improved in the new release, thanks to contributions to plug and to compile-time changes.

New development 404 page
Phoenix's 404 page (in development) now lists the available routes for the originating router.

A new UserSocket for connection info
Access to more underlying transport information when using Phoenix channels has been a highly requested feature. The 1.4 release now provides a connect/3 UserSocket callback, which can provide connection information such as the peer IP address, host information, and X-Headers of the HTTP request for WebSocket and long-poll transports.

New 'Presence JavaScript API'
A new, backward-compatible Presence JavaScript API has been introduced to both resolve race conditions and simplify usage. Previously, multiple channel callbacks against "presence_state" and "presence_diff" events were required on the client, which dispatched to Presence.syncState and Presence.syncDiff functions. Now, the interface has been unified into a single onSync callback, and the presence object tracks its own channel callbacks and state.

To know more about Phoenix 1.4.0, visit its official website.

Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles
Web Framework Behavior Tuning
Beating jQuery: Making a Web Framework Worth its Weight in Code


Redbird, a modern reverse proxy for node

Amrata Joshi
06 Nov 2018
3 min read
The latest version of Redbird, 8.0, was released last month. Redbird is a modern reverse proxy for Node. It comes with built-in cluster, HTTP2, LetsEncrypt, and Docker support, which helps with load balancing, dynamic virtual hosts, proxying web sockets, and SSL encryption. It is a complete library for building dynamic reverse proxies with the speed and robustness of http-proxy. It is a lightweight package that includes everything needed for easy reverse routing of applications. It is useful for routing applications from different domains in one single host, and for easy handling of SSL.

What's new in Redbird?
Support for HTTP2: One can now enable HTTP2 easily by setting the HTTP2 flag to true. Note: HTTP2 requires SSL/TLS certificates.
Support for LetsEncrypt: Redbird now supports automatic generation of SSL certificates using LetsEncrypt. When using LetsEncrypt, the obtained certificates are copied to a specific path on disk; one should back them up or save them.

Features
It provides flexible and easy routing.
It also supports websockets.
Users get seamless SSL support; it automatically redirects the user from HTTP to HTTPS.
It enables automatic TLS certificate generation and renewal.
It supports load balancing following a round-robin algorithm.
It helps in registering and unregistering routes programmatically without restart, which allows zero-downtime deployments.
It helps in the automatic registration of running containers by enabling Docker support.
It enables automatic multi-process operation with the help of cluster support.
It is based on top of the rock-solid node-http-proxy.
It also offers optional logging based on bunyan.
It uses node-etcd to create proxy records automatically from an etcd cluster.

Cluster support in Redbird
Redbird supports automatic generation of a node cluster. To use the cluster support feature, one needs to specify the number of processes that one wants it to use. Redbird automatically restarts any thread that crashes, which increases reliability. If one needs NTLM support, Redbird adds the required header handler. This then registers a response handler, which makes sure that the NTLM auth header is properly split into two entries from http-proxy.

Custom resolvers in Redbird
Redbird comes with custom resolvers that let you decide how the proxy server handles requests. Custom resolvers help in path-based routing, header-based routing, and wildcard domain routing.

The install command for Redbird is npm install redbird. To read more about this news, check out the project's official page on GitHub.

Squid Proxy Server: debugging problems
How to Configure Squid Proxy Server
Squid Proxy Server: Fine Tuning to Achieve Better Performance
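As a quick illustration of the routing described above, a basic Redbird setup looks roughly like this; the hostnames and ports are placeholders:

    // Minimal Redbird reverse proxy for Node.
    const redbird = require('redbird');

    // Start the proxy listening on port 80.
    const proxy = redbird({ port: 80 });

    // Route a domain to a local backend service.
    proxy.register('example.com', 'http://localhost:8080');

    // Path-based routing: a sub-path can go to a different service.
    proxy.register('example.com/api', 'http://localhost:9000');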

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities powered by the Google Chrome browser to Node applications. Using Puppeteer, it is able to communicate with the locally installed browser instance. Puppeteer is also a Google Chrome project that comes with a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why was Carlo introduced?
Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node and Chrome V8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What can you do with Carlo?
Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. You can do the following with it:
Using the web rendering stack, visualize the dynamic state of your Node applications.
Expose additional system capabilities accessible from Node to your web applications.
Package your application into a single executable using the command-line interface, pkg.

How does it work?
Its working involves three steps:
First, Carlo checks whether Google Chrome is installed locally or not.
It then launches Google Chrome and establishes a connection to it over the process pipe.
Finally, it exposes a high-level API for rendering in Chrome.
For users who do not have Chrome installed, Carlo prints an error message. It supports the Chrome stable channel, versions 70.* onwards, and Node v7.6.0 onwards.

You can install and get started with it by executing the following command: npm i carlo

Read the full description on Carlo's GitHub repository.

Node v11.0.0 released
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
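To make the three steps above concrete, a minimal Carlo application looks roughly like the following, adapted from the pattern shown in the project's README; the file names are placeholders:

    // Minimal Carlo app: use the locally installed Chrome as a rendering surface.
    const carlo = require('carlo');

    (async () => {
      // Launch a browser window; Carlo reports an error if Chrome is not installed.
      const app = await carlo.launch();
      app.on('exit', () => process.exit());

      // Serve local files to the page and expose a Node capability to it.
      app.serveFolder(__dirname);
      await app.exposeFunction('env', () => process.env);

      // Load the entry page into the Chrome window.
      await app.load('index.html');
    })();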


Introducing Howler.js, a Javascript audio library with full cross-browser support

Bhagyashree R
01 Nov 2018
2 min read
Developed by GoldFire Studios, Howler.js is an audio library for the modern web that makes working with audio in JavaScript easy and reliable across all platforms. It defaults to the Web Audio API and falls back to HTML5 Audio to provide support for all browsers and platforms, including IE9 and Cordova. Originally, it was developed for an HTML5 game engine, but it can be used just as well for any other audio-related function in web applications.

Features of Howler.js
Single API for all audio needs: It provides a simple and consistent API to make it easier to build audio experiences in your application.
Audio sprites: For more precise playback and lower resource use, you can define and control segments of files with audio sprites.
Supports all codecs: It supports codecs such as MP3, MPEG, OPUS, OGG, OGA, WAV, AAC, CAF, M4A, MP4, WEBA, WEBM, DOLBY, and FLAC.
Auto-caching for improved performance: It automatically caches loaded sounds so that they can be reused on subsequent calls, for better performance and bandwidth.
Modular architecture: Its modular architecture helps you easily use and extend the library to add custom features.

Which browsers does it support?
Howler.js is compatible with the following:
Google Chrome 7.0+
Internet Explorer 9.0+
Firefox 4.0+
Safari 5.1.4+
Mobile Safari 6.0+
Opera 12.0+
Microsoft Edge

Read more about Howler.js on its official website and also check out its GitHub repository.

npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
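Here is a small sketch of the API in practice, assuming a couple of placeholder audio files, to show the single-API and audio-sprite ideas mentioned above:

    // Howler.js sketch: play a sound, then a named segment of a sprite file.
    import { Howl } from 'howler';

    // Howler picks the first source format the browser can play.
    const music = new Howl({
      src: ['theme.webm', 'theme.mp3']
    });
    music.play();

    // Audio sprites: named [offset, duration] segments (in milliseconds) of one file.
    const effects = new Howl({
      src: ['effects.webm', 'effects.mp3'],
      sprite: {
        laser: [0, 300],
        explosion: [400, 1000]
      }
    });
    effects.play('laser');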


React Conf 2018 highlights: Hooks, Concurrent React, and more

Bhagyashree R
30 Oct 2018
4 min read
React Conf 2018 was held on October 25-26 in Henderson, Nevada, USA. At this conference, the React team introduced Hooks, which let you use React without classes. On the second day, they spoke about time slicing and code-splitting, and introduced the React Cache and Scheduler APIs that we will see in coming releases.

Day 1: Unveiling Hooks
React Conf 2018 was kick-started by Sophie Alpert, Engineering Manager at Facebook. She highlighted that many big companies like Amazon, Apple, Facebook, and Google are using React, and that there has been a huge increase in npm downloads.

React's primary mission is to allow web developers to create great UIs. This is enabled by three properties of React:
Simplifying things that are difficult
Focusing on performance
Developer tooling

But there are still a few limitations in React that need to be addressed to achieve the mission React aims for. It doesn't provide a stateful primitive that is simpler than a class component. One of the earlier solutions to this was mixins, but they have come to be known for introducing more problems than they solve. Here are the three limitations that were discussed in the talk:

Reusing logic between multiple components: In React, sharing code is enabled by two mechanisms, higher-order components and render props. But to use them you need to restructure your component hierarchy.
Giant components: There are many cases when components start out simple but grow into an unmanageable mess of stateful logic and side effects. Very often, the lifecycle methods end up with a mix of unrelated logic. This makes it quite difficult to break these components into smaller ones, because the stateful logic is all over the place.
Confusing classes: Understanding classes in JavaScript is quite difficult. Classes in JavaScript work very differently from how they work in most languages. You have to remember to bind the event handlers. Classes also make it difficult to implement hot-reloading reliably.

In order to solve these problems in React, Dan Abramov introduced Hooks, followed by Ryan Florence demonstrating how to refactor an application to use them. Hooks let you "hook into" or use React state and other React features from function components. The biggest advantage is that Hooks don't work inside classes; they let you use React without classes.

Day 2: Concurrent rendering in React
On day 2 of React Conf, Andrew Clark spoke about concurrent rendering in React. Concurrent rendering allows developers to invest less time thinking about code and focus more on the user experience. But what exactly is concurrent rendering? Concurrent rendering can work on multiple tasks at a time, switching between them according to their priority. With concurrent rendering, you can partially render a tree without committing the result to the DOM. It does not block the main thread and is designed to solve real-world problems commonly faced by UI developers. Concurrent rendering in React is enabled by the following:

Time slicing
The basic idea of time slicing is to build a generic way to ensure that high-priority updates don't get blocked by low-priority updates. With time slicing, the rendered screen is always consistent, and we don't see visual artifacts of slow rendering causing a poor user experience. These are the advantages time slicing comes with:
Rendering is non-blocking
Multiple updates at different priorities can be coordinated
Content can be prerendered in the background without slowing down visible content

Code-splitting and lazy loading with lazy() and Suspense
You can now render a dynamic import as a regular component with the React.lazy() function. Currently, React.lazy only supports default exports; you can create an intermediate module to re-export a module that uses named exports. This ensures that tree-shaking keeps working and that you don't pull in unused components.

Until a lazily loaded component renders, we must show some fallback content to the user, for example, a loading indicator. This is done using the Suspense component. It is a way for components to suspend rendering while they load async data. It allows you to pause any state update until the data is ready, and you can add async loading to any component deep in the tree without plumbing all the props and state through your app and hoisting the logic.

The latest React 16.6 comes with these two features, lazy and Suspense. Hooks was recently released with React 16.7-alpha. In the coming releases, we will see two new APIs called React Cache and Scheduler. You can watch the demos by the React developers to understand these new concepts in more detail.

React introduces Hooks, a JavaScript function to allow using React without classes
React 16.6.0 releases with a new way of code splitting, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
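To make the two headline features above concrete, here are two small sketches; the component and variable names are illustrative. First, the useState Hook from Day 1, which gives a function component local state without a class:

    import React, { useState } from 'react';

    // A counter as a function component, using useState instead of
    // a class with this.state and this.setState.
    function Counter() {
      const [count, setCount] = useState(0);
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }

And the React.lazy/Suspense pattern shipped in React 16.6, which code-splits a component and shows fallback content while it loads:

    import React, { Suspense } from 'react';

    // The dynamic import is only fetched the first time Profile renders.
    const Profile = React.lazy(() => import('./Profile'));

    function App() {
      return (
        <Suspense fallback={<div>Loading...</div>}>
          <Profile />
        </Suspense>
      );
    }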

Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications

Bhagyashree R
30 Oct 2018
2 min read
Yesterday, Google announced that Chrome 70 now supports WebAssembly threads. The WebAssembly Community Group has been working to bring support for threads to the web, and this is a step towards that effort. Google's open source JavaScript and WebAssembly engine, V8, has implemented all the necessary support for WebAssembly threads.

Why is support for WebAssembly threads needed?
Earlier, parallelism in browsers was supported with the help of web workers. The downside of web workers is that they do not share mutable data between them; instead, they rely on message-passing for communication. On the other hand, WebAssembly threads can share the same Wasm memory. The underlying storage of the shared memory is enabled by SharedArrayBuffer, a JavaScript primitive that allows sharing the contents of a single ArrayBuffer concurrently between workers. Each WebAssembly thread runs in a web worker, but their shared Wasm memory allows them to work as fast as they do on native platforms. This means that applications which use Wasm threads are responsible for managing access to the shared memory, as in any traditional threaded application.

How you can try this support
To test WebAssembly threads, you need to turn on the experimental WebAssembly threads support in Chrome 70 onwards:
First, navigate to the chrome://flags URL in your browser.
Next, go to the experimental WebAssembly threads setting.
Now change the setting from Default to Enabled and then restart your browser.
(The original post includes screenshots of each step. Source: Google Developers)

The aforementioned steps are for development purposes. In case you are interested in testing your application out in the field, you can do that with an origin trial. Origin trials allow you to try experimental features with your users by obtaining a testing token that's tied to your domain.

You can read more in detail about the WebAssembly threads support in Chrome 70 on the Google Developers blog.

Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Testing WebAssembly modules with Jest [Tutorial]
Introducing Walt: A syntax for WebAssembly text format written 100% in JavaScript and needs no LLVM/binary toolkits
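As a rough sketch of the shared-memory model described above, assuming a module built to import its memory as env.memory (the import name and file names depend on the toolchain and are assumptions here):

    // main.js: create a memory that can be shared with workers.
    // 'shared: true' backs it with a SharedArrayBuffer; shared memories must declare a maximum.
    (async () => {
      const memory = new WebAssembly.Memory({ initial: 1, maximum: 16, shared: true });

      const { instance } = await WebAssembly.instantiateStreaming(
        fetch('threads.wasm'),            // placeholder module name
        { env: { memory } }
      );

      // The underlying SharedArrayBuffer is shared, not copied, when posted to a worker.
      const worker = new Worker('worker.js');
      worker.postMessage(memory.buffer);

      instance.exports.run();             // placeholder export
    })();

    // worker.js: view and update the same bytes, coordinating with Atomics.
    onmessage = ({ data: sharedBuffer }) => {
      const counters = new Int32Array(sharedBuffer);
      Atomics.add(counters, 0, 1);
    };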


Mozilla announces WebRender, the experimental renderer for Servo, is now in beta

Bhagyashree R
29 Oct 2018
2 min read
Last week, the Mozilla Gfx team announced that WebRender is now in beta. It is not yet released because of some blocking bugs. WebRender is an experimental renderer for Servo that draws web content like a modern game engine. It consists of a collection of shaders that very closely match CSS properties. Though WebRender is known for being extremely fast, its main focus is on making rendering smoother. It basically changes the way the rendering engine works to make it more like a 3D game engine.

What are the WebRender and Gecko changes?
In order to save GPU memory, the sizing logic for render targets is now more efficient.
It comes with improved tooling to synchronize between the WebRender and Gecko repositories.
Many incremental changes towards picture caching, including batch caching based on z id rather than prim index, removing the PrimitiveMetadata struct, and more.
Proper support for using tiled images as clip masks.
A texture corruption issue after resuming from sleep on Linux with proprietary Nvidia drivers is fixed.
A flickering issue at startup on Windows is fixed.
The backface-visibility bugs are fixed.
The z-fighting glitch with 3D transforms is fixed.
A font leak on Windows is fixed.

In the future, we will see more improvements in memory usage, the interaction between blob images and scrolling, and support for WebRender in Firefox for Android.

You can enable WebRender in Firefox Nightly by following these steps:
In about:config, set "gfx.webrender.all" to true.
After configuring, restart Firefox.

Read the official announcement on the Mozilla Gfx team blog.

Mozilla updates Firefox Focus for mobile with new features, revamped design, and Geckoview for Android
Developers of Firefox Focus set to replace Android's WebView with GeckoView
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls