
Tech News - Web Development


Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Bhagyashree R
26 Sep 2018
2 min read
Written in C90, Wasmjit is a small, embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

- Improved performance: Wasmjit lets you run WebAssembly modules in kernel space (ring 0). System calls become normal function calls, which eliminates the user-kernel transition overhead, and the scheduling overhead of swapping page tables is avoided as well. This boosts performance for syscall-bound programs such as web servers or FUSE file systems.
- No need to run an entire browser: Wasmjit also ships a host environment for running in user space on POSIX systems, so WebAssembly modules can run without an entire browser.

What tools do you need to get started?

- A standard POSIX C development environment with cc and make
- The Emscripten SDK
- Optionally, kernel headers on Linux: the linux-headers-amd64 package on Debian, kernel-devel on Fedora

What's in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. Coming releases are planned to bring:

- Enough Emscripten host bindings to run nginx.wasm
- An interpreter
- A Rust runtime for Rust-generated wasm files
- A Go runtime for Go-generated wasm files
- An optimized x86_64 JIT
- An arm64 JIT
- A macOS kernel module

What should you consider when using this runtime?

Wasmjit uses vmalloc(), a function that allocates a contiguous region of kernel virtual address space, for code and data section allocations. Pages allocated this way can never be swapped to disk, so indiscriminate access to the /dev/wasm device could exhaust memory and leave the system vulnerable to denial-of-service attacks. To mitigate this risk, a future release will add a system-wide limit on the amount of memory the /dev/wasm device can use.

To get started with Wasmjit, check out its GitHub repository.
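The browser-free idea is easy to try in plain Node.js, which ships the standard WebAssembly API. This sketch is unrelated to Wasmjit's kernel-mode machinery; the bytes are a minimal hand-assembled module exporting an add function, shown only to illustrate running wasm outside a browser:

```javascript
// Running a WebAssembly module outside the browser with Node's built-in
// WebAssembly API. The bytes encode a minimal module exporting add(a, b).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // code body
])

// Compile and instantiate synchronously, then call the export.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes))
console.log(instance.exports.add(2, 3)) // 5
```

In Wasmjit's kernel-module scenario the same kind of module would instead be loaded through the /dev/wasm device rather than a JavaScript host.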

HAProxy shares how you can use stick tables for server persistence, threat detection, and collecting metrics

Bhagyashree R
24 Sep 2018
3 min read
Yesterday, HAProxy published an article discussing stick tables, an in-memory store introduced in 2010 that lets you track client activity across requests, enable server persistence, and collect real-time metrics. Stick tables are supported in both HAProxy Community and Enterprise Edition.

You can think of a stick table as a type of key-value store. The key is what you track across requests, such as a client IP address, and the values are counters that, for the most part, HAProxy calculates for you.

What are the common use cases of stick tables?

Stack Exchange realized that beyond their core purpose, server persistence, stick tables could be applied to many other scenarios. They sponsored further development, and stick tables have since become an incredibly powerful subsystem within HAProxy. Their main uses include:

Server persistence

Stick tables were originally introduced to solve the problem of server persistence. HTTP requests are stateless by design: each request is executed independently, without any knowledge of the requests executed before it. A stick table can store a piece of information, such as an IP address, a cookie, or a range of bytes in the request body, and associate it with a server. The next time HAProxy sees a connection carrying the same piece of information, it forwards the request to the same server. This makes it possible to track user activity across requests and adds a mechanism for storing events and categorizing them by client IP or other keys.

Bot detection

Stick tables can also defend against certain types of bot threats: request floods, login brute-force attacks, vulnerability scanners, web scrapers, slowloris attacks, and more.

Collecting metrics

With stick tables, you can collect metrics to understand what is going on in HAProxy without enabling logging and having to parse the logs. Here the Runtime API is used, which can read and analyze stick table data from the command line, a custom script, or an executable program. You can visualize this data with any dashboard of your choice, or use the fully loaded dashboard that ships with HAProxy Enterprise Edition.

These are a few of the scenarios where stick tables can be used. For a clearer understanding of stick tables and how they are used, check out the post by HAProxy.

Update: This article originally said, "Yesterday (September 2018), HAProxy announced that they are introducing stick tables." As a reader pointed out, that was incorrect: stick tables have been around since 2010. The article has been updated to reflect this.
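To make the use cases concrete, a backend combining persistence and a simple rate-based bot check might look roughly like this. The backend name, addresses, and thresholds are hypothetical, and the directives should be checked against the HAProxy configuration manual:

```
backend web_servers
    # Track clients by source IP for 10 minutes, storing their HTTP
    # request rate over a 10-second window (sizes/periods illustrative).
    stick-table type ip size 100k expire 10m store http_req_rate(10s)
    http-request track-sc0 src

    # Bot-detection sketch: deny clients exceeding 20 requests per 10s.
    http-request deny if { sc_http_req_rate(0) gt 20 }

    # Server persistence: pin each source IP to the server it first hit.
    stick on src

    server s1 192.0.2.10:80
    server s2 192.0.2.11:80
```

The same table can then be inspected from the Runtime API (for example with `show table web_servers`) to drive metrics dashboards.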

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!

Bhagyashree R
24 Sep 2018
3 min read
Last week, the Nuxt.js community announced the release of Nuxt.js 2.0 with major improvements. The release introduces a scaffolding tool, create-nuxt-app, to quickly get you started with Nuxt.js development, and upgrades to Webpack 4 (Legato) and Babel 7 for faster boot-up and re-compilation.

Nuxt.js is an open source web application framework for creating Vue.js applications. You can choose between universal, statically generated, or single-page applications.

What is new in Nuxt.js 2.0?

New features and upgrades

create-nuxt-app: To get started quickly, you can use the newly introduced create-nuxt-app tool. It includes all the Nuxt templates, such as the starter and express templates, and lets you choose an integrated server-side framework and UI framework and add the axios module.

nuxt-start and nuxt-legacy: nuxt-start starts a Nuxt.js application in production mode, while nuxt-legacy supports legacy builds of Nuxt.js for Node.js < 8.0.0.

Upgrade to Webpack 4 and Babel 7: For faster boot-up and re-compilation, this release uses Webpack 4 (Legato) and Babel 7.

ESM supported everywhere: You can now use export/import syntax in nuxt.config.js, serverMiddleware, and modules.

postcss-preset-env replaces postcss-cssnext: Due to the deprecation of cssnext, you have to use postcss-preset-env instead of postcss-cssnext.

~assets instead of ~/assets: Due to the css-loader upgrade, use ~assets instead of ~/assets for aliases in the <url> CSS data type, for example background: url("~assets/banner.svg").

Improvements

- The HTML script tag in core/renderer.js is fixed to pass W3C validation.
- The background-color property in loadingIndicator is replaced with background, allowing images and gradients for your background in SPA mode.
- Due to server/client artifact isolation, users of an external build.publicPath need to upload built content to the .nuxt/dist/client directory instead of .nuxt/dist.
- webpackbar and consola provide an improved CLI experience and better CI compatibility.
- Template literals in lodash templates are disabled.
- Better error handling if a specified plugin isn't found.

Deprecated features

- The vendor array isn't supported anymore.
- DLL support is removed because it was not stable enough.
- AggressiveSplittingPlugin is obsolete; use optimization.splitChunks.maxSize instead.
- The render.gzip option is deprecated; use render.compressor instead.

To read more about the updates, check out Nuxt's official announcement on Medium and the release notes on its GitHub repository.
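As a rough sketch of two of these changes, a nuxt.config.js that replaces the removed AggressiveSplittingPlugin and sets an external publicPath might look like this. All values are hypothetical; module.exports is used so the snippet runs standalone, though Nuxt 2.0's ESM support would equally allow writing it with export default:

```javascript
// Sketch of a Nuxt 2.0 config (hypothetical values).
const config = {
  build: {
    // Replacement for the removed AggressiveSplittingPlugin:
    optimization: {
      splitChunks: { maxSize: 300000 } // cap chunks at ~300 kB (illustrative)
    },
    // External publicPath: built client assets must now be uploaded
    // from .nuxt/dist/client to this location (placeholder domain).
    publicPath: 'https://cdn.example.com/_nuxt/'
  }
}

module.exports = config
```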

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and WebAssembly

Savia Lobo
20 Sep 2018
4 min read
Yesterday, the Next.js team announced that version 7 of its React framework is production-ready. Next.js 7 has had 26 canary releases and 3.4 million downloads so far. Along with the release, they also launched a completely redesigned nextjs.org. This version is packed with faster boot and re-compilation improvements, better error reporting, static CDN support, and much more.

Key highlights of Next.js 7

DX improvements

Next.js 7 includes many significant improvements to the build and debug pipelines. With the inclusion of webpack 4 and Babel 7 and optimizations to the codebase, Next.js can now boot up to 57% faster during development. Thanks to the new incremental compilation cache, code changes build 40% faster. While developing and building, users now see better real-time feedback with the help of webpackbar.

Better error reporting with react-error-overlay

Until now, Next.js would render just the error message and its stack trace. From this version, react-error-overlay is used to enrich the stack trace with:

- Accurate error locations for both server and client errors
- Highlighted source context
- A full, rich stack trace

react-error-overlay also makes it easy to open your text editor by clicking on a specific code block.

Upgraded compilation pipeline: webpack 4 and Babel 7

Webpack 4: This version of Next.js is powered by the latest webpack 4, with numerous improvements and bug fixes, including:

- Support for .mjs source files
- Code-splitting improvements
- Better tree-shaking (removal of unused code)

Another new feature is WebAssembly support; Next.js can even server-render WebAssembly. webpack 4 also introduces a new way of extracting CSS from bundles, mini-css-extract-plugin. @zeit/next-css, @zeit/next-less, @zeit/next-sass, and @zeit/next-stylus are now powered by mini-css-extract-plugin.

Babel 7: Next.js 7 now uses the stable version of Babel, Babel 7. For a full list of changes, head over to its release notes. Some of its main features are:

- TypeScript support (for Next.js you can use @zeit/next-typescript)
- Fragment syntax (<>) support
- babel.config.js support
- An overrides property to apply presets/plugins to only a subset of files or directories

Standardized dynamic imports

Starting with Next.js 7, import() is no longer given custom behavior, which means users get full import() support out of the box. This change is fully backwards-compatible. Making use of a dynamic component remains as simple as:

    import dynamic from 'next/dynamic'

    const MyComponent = dynamic(import('../components/my-component'))

    export default () => {
      return <div>
        <MyComponent />
      </div>
    }

Static CDN support

With Next.js 7, the directory structure of .next is changed to match the URL structure:

    https://cdn.example.com/_next/static/<buildid>/pages/index.js
    // mapped to:
    .next/static/<buildid>/pages/index.js

While the team still recommends a proxying CDN, this new structure allows users of other types of CDN to upload the .next directory to their CDN.

Smaller initial HTML payload

As Next.js pre-renders HTML, it wraps pages in a default structure with <html>, <head>, <body>, and the JavaScript files needed to render the page. This initial payload was previously around 1.62 kB. In Next.js 7 it has been optimized down to 1.50 kB, a 7.4% reduction, making your pages leaner.

React context with SSR between App and Pages

Starting from Next.js 7, there is support for the new React context API between pages/_app.js and page components. Previously it was not possible to use React context between pages on the server side, because webpack kept an internal module cache instead of using require.cache. The Next.js developers have written a custom webpack plugin that changes this behavior to share module instances between pages. In doing so, users can not only use the new React context but also reduce Next.js's memory footprint when sharing code between pages.

To know more about these and other features in detail, visit the Next.js 7 blog.

Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles

Savia Lobo
18 Sep 2018
2 min read
Mojolicious, a next-generation web framework for the Perl programming language, has been upgraded to version 8.0. Mojolicious 8.0 was announced at MojoConf in Norway, held on 6th and 7th September 2018. The release is codenamed "Supervillain" and is by far the biggest Mojolicious release to date.

Mojolicious lets users easily grow single-file prototypes into well-structured MVC web applications. It is a powerful web development toolkit that can be used for all kinds of applications. Companies such as Alibaba Group, IBM, Logitech, and Mozilla rely on Mojolicious for new code bases, and projects like Bugzilla are being ported to it.

The Mojolicious community has also decided on a few organizational changes to support its continued growth:

- All new development will be consolidated in a single GitHub organization.
- Mojolicious' official IRC channel, which has almost 200 regulars, will be moving to Freenode (#mojo on irc.freenode.net, say hi!). This will make it easier for people not yet part of the Perl community to get involved.

Some highlights of Mojolicious 8.0

Promises/A+: Mojolicious 8.0 includes Promises/A+, a new module and pattern for working with event loops. A promise represents the eventual result of an asynchronous operation.

Roles and subprocesses: Version 8.0 includes roles, a new way to extend Mojo classes. Subprocesses can now mix event loops with computationally expensive tasks.

Placeholder types and Mojo::File: Placeholder types let you avoid repetitive routes, while Mojo::File is a brand-new module for dealing with file systems.

Cpanel::JSON::XS and Mojo::Pg: With Cpanel::JSON::XS, users can process JSON much faster, and Mojo::Pg gains many new SQL::Abstract extensions for Postgres features.

To know more about Mojolicious 8.0 in detail, visit its GitHub page.

Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Melisha Dsouza
18 Sep 2018
4 min read
The Cloudflare team has introduced Cloudflare's IPFS Gateway, which makes accessing content from the InterPlanetary File System (IPFS) easy and quick without having to install and run any special software on your computer. The gateway, which supports new distributed web technologies, is hosted at cloudflare-ipfs.com. The team asserts that this will lead to highly reliable, security-enhanced web applications.

A brief gist of IPFS

When a user accesses a website, the browser tracks down the centralized repository for the website's content, sends a request from the user's computer to that origin server, and the server sends the content back. This centralization makes it impossible to keep content online once the origin server stops serving it: if the origin server goes down or the site owner decides to take the data down, the content becomes unavailable.

IPFS, on the other hand, is a distributed file system that lets users share files distributed across the computers in the networked file system. A user's content can be stored on many nodes of the network, so data can be safely backed up.

Key differences between IPFS and the traditional web

#1 Free caching and serving of content

IPFS provides free caching and serving of content: anyone can sign their computer up as a node in the system and start serving data. The traditional web, by contrast, relies on big hosting providers to store content and serve it to the rest of the web, and setting up a website with these providers costs money.

#2 Content-addressed data

IPFS uses content-addressed rather than location-addressed data. On the traditional web, when a user navigates to a website, the browser fetches data stored at the website's IP address, and the server sends back the relevant information from that IP. With IPFS, every block of data stored in the system is addressed by a cryptographic hash of its contents. When users request a piece of data, they request it by its hash, i.e. content that has a hash value of, for example, QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy.

Why is Cloudflare's IPFS Gateway important?

IPFS increases the resilience of the network. The content with the hash QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy could be stored on dozens of nodes, so if one node storing the content goes down, the network simply looks for the content on another node.

In addition to resilience, there is an automatic level of security built into the system. If the data was tampered with in transit, the hash the user computes over it will differ from the hash they asked for, so the system has a built-in way of knowing whether content has been tampered with.

Users can access any of the billions of files stored on IPFS from their browser. Using Cloudflare's gateway, they can also build a website hosted entirely on IPFS, available to users at a custom domain name. Any website connected to the IPFS gateway is provided with a free SSL certificate.

IPFS embraces a new, decentralized vision of the web. Users will be able to create static websites, containing information that cannot be censored by governments, companies, or other organizations, that are served entirely over IPFS.

To know more about this announcement, head over to Cloudflare's official blog.

Gatsby 2.0, a React based web and app generator, released with improved speeds of up to 75%

Prasad Ramesh
18 Sep 2018
2 min read
Gatsby.js, more commonly known as Gatsby, is a React-based website and app generator. It is powered by GraphQL and is widely used as a static site generator, but it's not all static: it can be viewed as a modern front-end framework, used for creating blogs, apps, ecommerce sites, and documentation. Yesterday the second major release, Gatsby 2.0, was released.

Gatsby 2.0 comes 18 months after the first major release and carries the hard work of the Gatsby core team and nearly 315 contributors. This release focuses on performance and developer experience. The highlights are 75% reduced build times and a JavaScript client runtime shrunk by 31%. Gatsby's core dependencies are also upgraded to their latest versions: webpack 4, Babel 7, and React 16.5.

Gatsby 2.0 has faster site building

The focus for v2 was heavily on improving build speeds, and there are significant improvements across the build pipeline:

- Three to four times better server-side rendering performance thanks to React 16
- Less memory usage when server-rendering pages
- Many speedups to JavaScript and CSS bundling with webpack 4
- A pull request called "hulksmash" that made many small fixes to refactor slow algorithms
- All available cores used for rendering server pages

JavaScript client runtime reduced by 31%

The core JavaScript shipped with every Gatsby site shrank by 31%; less JavaScript means faster websites. The core JavaScript size in Gatsby 1.0 was 78.5 kB and in Gatsby 2.0 it is 53.9 kB (both GZIP sizes). The reduction is largely due to hard work in the underlying libraries: React 16 decreased its code size by 30%, to 34.8 kB from 49.8 kB in React 15, and the switch from react-router to @reach/router brought a 25% smaller routing bundle, 6 kB down from 8 kB.

For a complete list of changes, visit the Gatsby blog. To know more, visit the documentation and GitHub repository.

TypeScript 3.1 RC released

Sugandha Lahoti
17 Sep 2018
2 min read
The TypeScript 3.1 release candidate is here with a few breaking changes and a showcase of what is to come in TypeScript 3.1. The RC is meant to gather feedback to ensure a smooth final release. Here are the highlights:

Support for mapped tuple and array types

Mapping over the values in a list is one of the most common patterns in programming. TypeScript 3.1 RC makes mapped object types work when iterating over tuples and arrays. This means that if you're already using existing mapped types like Partial or Required from lib.d.ts, they now automatically work on tuples and arrays too.

Properties on function declarations

Traditionally, properties on function declarations have been modeled in TypeScript using namespaces, the internal modules that organize code and support value merging, where you can add properties to classes and functions in a declarative way. But namespaces come with their own problems: ECMAScript modules have become the preferred format for organizing new code in the broader TypeScript and JavaScript community, namespaces are TypeScript-specific, and namespaces don't merge with var, let, or const declarations.

Now, in TypeScript 3.1, for any function declaration, or any const declaration initialized with a function, the type checker analyzes the containing scope to track added properties. As an added bonus, this functionality in conjunction with TypeScript 3.0's support for JSX.LibraryManagedAttributes makes migrating an untyped React codebase to TypeScript significantly easier.

Vendor-specific declarations removed

TypeScript 3.1 RC now generates parts of lib.d.ts (and other built-in declaration file libraries) using Web IDL files provided by the WHATWG DOM specification. This makes lib.d.ts easier to keep up to date; however, many vendor-specific types have been removed.

Differences in narrowing functions

The typeof foo === "function" type guard may produce different results when intersecting with union types composed of {}, Object, or unconstrained generics.

Have a look at the TypeScript roadmap for the whole picture of the release. The final release, TypeScript 3.1, is expected to ship in just a few weeks. Read more about the TypeScript 3.1 RC on the Microsoft blog.
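A brief sketch of the two headline features; the identifiers are invented for illustration, and the snippet assumes a TypeScript 3.1+ compiler:

```typescript
// 1. Properties on function declarations: the checker now tracks
//    properties assigned to a function without requiring a namespace.
function greet(name: string): string {
  return `Hello, ${name}!`
}
greet.defaultName = 'world' // type-checked as a property of greet in TS 3.1

// 2. Mapped tuple/array types: mapped types such as Partial now
//    distribute over tuple positions rather than producing a plain
//    object type, so Partial<[number, string]> behaves like [number?, string?].
type Pair = [number, string]
type LoosePair = Partial<Pair>
const p: LoosePair = [42] // the second element may be omitted
```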

Google wants web developers to embrace AMP. Great news for users, more work for developers.

Bhagyashree R
10 Sep 2018
5 min read
Reportedly, Google wants all web developers to adopt the AMP approach for their websites. The AMP project was announced by Google on October 7, 2015, and AMP pages first became available to web users in February 2016.

Mobile search is now more popular than desktop search, so it is important for web pages to appear in Google's mobile search results, and this is why AMP is not optional for web publishers: without AMP, a publisher's articles are extremely unlikely to appear in the Top Stories carousel on mobile search in Google.

What is AMP?

AMP is short for Accelerated Mobile Pages. As the name suggests, this open source project aims to provide a straightforward way to create web pages that are compelling, smooth, and load near-instantaneously. AMP consists of three components:

- AMP HTML: regular HTML with some custom AMP properties.
- AMP JS: responsible for fast rendering of your page. It implements all of AMP's best performance practices, manages resource loading, and provides the custom tags.
- AMP Cache: serves cached AMP HTML pages. It is a proxy-based content delivery network for delivering all valid AMP documents.

Why are web developers annoyed with AMP?

This is the part that infuriates developers, because they have to follow rules set by Google. Developing a website is difficult in itself, and on top of that AMP adds the extra burden of creating separate AMP versions of articles. These are some of the rules AMP pages need to follow:

- To avoid the delay JavaScript causes in page rendering, AMP only allows asynchronous JavaScript.
- Resources such as images, ads, or iframes must declare their size in the HTML so that AMP can determine each element's size and position before resources are downloaded.
- CSS must be inline, and the inline style sheet may be at most 50 kilobytes.
- All resource downloads are controlled by AMP, which downloads the currently most important resources first and prefetches lazy-loaded resources.
- Web font optimization must be kept in mind, as web fonts are very large.

Google Search Console checks your AMP pages and reports what improvements you can make to better align them with Google's restrictions; Google essentially wants full equivalency between the regular website and the AMP versions of the pages. These restrictive rules are not easy to follow. Many developers feel they have to redo all the work they already put into the normal version of the site specifically for the AMP version. Instead of maintaining two different versions, developers would be forced to build the whole site in AMP.

Why does Google want web developers to accept AMP?

It is rare to find websites that look good, perform well, and fully follow web standards, and that is a huge challenge for search engines: Google's crawlers and indexers have to process a lot of junk to find and index content on the web. Websites built entirely in AMP are fast to load, fast to crawl, and easy to understand; in short, they make Google's life much easier.

One redditor stated, in a long discussion thread, that the main problem is not AMP itself but Google treating it as special:

"The problems you're describing I believe are problems with implementation not AMP itself. The only issue I really have with AMP is actually that Google treats it special. If you treat it like a web framework where you write slightly different html and get lazy loading and tons of integrations as built in components for free, it's actually quite nice both for the user and for the programmer. The problems are that people want to put in all their normal functionality, continue trying to game SEO and ad revenue, and that Google wants to serve it themselves. If Google stopped trying to integrate AMP directly into their search results/CDN system, I'd be much more willing to support and use it. AMP itself is basically just a predefined set of web components and a limitation to not use taxing JS. You can even be partially AMP compliant and still leverage all the benefits with none of the negatives (including the fact that Google won't host it if you aren't fully compliant, I believe)."

To know more about why Google wants developers to embrace AMP, read this article: Google AMP Can Go To Hell. If you are interested in reading about how AMP makes content load quicker, check out this article: What is Google AMP and how does it work?
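A heavily simplified AMP page skeleton illustrating the async-only script and inline-CSS rules might look like this. A real, valid AMP document also needs the required viewport meta tag and the amp-boilerplate styles, which are omitted here for brevity:

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <!-- Only this asynchronous AMP runtime script is allowed -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <link rel="canonical" href="https://example.com/article.html">
  <!-- All custom CSS must be inline, capped at 50 KB -->
  <style amp-custom>h1 { color: #333; }</style>
</head>
<body>
  <h1>Hello, AMP</h1>
  <!-- Resources must declare their size up front -->
  <amp-img src="banner.jpg" width="600" height="400"></amp-img>
</body>
</html>
```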

GitHub parts ways with jQuery, adopts vanilla JS for its frontend

Bhagyashree R
07 Sep 2018
3 min read
GitHub has finally finished removing the JQuery dependency from its frontend code. This was a result of gradual decoupling from JQuery which began back in at least 2 years ago. They have chosen not to replace JQuery with yet another framework. Instead, they were able to make this transition with the help of polyfills that allowed them to use standard browser features such as, EventListener, fetch, Array.from, and more. Why GitHub chose JQuery in the beginning? Simple: GitHub started using JQuery 1.2.1 as a dependency in 2007. This enabled its web developers to create more modern and dynamic user experience. JQuery 1.2.1 allowed developers to simplify the process of DOM manipulations, define animations, and make AJAX requests. Its simple interface gave GitHub developers a base to craft extension libraries such as, pjax and facebox, which later became the building blocks for the rest of GitHub frontend. Consistent: Unlike the XMLHttpRequest interface, JQuery was consistent across browsers. GitHub in its early days chose JQuery as it allowed their small development team to quickly prototype and release new features without having to adjust code specifically for each web browser. Why they decided to remove JQuery dependency? After comparing JQuery with the rapid evolution of supported web standards in modern browsers, they observed that: CSS classname switching can be achieved using Element.classList. Visual animations can be created using CSS stylesheets without writing any JavaScript code. The addEventListeners method, which is used to attach an event handler to the document, is now stable enough for cross-platform use. $.ajax requests can be performed using the Fetch Standard. With the evolution of JavaScript, some syntactic sugar that jQuery provides has become redundant. The chaining syntax of JQuery didn’t satisfy how GitHub wanted to write code going forward. 
According to the announcement, decoupling from jQuery will allow them to:

- Rely more on web standards
- Use MDN web docs as their default reference documentation
- Maintain more resilient code in the future
- Speed up page load times and JavaScript execution

Which technology is it using now?

GitHub has moved from jQuery to vanilla JS (plain JavaScript). It now uses querySelectorAll, fetch for AJAX, and delegated-events for event handling, along with polyfills for standard DOM manipulations and Custom Elements. The adoption of Custom Elements is on the rise: they are a component model native to the browser, which means users do not have to download, parse, and compile the additional bytes of a framework. With the release of Web Components v1 in 2017, GitHub started to adopt Custom Elements on a wider scale. In the future they are also planning to use Shadow DOM.

To read more about how GitHub made this transition to standard browser features, check out their official announcement.
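The delegated-events library mentioned above relies on event delegation: one listener installed high in the tree serves every matching descendant. A hedged sketch of the pattern (closestMatch and delegate are illustrative names, not the package's actual API):

```javascript
// Walk up from `start` toward (but not including) `root`, returning the
// first node for which matchFn returns true; this walk is the heart of
// event delegation.
function closestMatch(start, root, matchFn) {
  let el = start;
  while (el && el !== root) {
    if (matchFn(el)) return el;
    el = el.parentElement;
  }
  return null;
}

// Install a single delegated listener on `root` that fires `handler`
// whenever the event originates inside a descendant matching `selector`.
function delegate(root, type, selector, handler) {
  root.addEventListener(type, (event) => {
    const match = closestMatch(event.target, root,
      (el) => typeof el.matches === 'function' && el.matches(selector));
    if (match) handler.call(match, event);
  });
}
```

Because the listener lives on the root, elements added later still trigger the handler, which suits a server-rendered site like GitHub where page fragments are swapped in and out.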
Browser based Visualization made easy with the new P5.js

Amarabha Banerjee
07 Sep 2018
3 min read
Web visualization has been one of the most interesting themes to emerge in the last four to five years. It allows developers to create insight-based apps, interactive maps, and business intelligence charts and reports, and render them right in the browser. Three.js, D3.js, and Chart.js are a few of the most popular libraries and frameworks at present. The latest addition to this list is p5.js.

p5.js is a JavaScript library that acts as a software sketchbook and allows developers to use the whole browser as a canvas. The main goal of p5.js is to make coding accessible for artists, designers, entrepreneurs, and others who want to create their own browser-based visualizations with a custom touch. The technology behind p5.js is Processing, a sketching software/language for artists. To bring a larger set of developers and designers into the fold, p5.js adopted JavaScript. You can use it with the DOM, so it is as developer-friendly as it is accessible to artists. p5.js also has add-on libraries that make it easier to interact with other HTML5 objects, including text, video, webcam, sound, and more.

You can get started with p5.js by downloading the complete setup file or the minified version from the official p5.js page. You can also start from one of the online versions of p5.js hosted on a CDN. The getting-started guide uses the Sublime Text code editor by default, but you can use any code editor of your choice. Other good editor options include Brackets, Atom, and OpenProcessing. If you are not using the p5 web editor, then Notepad++ or Eclipse might be good choices for you.

p5.js comes with options to customize mouse and touch interaction while you are drawing. Unless a particular touch behavior is declared, touch input on a mobile device is treated like mouse input, which is intuitive and practical. p5.js also allows for asynchronous JavaScript calls and functions.
Loading images, external files, and URLs is generally handled by async functions, which makes the overall process faster. There are a few variables and functions that make browser interaction easier, with many more to come:

- windowWidth / windowHeight
- displayWidth / displayHeight
- winMouseX / winMouseY
- fullscreen()

Any native JS function can be used easily with your p5.js sketch. One of the core ideas behind p5.js is that your sketch is not just the graphics canvas: you can draw using the complete length and breadth of your browser. For this reason, there is the p5.dom library, which makes it easy to interact with other HTML5 objects, including text, hyperlinks, images, input, video, audio, and the webcam. There is also a p5.sound library that provides a friendly interface to the HTML5 Web Audio API for loading, playing, and synthesizing sounds.
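A minimal sketch showing the browser-as-canvas idea (it assumes the p5.js library is loaded in the page; mapRange is a local stand-in for p5's built-in map() so the scaling logic stays visible):

```javascript
// Linear remap helper, equivalent in spirit to p5's built-in map().
function mapRange(v, inMin, inMax, outMin, outMax) {
  return outMin + ((v - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// p5.js calls setup() once; here the whole browser window becomes the canvas.
function setup() {
  createCanvas(windowWidth, windowHeight);
}

// p5.js calls draw() every frame; the circle follows the mouse and grows
// as the pointer moves right across the window.
function draw() {
  background(220);
  const d = mapRange(mouseX, 0, windowWidth, 10, 100);
  ellipse(mouseX, mouseY, d, d);
}
```

Dropping these few lines into an HTML page alongside the p5.js script is all it takes; there is no build step or framework scaffolding.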

React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!

Bhagyashree R
07 Sep 2018
3 min read
React announced its monthly release yesterday: React 16.5.0. In this release they have improved warning messages, added support for the React DevTools Profiler in React DOM, and fixed a number of bugs.

Updates in React

- A dev warning is shown if the React.forwardRef render function doesn't take exactly two arguments.
- An improved message is shown if someone passes an element to createElement by mistake.
- The onRender function is now called after mutations, and commitTime reflects the pre-mutation time.

Updates in React DOM

New additions:

- Support for the React DevTools Profiler.
- The react-dom/profiling entry point for profiling in production.
- The onAuxClick event for browsers that support it.
- The movementX and movementY fields on mouse events.
- The tangentialPressure and twist fields on pointer events.
- Support for passing booleans to the focusable SVG attribute.

Improvements:

- Improved component stack for the folder/index.js naming convention.
- Improved warning when using getDerivedStateFromProps without initialized state.
- Improved invalid textarea usage warning.
- Electron <webview> tags are now allowed without warnings.

Bug fixes:

- Incorrect data in the compositionend event when typing Korean on IE11.
- Empty values are no longer set on submit and reset buttons.
- The onSelect event not being triggered after drag and drop.
- The onClick event not working inside a portal on iOS.
- A performance issue when thousands of roots are re-rendered.
- gridArea is now treated as a unitless CSS property.
- The checked attribute not being initially set on the input.
- A crash when using dynamic children in the option tag.

Updates in React DOM Server

- Fixed a crash that happened during server render in React 16.4.1.
- Fixed a crash when setTimeout is missing.
- Fixed a crash with nullish children when using dangerouslySetInnerHTML in a selected option.
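The forwardRef warning above hinges on the render function's arity, that is, its declared parameter count. A hypothetical stand-in (not React's actual source) illustrating the two-argument check:

```javascript
// Returns true when `render` has the (props, ref) shape React expects;
// otherwise logs a warning like React 16.5's and returns false.
function forwardRefArityOk(render) {
  if (typeof render !== 'function') {
    throw new TypeError('forwardRef requires a render function');
  }
  if (render.length !== 2) {
    console.warn(
      'forwardRef render functions accept exactly two parameters: ' +
      `props and ref. Yours takes ${render.length}.`
    );
    return false;
  }
  return true;
}
```

Function.prototype.length counts declared parameters, which is why such a check can run when the component is defined, before it ever renders.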
Updates in React Test Renderer and Test Utils

- The Jest-specific ReactTestUtils.mockComponent() helper is now deprecated.
- A warning is shown when a React DOM portal is passed to ReactTestRenderer.
- Improved TestUtils error messages for a bad first argument.

Updates in React ART

- Support for DevTools has been added.

New package for scheduling (experimental)

The ReactDOMFrameScheduling module has been pulled out into a separate package for cooperatively scheduling work in a browser environment. It is used by React internally, but its public API is not finalized yet.

To see the complete list of updates in React 16.5.0, head over to their GitHub repository.

Tor Browser 8.0 powered by Firefox 60 ESR released

Melisha Dsouza
07 Sep 2018
3 min read
The Tor Project team has released Tor Browser 8.0 today. The update comes with an upgraded landing page, a new onboarding experience for new users, additional language support, and an optimized bridge-fetching technique. The Tor Browser, based on Mozilla's Extended Support Release version of the Firefox web browser, helps users anonymize their Internet connection. The browser is known for bundling data into encrypted packets before passing them through the network, thus keeping the user's identity hidden. This new version, powered by Firefox 60 ESR (Extended Support Release), is a level up from the previous Firefox 52 ESR.

3 major upgrades in Tor Browser 8.0

#1 A New Onboarding Experience

It is now really easy for new users to understand what the Tor Browser is and how to use it. The welcome tour provides users with all the information needed to get started with the Tor Browser. The 'About' section of the browser takes viewers through the aspects that make Tor different from other commonly available browsers. Users are also taken through privacy and security settings to ensure that they have a smooth experience using the browser.

Source: ghacks.net

#2 Optimized Bridge Configuration Flow

Bridge fetching has been optimized in the new version. In previous versions, users had to send an email or visit a website to request new bridges for locations where the Tor Browser is blocked because of censorship. With Tor Browser 8.0, users only have to solve a captcha in Tor Launcher to request new bridges from within the browser directly. All that has to be done is:

1. Activate the Tor button in the browser interface and select Tor Network Settings.
2. Enable the "Tor is censored in my country" checkbox on the page that opens.
3. Select "Request a bridge from torproject.org".
4. Solve the captcha displayed.

Source: ghacks.net

#3 Improved Language Support

Previous versions of Tor supported fewer languages, which meant that some users were unable to use the browser in their native language.
Tor Browser 8.0 introduces support for nine additional languages: Catalan, Irish, Indonesian, Icelandic, Norwegian, Danish, Hebrew, Swedish, and Traditional Chinese. The browser has also upgraded components and libraries to new versions, and now blocks navigator.mozAddonManager so that websites can't see it. You can read the full release announcement for more information on the upgrades introduced in Tor Browser 8.0.
Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server integration, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases ship every six months, in February and August, while minor releases may ship every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years, the longest support window available. For general releases, bug fixes are provided for six months and security fixes for a year.

Laravel Nova

Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

After upgrading, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can include breaking changes.

Email Verification

Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that ships with the framework.

Guest User Policies

In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony Dump Server

Laravel 5.7 offers integration with the dump-server command via a package by Marcel Pociot. To get started, first run the dump-server Artisan command:

php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of your browser, allowing inspection of values without mangling your HTTP response output.

Notification Localization

Notifications can now be sent in a locale other than the current language, and Laravel will remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console Testing

Laravel 5.7 makes it easy to "mock" user input for console commands using the expectsQuestion method. Additionally, you can specify the exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel release notes.

Google Chrome's 10th birthday brings in a new Chrome 69

Savia Lobo
06 Sep 2018
4 min read
On 2nd September 2018, Google Chrome celebrated its 10th birthday, and Google dressed Chrome up with a new look by launching Chrome 69. However, due to the long Labor Day weekend in the U.S. this year, it posted the update about Chrome's new look a few days later, on 4th September 2018. With more than 1 billion active users, Google Chrome has become the most-used and most-loved platform for everyday users as well as web developers. With Chrome's new version releases every six weeks, developers can stay on top of everything available, as well as all the features that have been deprecated or removed. The release includes more powerful graphics, a more powerful Omnibox, an updated password manager, improved autofill, and a number of developer-specific and security improvements.

What's new in Chrome 69?

A brand new UI for a seamless experience

Chrome 69's new look can be seen across all platforms (desktop, Android, and iOS). Users will see more rounded shapes, new icons, and a new color palette. On desktop, Google also changed the shape of its tabs, making website icons easily visible and resulting in easier navigation across tabs. On mobile, Chrome 69 includes a number of changes for faster browsing. On iOS, the toolbar has moved to the bottom for easy access. Across Chrome, Google has simplified the prompts, menus, and even the URLs in the address bar.

A lightning-fast experience

Activities such as booking travel tickets and appointments, shopping, and working through to-do lists across multiple sites at once have long been part of using Chrome. Google wants to make this experience easier and safer in the new and updated Chrome 69. Chrome can now more accurately fill in passwords, addresses, and credit card numbers for an easy checkout through online forms. All this information is saved to the user's Google account and can also be accessed directly from the Chrome toolbar.
Staying secure on the web means using strong and unique passwords for every site. When it's time to create a new password, Chrome will now generate one for the user: no more pet names or birth dates that can easily be cracked. Chrome will save the password, and the next time the user signs in, it'll be there on both laptop and phone.

Chrome 69 now with a smarter search bar

The Omnibox, placed at the top of Chrome, combines the search bar and address bar into one. In this new version, Google has made the Omnibox much faster and much smarter. The Omnibox now shows answers directly in the address bar without having to open a new tab, from rich results on public figures or sporting events to instant answers like the local weather via weather.com or the translation of a foreign word. An added benefit when two dozen tabs are open across three browser windows: users can search for a website in the Omnibox, and Chrome will tell them if it's already open and let them jump straight to it with "Switch to tab." Users will also be able to search files from their Google Drive directly in the Omnibox.

A personalized Chrome for everyone!

The new version can easily be personalized to the user's convenience. Users can now create and manage shortcuts to their favorite websites directly from the new tab page: simply open a new tab and select "Add shortcut". They can also customize the background of a newly opened tab with a favorite photograph.

Other plans by Google for Chrome

Google has launched several other features for user privacy and safety, including:

- An ad blocker to keep users safe from malicious and annoying ads.
- Moving the web to HTTPS to keep users secure online.
- Site isolation, which provides deeper defense against many types of attacks, including Spectre.
- VR and AR browsing in Chrome.
Google further plans to roll out a set of new experiments to improve Chrome's startup time, latency, memory usage, and usability. New CSS features to improve performance-tracking ability have been rolled out for Chrome's developer community. Here's a short video explaining all the new features in the new Chrome 69: https://youtu.be/WF2IjH35w8o