Tech News - Web Development

354 Articles

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Bhagyashree R
08 May 2019
4 min read
Last week, researchers published a paper titled “Browser Fingerprinting: A survey,” which gives a detailed insight into what browser fingerprinting is and how it is being used in research and industry. The paper further discusses the current state of browser fingerprinting and the challenges surrounding it.

What is browser fingerprinting?

Browser fingerprinting refers to the technique of collecting various device-specific information through a web browser to build a device fingerprint for identification. The device-specific information may include details like your operating system, active plugins, timezone, language, screen resolution, and various other active settings. This information can be collected through a simple script running inside a browser. A server can also collect a wide variety of information from public interfaces and HTTP headers. This is a completely stateless technique, as it does not require storing any collected information inside the browser. The paper includes a table showing an example of a browser fingerprint (source: arXiv.org).

The history of browser fingerprinting

Back in 2009, Jonathan Mayer, who works as an Assistant Professor in the Computer Science Department at Princeton University, investigated whether differences in browsing environments can be exploited to deanonymize users. In his experiment, he collected the content of the navigator, screen, navigator.plugins, and navigator.mimeTypes objects of browsers. The results showed that of a total of 1,328 clients, 1,278 (96.23%) could be uniquely identified.

Following this experiment, in 2010, Peter Eckersley from the Electronic Frontier Foundation (EFF) performed the Panopticlick experiment, in which he investigated the real-world effectiveness of browser fingerprinting. For this experiment, he collected 470,161 fingerprints in the span of two weeks. This huge amount of data was collected from HTTP headers, JavaScript, and plugins like Flash or Java. He concluded that browser fingerprinting can be used to uniquely identify 83.6% of the device fingerprints he collected. This percentage shot up to 94.2% if users had enabled Flash or Java, as these plugins provided additional device information. This is the study that proved that individuals can really be identified through these details, and where the term “browser fingerprinting” was coined.

Applications of browser fingerprinting

As is the case with any technology, browser fingerprinting can be put to both negative and positive uses. By collecting browser fingerprints, one can track users without their consent or attack their device by identifying a vulnerability. Since these tracking scripts are silent and executed in the background, users will have no clue that they are being tracked.

On the positive side, with browser fingerprinting, users can be warned beforehand if their device is out of date by recommending specific updates. The technique can also be used to fight online fraud by verifying the actual content of a fingerprint. “As there are many dependencies between collected attributes, it is possible to check if a fingerprint has been tampered with or if it matches the device it is supposedly belonging to,” reads the paper. It can also be used for web authentication by verifying whether the device is genuine.

Preventing unwanted tracking by browser fingerprinting

Modifying the content of fingerprints: To prevent third parties from identifying individuals through fingerprints, we can send random or pre-defined values instead of the real ones. As third parties rely on fingerprint stability to link fingerprints to a single device, these unstable fingerprints will make it difficult for them to identify devices on the web.

Switching browsers: A device fingerprint is mainly composed of browser-specific information, so a user can use two different browsers, which will result in two different device fingerprints. This will make it difficult for a third party to track that user's browsing pattern.

Presenting the same fingerprint for all users: If all the devices on the web present the same fingerprint, there is no advantage to tracking them. This is the approach taken by the Tor Browser, distributed as the Tor Browser Bundle (TBB).

Reducing the surface of browser APIs: Another defense mechanism is decreasing the surface of browser APIs, reducing the quantity of information a tracking script can collect. This can be done by disabling plugins so that there are no additional fingerprinting vectors like Flash or Silverlight to leak extra device information.

Read the full paper to learn more.

DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting
Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
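The attribute-collection step described above can be sketched in a few lines of JavaScript. In a real browser the values would come from navigator and screen; here they are hard-coded stand-ins so the sketch is self-contained, and the hash is a simple FNV-1a, not whatever any production fingerprinting script actually uses.

```javascript
// Hard-coded stand-ins for values a real script would read from
// navigator.userAgent, navigator.language, screen.width, etc.
const attributes = {
  userAgent: "Mozilla/5.0 (X11; Linux x86_64) ...",
  language: "en-US",
  timezoneOffset: -60,
  screenResolution: "1920x1080",
  platform: "Linux x86_64",
};

// FNV-1a: a small, fast, non-cryptographic hash. Enough to show how
// many individually weak signals combine into one identifying value.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Join attributes in a stable key order so the fingerprint is deterministic.
function fingerprint(attrs) {
  const canonical = Object.keys(attrs)
    .sort()
    .map((k) => `${k}=${attrs[k]}`)
    .join("|");
  return fnv1a(canonical);
}

console.log(fingerprint(attributes));
```

Note how stability is the whole point: the same device yields the same hash on every visit, which is exactly why the defenses above attack either the attribute values or their stability.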


Angular 6 is here packed with exciting new features!

Sugandha Lahoti
04 May 2018
4 min read
Angular 6 has finally arrived! This is a major production release of Angular, the popular JavaScript framework for building web and mobile applications. This release mainly focuses on the toolchain and on making it easier for developers to migrate to future versions of Angular quickly. With this release, the major framework packages (@angular/core, @angular/common, @angular/compiler, etc.), the Angular CLI, and Angular Material + CDK are synchronizing their releases: all are releasing as 6.0.0 today. Here’s a quick rundown of all major features.

New CLI commands

Two new CLI commands have been added. The ng update command recommends updates to an application by analyzing its package.json; it helps developers adopt the right version of dependencies while keeping them in sync. The ng add command adds new capabilities to a project by using the package manager to download new dependencies and invoke an installation script. For example:

ng add @angular/pwa: converts your app into a PWA by adding an app manifest and service worker
ng add @ng-bootstrap/schematics: adds ng-bootstrap to your application
ng add @angular/material: installs and sets up Angular Material and theming, and registers new starter components with ng generate

Workspaces

CLI v6, which is part of the Angular 6 release, now supports workspaces containing multiple projects, such as multiple applications or libraries. CLI projects now use angular.json instead of .angular-cli.json for build and project configuration. The CLI also adds support for creating and building libraries with the command ng generate library <name>. This command creates a library project within the CLI workspace and configures it for testing and building.

Angular Elements

Angular 6 also comes with the first release of Angular Elements. Angular Elements allows bootstrapping Angular components within an existing Angular application by registering them as Custom Elements. This replaces the need to manually bootstrap Angular components found in static HTML content.

Angular Material + CDK Components

Angular 6 features a new tree component for displaying hierarchical data. The tree component in Angular Material and the Component Dev Kit helps in better visualization of tree structures such as a list of files. Alongside the tree, there are new badge and bottom-sheet components. Badges help display small bits of helpful information, such as unread item counts. Bottom-sheets are a special type of mobile-centric dialog, commonly used to present a list of options following an action. With the release of v6, the @angular/cdk/overlay package includes new positioning logic that helps make pop-ups that remain on-screen in all situations.

Angular Material also includes three new starter components:

Material Sidenav: generates a starter component including a toolbar with the app name and side navigation.
Material Dashboard: generates a starter dashboard component containing a dynamic grid list of cards.
Material Data Table: generates a starter data table component that is pre-configured with a data source for sorting and pagination.

Updated to use RxJS v6

Angular has been updated to use RxJS v6. RxJS v6 was introduced at ng-conf and brings several major changes, along with a backwards-compatibility package, rxjs-compat, that keeps applications working without breaking changes.

Long Term Support expansion

The Angular team has extended long-term support to all major releases starting with v4. Each major release will be supported for 18 months: around 6 months of active development followed by 12 months of critical bug fixes and security patches. A common complaint among developers about Angular has been the messy migrations from one version to another. This announcement aims to make updating from one major version to the next easier and to give bigger projects more time to plan updates.

How can you upgrade to the new version?

The update takes advantage of the new ng update tool. Here are the steps for updating: update @angular/cli, update your Angular framework packages, then update other dependencies. Check out the Angular blog for detailed release notes and steps on how to update.

ng-conf 2018 highlights, the popular angular conference
Why switch to Angular for web development – Interview with Minko Gechev
8 built-in Angular Pipes in Angular 4 that you should know
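The headline change in RxJS v6 mentioned above is the move from chained, prototype-patched operators to standalone “pipeable” operators composed with a pipe function. The real rxjs package is not used here; this is a dependency-free sketch of the composition pattern itself, with map and filter operating on plain arrays rather than Observables.

```javascript
// A minimal pipe(): feeds a value through a list of functions, left to right.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

// "Pipeable operators": each takes its configuration and returns a function.
// (In real RxJS these transform Observables; here, plain arrays.)
const map = (fn) => (arr) => arr.map(fn);
const filter = (pred) => (arr) => arr.filter(pred);

const result = pipe(
  filter((n) => n % 2 === 0),
  map((n) => n * 10)
)([1, 2, 3, 4, 5]);

console.log(result); // → [20, 40]
```

The attraction of this style, and one reason RxJS adopted it, is that unused operators are plain importable functions, so a bundler can tree-shake them away.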


D3 5.0 is out!

Sugandha Lahoti
03 Apr 2018
2 min read
D3.js, the popular JavaScript library, is now available in version 5.0. D3 5.0 introduces only a few non-backward-compatible changes. D3.js is a library for manipulating documents based on data: it combines powerful visualization components with a data-driven approach to DOM manipulation, and helps bring data to life using HTML, SVG, and CSS without restriction to a proprietary framework.

Here are the most notable changes in D3 5.0:

D3 5.0 now uses Promises instead of asynchronous callbacks to load data. Promises simplify the structure of asynchronous code, especially in modern browsers that support async and await. D3 now also uses the Fetch API instead of XMLHttpRequest: the d3-request module has been replaced by d3-fetch.

D3 5.0 also deprecates and removes the d3-queue module. Developers can use Promise.all to run a batch of asynchronous tasks in parallel, or a helper library such as p-queue to control concurrency.

D3 no longer provides the d3.schemeCategory20 categorical color schemes. It now includes d3-scale-chromatic, which implements excellent schemes from ColorBrewer, including categorical, diverging, sequential single-hue, and sequential multi-hue schemes.

D3 also provides implementations of marching squares and density estimation via d3-contour. There are two new d3-selection methods: selection.clone, for inserting clones of the selected nodes, and d3.create, for creating detached elements. In addition, D3’s package.json no longer pins exact versions of the dependent D3 modules, which fixes an issue with duplicate installs of D3 modules.

As a developer, you can be assured that the API has been very stable since the release of 4.0. The only significant breakage will be in adopting modern asynchronous patterns, i.e. Promises and Fetch.

You can download the latest version from d3.zip. The latest release can also be linked directly by copying this snippet:

<script src="https://d3js.org/d3.v5.min.js"></script>

The full list of changes and code files is available in the release notes.
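The shift from d3-queue to Promise.all can be illustrated without D3 itself. Below, loadData stands in for a d3-fetch call such as d3.json (the function name and file names are invented for the sketch); Promise.all runs the loads in parallel and resolves once all of them finish.

```javascript
// Stand-in for an async loader like d3.json(url): resolves with fake data
// after a short delay, the way a network fetch would.
function loadData(url) {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ url, rows: 3 }), 10);
  });
}

// Pre-v5 code would queue these with d3.queue().defer(...).await(...);
// with Promises, a batch of parallel loads is just Promise.all, and the
// results arrive in the same order as the requests.
Promise.all([loadData("us.json"), loadData("unemployment.csv")])
  .then(([topology, rates]) => {
    console.log(topology.url, rates.url);
  })
  .catch((err) => console.error(err));
```

In a browser supporting async/await the same batch reads even more like synchronous code: `const [topology, rates] = await Promise.all([...])`.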


Everything new in Angular 6: Angular Elements, CLI commands and more

Guest Contributor
05 Jun 2018
6 min read
Angular started as a simple frontend library. Today it has transformed into a complete framework, known simply as ‘Angular’, with continuous version progression from 2 to the recent 6. This progression added some amazing features to Angular, making the overall development process easier. Angular 6, the latest version, is packed with exciting new features for the whole Angular community. In this article we are going to cover some amazing features that arrived with Angular 6. So let’s get started!

Angular Elements

Consider a search component that we would like to have for a specific Angular application. In this application, the search component uses the input ‘bat’ to fetch results on the basis of text similarity, and a class named `SearchComponent` works beneath the app. With the advent of Angular 6, we can wrap such Angular components into custom elements. These elements are nothing but DOM elements, in our case a combination of a textbox and divs backed by JavaScript functions. Once segregated, these elements can be used independently alongside other frontend libraries like React, Vue, or plain jQuery. Custom elements are a new way to set a component apart from the ng framework and use it independently.

Ivy: the new Angular engine, version 6 onwards

Angular 6 will introduce us (in the near future) to the new Ivy engine, which contributes to great performance and a decrease in the load time of an application. Here are some important features of Ivy you need to know.

Tree shaking

Tree shaking is an optimization step that makes sure unused code is not present in your build bundle. Tree-shaking compilation is performed while executing the `ng build` command to generate the build. New to what a build or a bundle is? A build or bundle is a ready-to-go-live set of files that is deployed to the production environment.

In your Angular project there might be a component that is included but not required; assume it falls under a specific if-condition and is never executed. Normal dead-code elimination tools using static analysis work by retaining the symbols of references already present in the unbundled code, so the component that was never used unfortunately remains inside the bundle. Ivy’s instruction-based rendering is built to solve such issues: the generated code includes only the instructions that are actually required, which in turn minimizes the size of the bundle to a great extent. The new Ivy engine seems cool!

New CLI commands

With the upgrade to Angular 6, the ng CLI package provides two new commands.

ng add: As its name suggests, the ng add command gives you the capability to add a new module/package to your current application, be it rxjs, Material UI libraries, etc. Don’t get confused: it doesn’t install the package but simply adds one to your project whenever required. So if you are planning to add a third-party library to your Angular app, make sure you install it using npm and then add it using ng add. The automatic addition of such modules helps reduce development time by avoiding errors while adding a module.

ng update: The new Angular 6 CLI has the much-awaited ng update command. When run, this command yields a list of packages that need to be updated. In case they are already updated, the command simply confirms that everything is in order.

Upgrading to ng 6

A fresh Angular 6 installation is not a problem, and you can always follow https://update.angular.io/ for incorporating changes with respect to updates. Here are a few things to do if you are planning to upgrade a current project.

Make sure you are on Node.js version 8.9+. Update your Angular CLI:

//Globally
npm i -g @angular/cli
//locally
npm i @angular/cli

Once the Angular CLI has its latest code, the ng update command is available for use, so use it for updating the packages under @angular/cli as follows:

npm update @angular/cli

Update the @angular/core packages using ng update as follows:

ng update @angular/core

Angular uses rxjs for handling asynchronicity in the application. This library also needs to be updated, to rxjs 6. Here is the link for the detailed update process. Update the Angular Material library that provides beautiful UI components:

ng update @angular/material

Finally, run `ng serve` and test the new setup.

Besides all the amazing features listed above, Angular 6 provides support for rxjs 6 and TypeScript 2.7 with conditional type declarations, not to forget the service-worker package in Angular’s core. At the time of the Angular 6 launch, there were small breakages with respect to command-line commands like ng update, which are fixed by now and stable. The Angular team is already working towards some more incredible features like the new ng-compiler engine, @aiStore (an AI-powered solutions store), @mine package for bitcoins, and much more in Angular 7. Over the years, the Angular team has continued to provide dedicated support to evolve the project into one of the best that technology has to offer. With such tenacity, it looks like the whole Angular ecosystem is poised to scale even greater heights than before. I, for one, can’t wait to see what they do next in Angular!

Author Bio: Erina is an assistant professor in the computer science department of Thakur college, Mumbai. Her enthusiasm for web technologies inspires her to contribute to freelance JavaScript projects, especially on Node.js. Her research topics were SDN and IoT, which according to her create amazing solutions for various web technologies when used together. Nowadays, she focuses on blockchain and enjoys fiddling with its concepts in JavaScript.

Why switch to Angular for web development – Interview with Minko Gechev
ng-conf 2018 highlights, the popular angular conference
Getting started with Angular CLI and build your first Angular Component


How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared their experience of migrating their Tier 1 service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why did Deliveroo part ways with Ruby for the Dispatcher service?

The Logistics team at Deliveroo uses a service called Dispatcher. This service optimally offers an order to a rider, and it does this with the help of a timeline for each rider. The timeline helps predict where riders will be at a certain point in time, and knowing this allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation: though each computation is quick, there are a great many of them. The Dispatcher service was first written in Ruby, as it was the company’s preferred language in the beginning. It performed fine early on, because the business was not as big as it is now. As Deliveroo grew, the number of orders increased, and the Dispatcher service started taking much longer than before.

Why did they choose Rust as the replacement for Ruby?

Instead of rewriting the whole thing in Rust, the team decided to identify the bottlenecks that were slowing down the Dispatcher service and rewrite those in a different programming language (Rust). They concluded that it would be easier to build a native extension written in Rust and make it work with the current code written in Ruby. The team chose Rust because it provides C-like performance while being memory safe. Rust also allowed them to build dynamic libraries, which can then be loaded into Ruby. Additionally, some of their team members had experience with Rust, and one part of the Dispatcher was already written in it.

How did they migrate from Ruby to Rust?

There are two options for calling Rust from Ruby. The first is writing a dynamic library in Rust with an extern "C" interface and calling it using FFI. The second is writing a dynamic library, but using the Ruby API to register methods so that they can be called from Ruby directly, just like any other Ruby code. The Deliveroo team chose the second approach, as there are many libraries available to make it easier, for instance ruru, rutie, and Helix. The team decided to use Rutie, a recent fork of Ruru that is under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began the migration by replacing the classes which did not have any dependencies on other parts of the Dispatcher with Rust implementations, adding feature flags along the way. As the APIs of the Ruby and Rust implementations were quite similar, they were able to reuse the same tests. With the help of Rust, the overall dispatch time was reduced significantly. For instance, in one of their larger zones, it dropped from ~4 sec to 0.8 sec, of which the Rust part consumed only 0.2 seconds.

Read the post shared by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.

Introducing RustPython, a Python 3 interpreter written in Rust
Rust 1.32 released with a print debugger and other changes
How has Rust and WebAssembly evolved in 2018


GitHub plans to deprecate GitHub services and move to Webhooks in 2019

Savia Lobo
11 Dec 2018
3 min read
On April 25 this year, GitHub announced that it will be shutting down GitHub Services in order to focus on other areas of the API, such as strengthening GitHub Apps and GraphQL, and improving webhooks. According to GitHub, webhooks are much easier for both users and GitHub staff to debug on the web because of improved logging. GitHub Services has not supported new features since April 25, 2016, and it was officially deprecated on October 1st, 2018. GitHub stated that this functionality will be removed from GitHub.com on January 31st, 2019.

The main intention of GitHub Services was to allow third-party developers to submit code for integrating with their services, but this functionality has been superseded by GitHub Apps and webhooks. Since October 1st, 2018, users have been unable to add GitHub Services to any repository on GitHub.com, via the UI or API. Users can, however, continue to edit or delete existing GitHub Services.

GitHub Services vs. webhooks

The key differences between GitHub Services and webhooks include:

Configuration: GitHub Services have service-specific configuration options, while webhooks are simply configured by specifying a URL and a set of events.
Custom logic: GitHub Services can have custom logic to respond with multiple actions as part of processing a single event, while webhooks have no custom logic.
Types of requests: GitHub Services can make HTTP and non-HTTP requests, while webhooks can make HTTP requests only.

Brownouts for GitHub Services

A week-long brownout had originally been scheduled for the week of November 5th, 2018, during which any GitHub Service installed on a repository would receive no payloads, with normal operations resuming at its conclusion. The motivation behind the brownout was to let GitHub users and integrators see where GitHub Services are still being used and begin working towards migrating away from them. However, GitHub decided that a week-long brownout would be too disruptive for everyone; instead, they plan a gradual increase in brownouts until the final blackout date of January 31st, 2019, when they will permanently stop delivering all installed services’ events on GitHub.com.

As per the updated deprecation timeline: on December 12th, 2018, GitHub Service deliveries will be suspended for a full 24 hours; on January 7th, 2019, GitHub Services will be suspended for a full 7 days, with regular deliveries resuming January 14th, 2019.

Users should ensure that their repositories use the newer APIs available for handling events. The following changes have taken place since October 1st, 2018: the "Create a hook" endpoint accepted a required argument called name, which could be set to web for webhooks or to the name of any valid service; starting October 1st, this endpoint no longer requires a name to be provided, and if one is, it only accepts web as a valid value. Stricter API validation has been enforced since November 1st: name is no longer required, and requests sending an invalid value are rejected.

To learn more about this deprecation, check out Replacing GitHub Services.

GitHub introduces Content Attachments API (beta)
Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!
GitHub acquires Spectrum, a community-centric conversational platform
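For repositories still on a GitHub Service, the replacement is a plain webhook created through the API (POST /repos/:owner/:repo/hooks). A request body along these lines (the URL, secret, and event list are placeholders) is all the configuration a webhook needs, in contrast to the service-specific options described above:

```json
{
  "name": "web",
  "active": true,
  "events": ["push", "pull_request"],
  "config": {
    "url": "https://example.com/webhook",
    "content_type": "json",
    "secret": "a-shared-secret-for-signature-verification"
  }
}
```

Note the name field: as the deprecation notes say, web is the only value the endpoint still accepts. The optional secret lets the receiver verify payload signatures sent in the X-Hub-Signature header.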

Apple shares tentative goals for WebKit 2020

Sugandha Lahoti
11 Nov 2019
3 min read
Apple has released a list of tentative goals for WebKit in 2020, catering to WebKit users as well as web, native, and WebKit developers. These goals are tentative, and there is no guarantee that these updates will ship at all. Before committing to new features, Apple systematically weighs a number of factors, looking at developer interest and the harmful aspects associated with a feature; sometimes they also take feedback and suggestions from high-value websites.

WebKit 2020 enhancements for WebKit users

Primarily, WebKit is focused on improving performance as well as privacy and security. Performance ideas suggested include media query change handling, no sync IPC for cookies, fast for-of iteration, Turbo DFG, async gestures, fast scrolling on macOS, global GC, and Service Worker declarative routing. For privacy, Apple is focusing on addressing ITP bypasses, a logged-in API, in-app browser privacy, and PCM with fraud prevention. They are also working on improving authentication, network security, JavaScript hardening, WebCore hardening, and sandbox hardening.

Improvements in WebKit 2020 for web developers

For the web platform, the focus is on three qualities: catch-up, innovation, and quality. Apple is planning improvements in graphics and animations (CSS overscroll-behavior, WebGL 2, Web Animations), media (the Media Session standard, MediaStream Recording, the Picture-in-Picture API), and DOM, JavaScript, and text. They are also looking to improve CSS Shadow Parts, stylable pieces, JS built-in modules, and the Undo Web API, and to work on WPT (Web Platform Tests).

Changes suggested for native developers

For native developers still on the obsolete legacy WebKit, the following changes are suggested: WKWebView API needed for migration, fixing cookie flakiness due to multiple process pools, and WKWebView APIs for media.

Enhancements for WebKit developers

The focus is on improving architecture health and services & tools. Changes suggested are: defining an “intent to implement” style process, faster builds (finishing unified builds), next-gen layout for line layout, regression test debt repayment, IOSurface in Simulator, EWS (Early Warning System) improvements, Buildbot 2.0, and WebKit on GitHub as a project (year 1 of a multi-year project).

On Hacker News, this topic was widely discussed, with people pointing out what they want to see in WebKit. “Two WebKit goals I'd like to see for 2020: (1) Allow non-WebKit browsers on iOS (start outperforming your competition instead of merely banning your competition), and (2) Make iOS the best platform for powerful web apps instead of the worst, the leader instead of the spoiler.” Another pointed out, “It would be great if SVG rendering, used for diagrams, was of equal quality to Firefox.” One said, “WebKit and the Safari browsers by extension should have full and proper support for Service Workers and PWAs on par with other browsers.”

For a full list of updates, please see the WebKit wiki page.

Apple introduces Swift Numerics to support numerical computing in Swift
Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability
Apple’s MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications


Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases are done every six months, in February and August, while minor releases may come out every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years; the LTS releases provide the longest support window. For general releases, bug fixes are provided for 6 months and security fixes for a year. After the upgrade, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can have breaking changes.

Laravel Nova

Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. The primary feature of Nova is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

Email verification

Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that ships with the framework.

Guest user policies

In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument definition:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony dump-server

Laravel 5.7 offers integration with Symfony's dump-server command via a package by Marcel Pociot. To get started, first run the dump-server Artisan command:

php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of your browser. This allows inspection of values without mangling your HTTP response output.

Notification localization

You can now send notifications in a locale other than the current language, and Laravel will even remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console testing

Laravel 5.7 allows easily "mocking" user input for console commands using the expectsQuestion method. Additionally, you can assert the exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel Release Notes.

Building a Web Service with Laravel 5
Google App Engine standard environment (beta) now includes PHP 7.2
Perform CRUD operations on MongoDB with PHP

The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Bhagyashree R
10 Jan 2019
2 min read
Today, Minko Gechev, an engineer on the Angular team at Google, announced the release of Angular CLI 7.2.1. This release fixes a webpack-dev-server vulnerability and adds support for a multiselect list prompt, TypeScript 3.2, and Angular 7.2.0-rc.0.

https://twitter.com/mgechev/status/1083133079579897856

Understanding the webpack-dev-server vulnerability
The npm install command was reporting the Missing Origin Validation vulnerability because webpack-dev-server versions before 3.1.10 are missing origin validation on the websocket server used for Hot Module Replacement (HMR). Since the origin of requests to that websocket server is not validated, a remote attacker can take advantage of this vulnerability to steal a developer's code.

Other updates in Angular CLI 7.2.1
Several updates and bug fixes are listed in the release notes on Angular CLI's GitHub repository. Some of them are:

- Support added for a multiselect list prompt
- Support added for TypeScript 3.2 and Angular 7.2.0-rc.0
- Updated optimization options
- Warnings added for overriding flags in arguments
- lintFix added to several other schematics
- resourcesOutputPath added to the schema to define where style resources are placed, relative to outputPath
- Improved architect command project parsing
- Prompt support added using Inquirer
- Jobs API added
- Support for directly loading component templates

Angular 7 is now stable
Unit testing Angular components and classes [Tutorial]
Setting up Jasmine for Unit Testing in Angular [Tutorial]

Introducing Kweb: A Kotlin library for building rich web applications

Bhagyashree R
10 Dec 2018
2 min read
Kweb is a library for easily building web applications in the Kotlin programming language. It essentially eliminates the separation between browser and server from the programmer's perspective, which means that events that only manipulate the DOM don't need a server round trip. As Kweb is written in Kotlin, users should have some familiarity with the Kotlin and Java ecosystem. Kweb lets you keep all of the business logic on the server side and communicates with the web browser through efficient websockets. To handle asynchronicity efficiently, it takes advantage of Kotlin's powerful new coroutines mechanism. It also keeps state consistent across client and server by seamlessly conveying events between both.

What are the features of Kweb?

- Makes the barrier between the web server and web browser mostly invisible to the programmer.
- Minimizes server-browser chatter and browser rendering overhead.
- Supports integration with powerful JavaScript libraries like Semantic, a UI framework designed for theming.
- Allows binding DOM elements in the browser directly to state on the server, automatically updating them through the observer and data-mapper patterns.
- Seamlessly integrates with Shoebox, a Kotlin library for persistent data storage that supports views and the observer pattern.
- Is easy to add to an existing project.
- Instantly updates your web browser in response to code changes.

The Kweb library is distributed via JitPack, a novel package repository for JVM and Android projects. Kweb takes advantage of the fact that in most web apps, logic occurs on the server side and the client can't be trusted. The library is in its infancy but works well enough to demonstrate that the approach is practical. You can read more about Kweb on its official website.
Kotlin based framework, Ktor 1.0, released with features like sessions, metrics, call logging and more Kotlin 1.3 released with stable coroutines, multiplatform projects and more KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta

Introducing DPAGE, a web builder to build web pages on the Blockstack decentralized internet

Prasad Ramesh
19 Dec 2018
3 min read
DPAGE is a web page builder which developers can use to get simple web pages up and running on Blockstack's decentralized internet. DPAGE is built on top of Blockstack, an infrastructure on which you can build decentralized blockchain applications. You need a Blockstack account to log in and start using DPAGE. The Blockstack ID used to log in is stored on the blockchain, and all user data is stored on a Gaia node of your choosing. This decentralized setup gives users several advantages over a conventional centralized app:

- Your data is yours: if you stop using DPAGE, you can create your own app or switch to any other web page builder, and all your data goes with you; it is not owned by any web page or app, so users are not restricted by vendor lock-in.
- A Blockstack ID is virtually impossible to block, unlike centralized identities: Google or Facebook IDs can be blocked by the companies that issue them.
- All private user data is encrypted end-to-end, which means that no one else can read it, including DPAGE's creators.

The data is not stored with DPAGE
Profile details and user data are stored on Blockstack's Gaia storage hub by default; DPAGE itself doesn't store any user data on its servers. You can also run your own storage hub on a server of your choice. Blockstack stores the data in "personal data lockers built on Google, AWS, and Azure".

It is safer than some centralized web pages
As all private data is encrypted, it is more difficult for hackers to steal user data from the decentralized app. There is no central database that contains all the data, so hackers also have less incentive to break into DPAGE. However, DDoS attacks are a possibility if attackers target a specific Gaia hub.

There is no user-specific tracking
DPAGE only collects non-identifiable analytics to improve the service, and the service itself doesn't store or read private pages.

There are some positive reactions on Hacker News: "This indeed a seriously cool product, hope more people realize it." Another comment says: "Nice, I think this is what the web needs, a Unix approach so tools can be built on top and hosts are interchangeable." To check out DPAGE, visit their website.

The decentralized web – Trick or Treat?
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Mozilla shares why Firefox 63 supports Web Components

Bhagyashree R
16 Nov 2018
3 min read
Mozilla’s Firefox 63 comes with support for two Web Components features: Custom Elements and Shadow DOM. Yesterday, Mozilla shared how these new capabilities and resources are helping web developers create reusable and modular code.

What are Web Components?
Web Components is a suite of web platform APIs that allow you to create new custom, reusable, and encapsulated HTML tags to use in web pages and web apps. Custom components and widgets built on the Web Components standards work across modern browsers and can be used with any JavaScript library or framework that works with HTML. Let's discuss the two tent-pole standards of Web Components v1:

Custom Elements
Custom Elements, as the name suggests, allows developers to create "customized" HTML tags. With Custom Elements, web developers can create new HTML tags, improve existing HTML tags, or extend components created by other developers. It gives developers a web-standards-based way to create reusable components using nothing more than vanilla JS/HTML/CSS. To prevent future conflicts, all Custom Element names must contain a dash, for example, my-element. Custom Elements provides the following powers:

1. Earlier, browsers didn't allow extending the built-in HTMLElement class or its subclasses. You can now do that with Custom Elements.
2. For existing tags such as the p tag, the browser knows to map it to the HTMLParagraphElement class. But what happens in the case of Custom Elements? In addition to extending built-in classes, there is now a Custom Element Registry for declaring this mapping. It is the controller of custom elements on a web document, allowing you to register a custom element on the page, return information on which custom elements are registered, and so on.
3. Additional lifecycle callbacks such as connectedCallback, disconnectedCallback, and attributeChangedCallback are added for detecting element creation, insertion into the DOM, attribute changes, and more.
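To make the registry and lifecycle callbacks above concrete, here is a minimal Custom Element sketch. The my-greeting tag and its name attribute are illustrative, not from the article, and the fallback base class only exists so the file also loads outside a browser:

```javascript
// Minimal Custom Element sketch. In a browser, HTMLElement is provided
// natively; the fallback base class only lets this file load under Node.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class MyGreeting extends Base {
  // Attributes listed here trigger attributeChangedCallback on change.
  static get observedAttributes() { return ['name']; }

  // Called when the element is inserted into the DOM.
  connectedCallback() { this.render(); }

  // Called when an observed attribute is added, removed, or changed.
  attributeChangedCallback() { this.render(); }

  render() {
    // getAttribute only exists on real DOM elements, hence the optional call.
    this.textContent = `Hello, ${this.getAttribute?.('name') ?? 'world'}!`;
  }
}

// Custom element names must contain a dash, e.g. <my-greeting>.
if (typeof customElements !== 'undefined') {
  customElements.define('my-greeting', MyGreeting);
}
```

In a page, <my-greeting name="Ada"></my-greeting> would then render "Hello, Ada!", and changing the name attribute would re-render it automatically.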
Shadow DOM
Shadow DOM gives you an elegant way to overlay the normal DOM subtree with a special document fragment that contains another subtree of nodes. It introduces the concept of a shadow root. A shadow root has standard DOM methods and can be appended to like any other DOM node, but it is rendered separately from the document's main DOM tree. Shadow DOM also brings scoped styles to the web platform: it allows you to bundle CSS with markup, hide implementation details, and author self-contained components in vanilla JavaScript without needing any tools or adhering to naming conventions.

The underlying concept of Shadow DOM
It is similar to the regular DOM but differs in two ways:

- How it's created and used
- How it behaves in relation to the rest of the page

Normally, DOM nodes are created and appended as children of another element. Using Shadow DOM, you can create a scoped DOM tree that's attached to an element but separate from its actual children. This scoped subtree is called a shadow tree, and the element it is attached to is called the shadow host. Anything added to the shadow tree becomes local to the hosting element, including <style>; this is how the Shadow DOM achieves CSS style scoping.

Read more in detail about Web Components on Mozilla's website.

Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs
Mozilla shares how AV1, the new open source royalty-free video codec, works
This fun Mozilla tool rates products on a ‘creepy meter’ to help you shop safely this holiday season
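The shadow-root mechanics described in this article can be sketched as a small helper. attachGreetingShadow is a hypothetical function name, and the browser-only attachShadow call is guarded so the sketch also loads under Node:

```javascript
// Shadow DOM sketch: markup and styles placed inside the shadow root are
// scoped to the host element and rendered separately from the main DOM tree.
function attachGreetingShadow(host) {
  // attachShadow exists only on real DOM elements in a browser.
  if (typeof host.attachShadow !== 'function') return null;

  // The element we attach to becomes the shadow host.
  const root = host.attachShadow({ mode: 'open' });
  root.innerHTML = `
    <style>p { color: rebeccapurple; }</style> <!-- scoped: does not leak out -->
    <p>Rendered from the shadow tree</p>
  `;
  return root;
}
```

In a page, attachGreetingShadow(document.querySelector('#card')) would give that element a purple paragraph that page-level CSS cannot restyle, illustrating the style scoping described above.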

React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!

Bhagyashree R
13 Mar 2019
2 min read
After releasing the RC0 version of React Native 0.59, the team announced its stable release yesterday. This release comes with some of the most awaited features, including React Hooks, an updated JavaScriptCore, and more.

Support for React Hooks
React Hooks were introduced to solve a wide variety of problems in React. They enable you to reuse stateful logic across components without having to restructure your component hierarchy. With React Hooks, you can split a component into smaller functions based on what pieces are related, rather than forcing a split based on lifecycle methods. They also let you use more of React's features without classes.

Updated JavaScriptCore
JavaScriptCore (JSC) is the engine that allows React Native apps to run JavaScript natively on Android. React Native 0.59 comes with an updated JSC for Android and hence supports many modern JavaScript features. These include 64-bit support, modern JavaScript language features, and big performance improvements.

Improved app startup time with inline requires
Applications now load resources as and when required to avoid slowing down the app launch. This feature, known as "inline requires", delays the requiring of a module or file until that module or file is actually needed. Using inline requires can result in startup time improvements.

CLI improvements
Earlier, the React Native CLI had long-standing issues and lacked official support. The CLI tools have now been moved to a new repository and come with exciting improvements: logs are formatted better and commands run almost instantly.

Breaking changes
React Native 0.59 has been cleaned up following Google's latest recommendations, which could result in potential breakage of existing apps. You might experience a runtime crash and see a message like this: "You need to use a Theme.AppCompat theme (or descendant) with this activity." Developers are recommended to update their project's AndroidManifest.xml file to make sure the "android:theme" value is an AppCompat theme. Also, in this release, the "react-native-git-upgrade" command has been replaced with the newly improved "react-native upgrade" command.

To read the official announcement, check out React Native's website.

React Native community announce March updates, post sharing the roadmap for Q4
React Native Vs Ionic: Which one is the better mobile app development framework?
How to create a native mobile app with React Native [Tutorial]
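The "inline requires" optimization mentioned above can be illustrated with plain Node.js. Here Node's built-in path module stands in for a hypothetical heavy dependency that would otherwise be parsed and executed at startup:

```javascript
// Eager form: the module would be loaded as soon as the app starts,
// even if the screen that needs it is never opened.
//   const heavy = require('some-heavy-module');

// Inline-require form: defer loading until the first call site runs.
let cached = null;
let loadCount = 0; // instrumentation for this sketch only

function getHeavyModule() {
  if (cached == null) {
    // Node's built-in 'path' stands in for a heavy dependency here.
    cached = require('path');
    loadCount += 1;
  }
  return cached;
}
```

Nothing is loaded until getHeavyModule() is first called, and repeated calls reuse the cached module; inline requires trade a small first-use delay for faster startup.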

Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

Bhagyashree R
24 Aug 2018
2 min read
Express Gateway 1.11.0 has been released with an important feature for the proxy policy and some bug fixes. Express Gateway is a simple, agnostic, organic, and portable microservices API Gateway built on Express.js.

What is new in this version?

Additions
- New stripPath parameter: Support for a new parameter called stripPath has been added to the proxy policy. Its default value is false. With it, you can completely own both the URL space of your backend server and the one exposed by Express Gateway.
- Official Helm chart: An official Helm chart has been added that enables you to install Express Gateway on your Rancher or Kubernetes cluster with a single command.

Bug Fixes
- The base condition schema is now correctly returned by the /schemas Admin API endpoint so that external clients can use it and resolve its references correctly.
- Previously, invalid configuration could be sent to the gateway through the Admin API when using Express Gateway in production: the gateway was correctly validating the gateway.config content, but it wasn't validating all the policies inside it. This fix makes sure that when an Admin API call modifies the configuration, validation is triggered so that a broken configuration file is not persisted to disk.
- Fixed a missing field in the oauth2-introspect JSON Schema.
- For consistency, the keyauth schema name is now correctly named key-auth.

Miscellaneous changes
- The unused migration framework has been removed.
- The X-Powered-By header is now disabled for security reasons.
- The way Express Gateway is started in the official Dockerfile has changed: it is no longer wrapped in a bash command before being run, because the wrapped form allocates an additional /bin/sh process while the direct form does not.

In this article we looked through some of the updates introduced in Express Gateway 1.11.0. To know more about this new update, head over to their GitHub repo.
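As a rough illustration of the new stripPath parameter, here is a hypothetical gateway.config.yml fragment; the endpoint names, port, and URL are made up for the example and are not from the release notes:

```yaml
# Hypothetical gateway.config.yml fragment; names and ports are illustrative.
http:
  port: 8080
apiEndpoints:
  api:
    path: '/api/*'
serviceEndpoints:
  backend:
    url: 'http://localhost:3000'
pipelines:
  default:
    apiEndpoints:
      - api
    policies:
      - proxy:
          - action:
              serviceEndpoint: backend
              # With the default (stripPath: false), /api/users is proxied
              # to the backend as /api/users. With stripPath: true, the
              # matched prefix is removed and the backend sees /users,
              # letting each side own its own URL space.
              stripPath: true
```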
API Gateway and its need Deploying Node.js apps on Google App Engine is now easy How to build Dockers with microservices

“ChromeOS is ready for web development” - A talk by Dan Dascalescu at the Chrome Web Summit 2018

Sugandha Lahoti
15 Nov 2018
3 min read
At the Chrome Web Summit 2018, Dan Dascalescu, Partner Developer Advocate at Google, provided a high-level overview of ChromeOS and discussed Chrome's core and new features available to web developers. Topics included best practices for web development, including Progressive Web Apps, and optimizing input and touch for tablets while keeping desktop users in mind. He explained that Chromebooks are convergence machines that run Linux, Android, and Google Play natively, without emulation, and why ChromeOS can be a good choice for web developers: it not only powers devices from sticks to tablets to desktops, it can also run web, Android, and now Linux applications. ChromeOS brings your own development workflow together with a variety of form factors, from mobiles and tablets to desktops, and browsers on Android and Linux.

Run Linux apps on ChromeOS with Crostini
Stephen Barber, an engineer on ChromeOS, described Chrome's container architecture, which is based on Chrome's principles of safety, security, and reliability. By using lightweight containers and hardware virtualization support, Android and Linux code run natively on ChromeOS. Developers can run Linux apps on ChromeOS through Project Crostini. Crostini is based on Debian stable and uses both virtualization and containers to provide security in depth. For now, the team is targeting web developers by providing integration features like port forwarding to localhost as a secure origin. They also provide a penguin.linux.test DNS alias to treat a container like a separate system. To support more developer workflows than just web, USB, GPU, audio, FUSE, and file sharing support are coming in upcoming releases.

Dan also shared how Crostini is actually used for developing web apps and demonstrated how easily you can install Linux on a Chromebook. Although Crostini is still in development, most things work as expected. Developers can run IDEs and databases like MongoDB or MySQL; anything can be installed with apt, and there is a terminal.

Dan also mentioned Carlo, a Google project that is essentially a helpful Node.js app framework providing applications with Chrome rendering capabilities. It uses a locally detected instance of Chrome, connects to it over a process pipe, and exposes a high-level API for rendering in Chrome from your Node script. If you don't need low-level features, you can make your app a PWA, which works without a launch bar once installed in ChromeOS. Windows Chrome desktop PWA support will be available from Chrome 70+ and Mac from Chrome 72+. Dan also conducted a demo on how to run a PWA. These were the steps:

- Set up Crostini
- Install the development environment (node, npm, VSCode)
- Check out a PWA (Squoosh) from GitHub
- Open it in VSCode
- Run the web server
- Open the PWA from Linux and Android browsers

He also provided guidance on optimizing forms, handling touch interactions and pointer events, and setting up remote debugging.

What does the future look like for ChromeOS?
The Chrome team is working on improving desktop PWA support. This includes support for keyboard shortcuts, badging for the launch icon, and link capturing. They are also working on low-latency canvas contexts, introduced in Chrome 71 Beta. This context uses OpenGL ES for rasterization and writes directly to the front buffer, which bypasses several steps of the rendering process but risks tearing; it is used mainly for highly interactive apps.

View the full talk on YouTube.

Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native.
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team.
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications.
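For context on what "installing" a PWA like Squoosh involves, the install prompt in ChromeOS and desktop Chrome is driven by a web app manifest. The fragment below is a generic sketch; every value is illustrative and none of it is taken from the Squoosh project:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a73e8",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

The manifest is referenced from the page with a link tag (rel="manifest") and, together with a service worker, is what makes the app installable so it launches in its own window.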