
Tech News - Web Development

354 Articles
Anti-paywall add-on is no longer available on the Mozilla website

Sugandha Lahoti
03 Dec 2018
4 min read
The anti-paywall add-on has been removed from the Mozilla website. The author of the add-on, Florent Daigniere, confirmed that it has been taken down from both the Chrome and Firefox stores. “This was done because the add-on violated the Firefox Add-on Distribution Agreement and the Conditions of Use,” Daigniere wrote. “It appears to be designed and promoted to allow users to circumvent paywalls, which is illegal.”

Last year, Daigniere released the anti-paywall browser extension, which maximizes the chances of bypassing paywalls. When he asked Mozilla why the add-on was removed, he got this reply: "There are various laws in the US that prohibit tools for circumventing access controls like a paywall. Both Section 1201 of the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA) are examples. We are responding to a specific complaint that named multiple paywall-bypassing add-ons. It did not target only your add-on."

This news was one of the top stories on Hacker News, where people largely oppose Mozilla's move:

“Making it harder to install addons (and breaking all the old ones) is one of the things contributing to Mozilla losing share to Chrome. People used to use Firefox over Chrome because of all the great addons, which they then broke, leaving users with less reason not to use Chrome.”

“I used to default to Firefox for work. Then they killed the old addons, which broke a major part of my workflow (FireFTP's "open a file and as you edit it it automatically re-uploads" feature). So there was a lot less keeping me stuck to it.”

“This extension just seems to strip tracking data and pretend to be a Google bot. It baffles me that this is somehow concerning enough to be taken down. And anyway, isn't making exemptions for Google's robots sort-of against their policy?”

Users also offered Daigniere advice and suggestions on how to proceed:

“I would consult with an attorney to determine legal options for an adequate defense and expected expenses. A consult is not a contract and you can change your mind if you are unwilling to take the risk with a lawsuit. I suspect the takedown notice is a DMCA takedown based upon a flawed assumption of the law. The hard part about this is arguing the technical merits of the case before non-technical people. While the takedown notice is probably in error they could still make a good argument around bypassing their security controls. You could appeal to the EFF or ACLU. If they are willing to take your case it will be pro bono.”

“I'd just move on. To be honest sites with those types of paywalls should not be indexed. The loophole you are taking advantage of here is a bait and switch by these sites. They want the search traffic but don't want public access. Most of us have already adapted, however, and avoid these sites or pay for them. Your plugin title blatantly describes that you're avoiding paying for something they are charging for so even though it may not be illegal it's not something I'd waste energy fighting for.”

“Rename the plugin and change the description. The message from Mozilla states that the problem is the intent of the plugin. The technological measures it actually takes are not illegal per se, but are illegal when used to circumvent paywalls. If you present this as a plug-in that allows you to view websites as the Google bot views them, for educational and debugging purposes, there is no problem. You can give the fact that it won't see the paywall as an example. It's actually useful for that purpose: you are not lying. It's just that most people will install the plugin for its 'side effects'. Their use of it will still be illegal, but the intent will not be illegal.”

Read more of this conversation on Hacker News.

The State of Mozilla 2017 report focuses on internet health and user privacy
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules

The State of Mozilla 2017 report focuses on internet health and user privacy

Prasad Ramesh
29 Nov 2018
4 min read
The State of Mozilla 2017 report is out, containing information on the areas where Mozilla has made an impact and its activities in 2017-18. We look at some of the important details from the report.

Towards building a healthier internet

In the last two years, there have been scandals and news around big tech companies relating to data misuse, privacy violations, and more, including the Cambridge Analytica scandal, Google tracking, and many others. Public and political trust in large tech companies has eroded following revelations of how some of these companies operate and treat user data. The Mozilla report says that the focus is now on how to limit these tech platforms and encourage them to adopt data regulation protocols. Mozilla seeks to fill the gap left by the shortage of people able to make the right decisions for a better internet. The State of Mozilla 2017 report reads: “When the United States Federal Communications Commission attacks net neutrality or the Indian government undermines privacy with Aadhaar, we see people around the world—including hundreds of thousands of members of the Mozilla community—stand up and say, Things should not work this way.”

Read also: Is Mozilla the most progressive tech organization on the planet right now?

The Mozilla Foundation and the Mozilla Corporation

Mozilla was founded in 1998 as an open source project, back when open source was truly open source, free of things like the Commons Clause. Mozilla consists of two organizations: the Mozilla Foundation, which supports emerging leaders and mobilizes citizens toward a healthier internet, and the Mozilla Corporation, a wholly owned subsidiary of the Foundation that creates Mozilla products and advances public policy.

The Mozilla Foundation

Beyond building products, Mozilla invests in people and organizations that share a common vision. Another part of the State of Mozilla 2017 report reads: “Our core program areas work together to bring the most effective ideas forward, quickly and where they have the most impact. As a result of our work, internet users see a change in the products they use and the policies that govern them.”

Every year, the Mozilla Foundation publishes the open source Internet Health Report to shed light on what has been happening on the internet, specifically on its wellbeing. Their research draws on data from multiple sources in areas like privacy and security, open innovation, decentralization, web literacy, and digital inclusion. Per the report, Mozilla spent close to a million dollars in 2017 on its agenda-setting work. Mozilla has also mobilized conscious internet users with campaigns around net neutrality in the US, India's Aadhaar biometric system, copyright reform in the EU, and more. Mozilla has also invested in connecting internet health leaders and worked on data and privacy issues across the globe, investing about $24M in this work in 2017.

The Mozilla Corporation

Mozilla says that to take charge of changing internet culture, it needs to do more than build products. Following Firefox Quantum's success, the focus is on better enabling people to take control of their online lives. Another part of the State of Mozilla 2017 report highlights this vision: “Over the coming years, we will become the leading provider of user agency and online privacy by developing long-term trusted relationships with "conscious choosers" with a focus on helping people navigate their connected lives.”

Mozilla pulled its ads from Facebook after the Cambridge Analytica scandal

After learning about the Cambridge Analytica incident, and guided by the Mozilla Manifesto, Mozilla decided to pull its ads from Facebook. The Manifesto says: “Individuals' security and privacy on the Internet are fundamental and must not be treated as optional.” After sending a message with this action, Mozilla also launched Facebook Container, a version of multi-account containers that prevents Facebook from tracking its users when they are not on the platform. Mozilla says that everyone has a right to keep their private information private and to control their own web experiences.

You can view the full State of Mozilla 2017 report on the Mozilla website.

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Is Mozilla the most progressive tech organization on the planet right now?

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its autoplay policy earlier this week. The policy had been delayed after first shipping with the Chrome 66 stable release back in May this year; the change is now scheduled to roll out with Chrome 71 in the coming month.

The autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to autoplay their content, the new policy will prevent playback by default. For most sites, playback will resume as before, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users' past behavior with sites that have autoplay enabled. If a user regularly lets audio play for more than seven seconds on a website, autoplay is enabled for that website. This is done with the help of a Media Engagement Index (MEI), an index stored locally per Chrome profile on a device. The MEI tracks the number of visits to a site that include audio playback longer than seven seconds. Each website gets a score between zero and one in the MEI, where a higher score indicates that the user doesn't mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to it. “We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default”, mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the audio policy on websites with audio enabled. Following this, Google adjusted its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for developers

As per Google's new adjustments to the autoplay policy, audio will resume automatically when the user has interacted with the page and the start() method of a source node is called. A source node represents an individual audio snippet that most games play, for example, the sound played when a player collects a coin, or the background music that plays within a particular stage of a game. Game developers typically call the start() function on source nodes whenever any of these sounds are needed. These changes will enable autoplay in most web games as soon as the user starts playing the game.
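To make the new behavior concrete, here is a minimal sketch (not Google's own sample code) of how a page might cooperate with the policy: start() on a source node resumes audio once the user has interacted with the page, and a suspended AudioContext can be resumed explicitly on the first gesture. The function name playCoinSound is illustrative, not from the announcement.

// Minimal sketch, assuming a page that plays short Web Audio clips.
const context = new AudioContext();

function playCoinSound(buffer) {
  // Under the adjusted policy, start() resumes audio automatically
  // once the user has already interacted with the page.
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start();
}

// Defensive pattern for pages blocked by default: resume the
// suspended context on the first user gesture.
document.addEventListener('click', () => {
  if (context.state === 'suspended') {
    context.resume();
  }
}, { once: true });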
The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn't work as expected. Along with the autoplay policy update, Google will also stop showing existing annotations on YouTube videos starting January 15, 2019, after which all existing annotations will be removed. “We always put our users first but we also don't want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71”, says the Google team.

For more information, check out Google's official blog post.

“ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

React 16.x roadmap released with expected timeline for features like “Hooks”, “Suspense”, and “Concurrent Rendering”

Sugandha Lahoti
28 Nov 2018
3 min read
Yesterday, the React team published a roadmap for React 16.x releases, splitting the rollout of new features into different milestones. The team has made it clear that they have a single vision for how all of these features fit together, but they are releasing each part as soon as it is ready so users can start testing sooner.

The expected milestones

React 16.6: Suspense for code splitting (already shipped)

This feature can “suspend” rendering while components are waiting for something, and display a loading indicator in the meantime. It is a convenient programming model that provides a better user experience in Concurrent Mode. In React 16.6, Suspense for code splitting supports only one use case: lazy loading components with React.lazy() and <React.Suspense>.

React 16.7: React Hooks (~Q1 2019)

React Hooks give developers access to features like state and lifecycle from function components. They also let developers reuse stateful logic between components without introducing extra nesting in the tree. Hooks are only available in the 16.7 alpha versions of React, and some of their API is expected to change in the final 16.7 version. In future releases, Hooks class support might move to a separate package, reducing the default bundle size of React.

React 16.8: Concurrent Mode (~Q2 2019)

Concurrent Mode lets React apps be more responsive by rendering component trees without blocking the main thread. It is opt-in and allows React to interrupt a long-running render to handle a high-priority event. Concurrent Mode was previously referred to as “async mode”; the name was changed to highlight React's ability to perform work at different priority levels, setting it apart from other approaches to async rendering. As of now, the team doesn't expect many bugs in Concurrent Mode, but notes that components that produce warnings in <React.StrictMode> may not work correctly. They plan to publish more guidance about diagnosing and fixing issues as part of the 16.8 release documentation.

React 16.9: Suspense for data fetching (~mid 2019)

In the already shipped React 16.6, the only supported use case for Suspense is code splitting. In the future 16.9 release, React will officially support ways to use Suspense for data fetching. The team will provide a reference implementation of a basic “React Cache” that is compatible with Suspense. Data fetching libraries like Apollo and Relay will be able to integrate with Suspense by following a simple specification. The team expects this feature to be adopted incrementally, through layers like Apollo or Relay rather than directly.

The team also plans to complete two more projects, Modernizing React DOM and Suspense for Server Rendering, in 2019. As these projects require more exploration, they aren't tied to a particular release yet.

For more information, visit the React blog.

React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React 16.6.0 releases with a new way of code splitting, and more!
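As an illustration of the one Suspense use case already shipped in 16.6, here is a minimal sketch of lazy loading with React.lazy() and <React.Suspense>. The component name and module path are placeholders, not part of the roadmap post, and a JSX-capable build setup is assumed.

import React from 'react';

// Hypothetical lazily loaded component; its chunk is fetched on first render.
const OtherComponent = React.lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    // The fallback renders while the lazily loaded chunk downloads.
    <React.Suspense fallback={<div>Loading...</div>}>
      <OtherComponent />
    </React.Suspense>
  );
}

export default MyComponent;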

Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Amrata Joshi
27 Nov 2018
3 min read
On day 1 of AWS re:Invent 2018, the team at Amazon released AWS Amplify Console, a continuous deployment and hosting service for mobile web applications. The AWS Amplify Console helps avoid downtime during application deployment and simplifies the deployment of an application's frontend and backend.

Features of AWS Amplify Console

Simplified continuous workflows: By connecting AWS Amplify Console to a code repository, the frontend and backend are deployed in a single workflow on every code commit. The web application is updated only after the deployment completes successfully, eliminating inconsistencies between the application's frontend and backend.

Easy access: AWS Amplify Console makes building, deploying, and hosting mobile web applications easier, and lets users access features faster.

Easy custom domain setup: One can set up custom domains managed in Amazon Route 53 with a single click and also get a free HTTPS certificate. If the domain is managed in Amazon Route 53, the Amplify Console automatically connects the root domain, subdomains, and branch subdomains.

Globally available: Apps are served via Amazon's reliable content delivery network, with 144 points of presence globally.

Atomic deployments: In AWS Amplify Console, atomic deployments eliminate maintenance windows and scenarios where files fail to upload properly.

Password protection: The Amplify Console supports password-protecting the web app, so one can work on new features without making them publicly accessible.

Branch deployments: With Amplify Console, one can work on new features without impacting production; users can create branch deployments linked to each feature branch.

Other features: The Amplify Console automatically detects frontend build settings, along with any backend functionality provisioned with the Amplify CLI, when connected to a code repository. Users can manage production and staging environments for frontend and backend by connecting new branches. The Console also provides screenshots of the app rendered on different mobile devices to highlight layout issues. Users can set up rewrites and redirects to maintain SEO rankings, and can build web apps with static and dynamic functionality. One can also deploy static site generators (SSGs) with free SSL on the AWS Amplify Console.

Check out the official announcement to know more about AWS Amplify Console.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave the group include Laravel, Propel, Guzzle, and Doctrine. The main goal of PHP-FIG is to work together to maintain interoperability, discuss commonalities between projects, and make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, “It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem.”

https://twitter.com/fabpot/status/1064946913596895232

The fact that major contributors had already left the group could be one reason for Symfony to quit. But many seem disappointed by Symfony's move, as they aren't satisfied with the reason given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

The matter of concern for Symfony was that major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working on PSR-7, where commonalities between the projects were given no importance; instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, questions have been raised about the company's next steps. Will it still support PSRs, or is this the end of PSRs for Symfony? Kévin Dunglas answered this question in one of his tweets: “Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14).”

To know more about this news, check out Fabien Potencier's Twitter thread.

Perform CRUD operations on MongoDB with PHP
Introduction to Functional Programming in PHP
Building a Web Application with PHP and MariaDB – Introduction to caching

Django is revamping its governance model, plans to dissolve Django Core team

Bhagyashree R
21 Nov 2018
4 min read
Yesterday, James Bennett, a software developer and an active contributor to the Django web framework, published a summary of a proposal to dissolve the Django Core team and revoke commit bits. Re-forming or reorganizing the Django core team has been a topic of discussion for the last couple of years, and this proposal aims to turn that discussion into real action.

What are the reasons behind the proposal to dissolve the Django Core team?

Unable to bring in new contributors

Django, as an open source project, has been facing difficulty in recruiting and retaining contributors to keep the project alive. Typically, open source projects avoid this situation through corporate sponsorship of contributions: companies that rely on the software have employees who are responsible for maintaining it. This was true in the case of Django as well, but it hasn't worked out as a long-term plan. Compared to the growth of the web framework, it has hardly been able to draw contributors from across its entire user base. The project has not been able to bring in new committers at a sufficient rate to replace those who have become less active or completely inactive. This essentially means that Django depends on the goodwill of contributors who mostly don't get paid to work on it and are very few in number, which poses a risk to the future of the framework.

Django committer is seen as a high-prestige title

Currently, decisions are made by consensus, involving input from committers and non-committers on the django-developers list, and commits to the main Django repository are made by the Django Fellows. Even people who have commit bits of their own, and therefore have the right to push their changes straight into Django, typically use pull requests and start a discussion. Actual governance rarely relies on the committers, but Django committer is still seen as a high-prestige title, and committers are given a lot of respect by the wider community. This creates an impression among potential contributors that they're not “good enough” to match up to those “awe-inspiring titanic beings”.

What is this proposal about?

Given the reasons above, the proposal is to dissolve the Django Core team and revoke the commit bits. In their place, it introduces two roles: Mergers, who would merge pull requests into Django, and Releasers, who would package and publish releases. Rather than being all-powerful decision-makers, these would be bureaucratic roles. The current set of Fellows would act as the initial set of Mergers, and something similar would happen for Releasers. Instead of committers making decisions, governance would take place entirely in public, on the django-developers mailing list. As a final tie-breaker, the technical board would be retained and would get some extra decision-making power, mostly related to selecting the Merger/Releaser roles and confirming that new versions of Django are ready for release. The technical board would be elected less often than it currently is, and voting would be open to the public. The Django Software Foundation (DSF) would act as a neutral administrator of the technical board elections.

What are the goals this proposal aims to achieve?

Bennett believes that eliminating the distinction between committers and “ordinary contributors” will open the door to more contributors: “Removing the distinction between godlike “committers” and plebeian ordinary contributors will, I hope, help to make the project feel more open to contributions from anyone, especially by making the act of committing code to Django into a bureaucratic task, and making all voices equal on the django-developers mailing list.”

The technical board remains as a backstop for resolving deadlocked decisions, and the proposal gives it additional authority, such as issuing the final go-ahead on releases. Retaining the technical board ensures that Django will not descend into some sort of “chaotic mob rule”. With this proposal, the formal description of Django's governance also becomes much more in line with how the project has actually worked for the past several years.

To know more, read James Bennett's post: Django Core no more.

Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users
Django 2.1 released with new model view permission and more
Getting started with Django and Django REST frameworks to build a RESTful app

Introducing ReX.js v1.0.0, a companion library for RegEx written in TypeScript

Prasad Ramesh
20 Nov 2018
2 min read
ReX.js is a helper library, written in TypeScript, for writing regular expressions. Yesterday, ReX.js v1.0.0, its first major version, was released. Being written in TypeScript, it provides great autocompletion and a good development experience across modern code editors. One of the main advantages of using ReX.js is its ability to document every line of code without hassle.

Anatomy of ReX.js v1.0.0

ReX.js is structured as a namespace consisting of the following modules:

Matcher: the class used to construct and use matching expressions.
Replacer: the class used to construct and use replacement expressions.
Operation: a class representing a basic operation applied to expression constructors.
Parser: the class used to parse and execute regexps. It is used by Matcher and implements polyfills for named groups and, partially, for lookbehinds.
ReXer: used to construct regexps. The Matcher and Replacer classes inherit from it.

The GitHub page says that the Matcher and Replacer classes are the ones developers will most likely use; the other classes serve extensibility and advanced use cases.

Advanced use of ReX.js v1.0.0

Beyond basic regex operations, ReX.js also provides options for extending its functionality.

Operations and channels: Every method used in ReX.js just adds a new Operation to ReXer. An Operation can then be stringified using its own stringify method. The concept of channels was introduced to construct linear regexps from nested function expressions. A channel is simply an array of Operations, and the channels themselves are stored as an array in ReXer.

Snippets: Snippets let you reuse any kind of Operation configuration by assigning a given config to a name for later reuse.

Methods and extensions: Methods are ways to reuse and apply custom operations, while extensions are just arrays of methods.

Installing ReX.js v1.0.0

ReX.js is available as a package on npm. You can include it in your current project with:

npm install @areknawo/rex

If you're using Yarn, use the following command:

yarn add @areknawo/rex

For more details and documentation, visit the ReX.js GitHub page.

Manipulating text data using Python Regular Expressions (regex)
Introducing Howler.js, a Javascript audio library with full cross-browser support
low.js, a Node.js port for embedded systems

Mozilla v. FCC: Mozilla challenges FCC’s elimination of net neutrality protection rules

Bhagyashree R
19 Nov 2018
4 min read
Last week, Mozilla announced that it, along with other petitioners, has filed a reply brief in the case Mozilla v. FCC. The case challenges the FCC's elimination of the net neutrality protection rules, which were formulated to make internet providers treat all online traffic equally.

What is net neutrality, anyway?

Net neutrality is the principle that internet service providers must treat all data on the internet equally: it treats the internet as a single lane in which all data flows at the same rate along the same path. Without net neutrality, ISPs can create fast and slow lanes, decide to block sites, and charge companies more money to prioritize their content.

FCC repealing the Open Internet Order

A core issue in net neutrality was whether ISPs should be classified as Title I (information services) or Title II (common carrier services) under the Communications Act of 1934. If ISPs are classified as Title II, the FCC has significant ability to regulate them, but it has little control over them if they are classified as Title I. In 2015, ISPs were reclassified as Title II services by the FCC under the Open Internet Order, which gave the agency the authority to enforce net neutrality. This order banned the blocking and slowing of web content by internet providers, prohibited the practice of paid prioritization, and introduced a "general conduct" standard giving the FCC the ability to investigate unethical broadband practices.

In April 2017, Ajit Pai became the FCC chairman as part of the Trump administration. He proposed to repeal the net neutrality policies and reclassify ISPs as Title I services. When the draft of this repeal was published in May 2017, the FCC received over 20 million comments on it. A majority of people voted in favor of retaining the 2015 Open Internet Order, but the FCC still repealed it, and the repeal went into effect in June 2018.

Mozilla v. FCC

Mozilla, alongside other companies, trade groups, states, and organizations, filed the case Mozilla v. FCC in August this year to defend the net neutrality rules. In its reply, Mozilla said that rolling back the rules was an unlawful move by the FCC: “The FCC's removal of net neutrality rules is not only bad for consumers, it is also unlawful. The protections in place were the product of years of deliberation and careful fact-finding that proved the need to protect consumers, who often have little or no choice of internet provider. The FCC is simply not permitted to arbitrarily change its mind about those protections based on little or no evidence.”

The case advocates a consumer's right to access content and services online without ISPs blocking, throttling, or discriminating against consumers' favorite services. Following are the arguments Mozilla is making against the FCC's decision to repeal the open internet rules:

“The FCC order fundamentally mischaracterizes how internet access works. Whether based on semantic contortions or simply an inherent lack of understanding, the FCC asserts that ISPs simply don't need to deliver websites you request without interference.

The FCC completely renounces its enforcement ability and tries to delegate that authority to other agencies but only Congress can grant that authority, the FCC can't decide it's just not its job to regulate telecommunications services and promote competition.

The FCC ignored the requirement to engage in a “reasoned decision making” process, ignoring much of the public record as well as their own data showing that consumers lack competitive choices for internet access, which gives ISPs the means to harm access to content and services online.”

You can read more about the case Mozilla v. FCC and Mozilla's reply brief on its official website.

US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again
Spammy bots most likely influenced FCC's decision on net neutrality repeal, says a new Stanford study
The U.S. Justice Department sues to block the new California Net Neutrality law

Introducing Cycle.js, a functional and reactive JavaScript framework

Bhagyashree R
19 Nov 2018
3 min read
Cycle.js is a functional and reactive JavaScript framework for writing predictable code. Apps built with Cycle.js consist of pure functions, which only take inputs and generate predictable outputs, without performing any I/O effects.

What is the basic concept behind Cycle.js?

Cycle.js treats your application as a pure main() function. It takes inputs that are read effects (sources) from the external world and produces outputs that are write effects (sinks) to affect the external world. Drivers, which are plugins that handle DOM effects, HTTP effects, and so on, are responsible for managing these I/O effects in the external world.

main() is built using reactive programming primitives that maximize separation of concerns and provide a fully declarative way of organizing your code. The dataflow in your app is clearly visible in the code, making it readable and traceable. Here are some of its properties:

Functional and reactive

As Cycle.js is functional and reactive, it allows developers to write predictable and separated code. Its building blocks are reactive streams from libraries like RxJS, xstream, or Most.js, which greatly simplify code related to events, asynchrony, and errors. This application structure also separates concerns, as all dynamic updates to a piece of data are co-located and impossible to change from outside.

Simple and concise

The framework is easy to learn and get started with, as it has very few concepts. Its core API has just one function, run(app, drivers). Apart from that, there are streams, functions, drivers, and a helper function to isolate scoped components. Most of its building blocks are just JavaScript functions. Functional reactive streams can build complex dataflows with very few operations, which makes Cycle.js apps very small and readable.

Extensible and testable

In Cycle.js, drivers are simple functions that take messages from sinks and call imperative functions. All I/O effects are done by the drivers, which means your application is just a pure function. This makes it very easy to swap the drivers around. Currently, there are drivers for React Native, HTML5 Notification, Socket.io, and so on. Also, with Cycle.js, testing is just a matter of feeding inputs and inspecting the output.

Composable

As mentioned earlier, a Cycle.js app, no matter how complex, is a function that can be reused in a larger Cycle.js app. Sources and sinks act as the interface between the application and the drivers, but they are also the interface between a child component and its parent. Cycle.js components are not just GUI widgets like in other frameworks: you can make Web Audio components, network request components, and others, since the sources/sinks interface is not exclusive to the DOM.

You can read more about Cycle.js on its official website.

Introducing Howler.js, a Javascript audio library with full cross-browser support
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
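For a feel of the main()/drivers split described above, here is a minimal counter sketch, assuming the standard @cycle/run and @cycle/dom packages with xstream under the hood; the CSS selector and mount point are illustrative choices, not prescribed by the framework.

import { run } from '@cycle/run';
import { makeDOMDriver, div, button } from '@cycle/dom';

// main() is a pure function: it reads click events from the DOM source
// and returns a stream of virtual DOM trees as its DOM sink.
function main(sources) {
  const increment$ = sources.DOM.select('.add').events('click').mapTo(1);
  const count$ = increment$.fold((count, n) => count + n, 0);

  const vdom$ = count$.map(count =>
    div([
      button('.add', 'Add'),
      div(`Count: ${count}`)
    ])
  );

  return { DOM: vdom$ };
}

// The DOM driver performs the actual write effects; main() stays pure.
run(main, { DOM: makeDOMDriver('#app') });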

Evan You shares Vue 3.0 updates at VueConf Toronto 2018

Bhagyashree R
16 Nov 2018
3 min read
VueConf Toronto 2018 commenced on November 14th and ran for three days, through November 16. One of the speakers at the event was Evan You, the creator of Vue.js, who shared what to expect from the yet-to-be-released Vue 3.0.

https://twitter.com/Ionicframework/status/1063244741343629313

Following are some of the updates that were announced at the conference:

Faster and more maintainable code architecture

Vue 3.0 is rewritten from the ground up to make its architecture cleaner and more maintainable. To improve speed, some internal functionality is broken into individual packages to isolate the scope of complexity. We can expect 100% faster mounting and patching with this release.

Improved slots mechanism

All compiler-generated slots are now functions, invoked during the child component's render call. The dependencies in slots are collected as dependencies of the child instead of the parent. When slot content changes, only the child is re-rendered; if the parent re-renders, the child does not have to if its slot content did not change. This change prevents useless re-renders by offering even more precise change detection at the component tree level.

Proxy-based observation mechanism

Vue 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This eliminates a number of limitations in the current implementation of Vue 2, which is based on Object.defineProperty. The new mechanism supports:

Detection of property addition and deletion
Detection of Array index and .length mutation
Support for Map, Set, WeakMap, and WeakSet

Tree-shaking friendly

The new codebase is tree-shaking friendly. Features such as built-in components and directive runtime helpers can be imported on demand and tree-shaken. Tree-shakable features also allow the Vue developers to offer more built-in features in the future without incurring payload penalties for users who don't use them.

Easily render to native with the Custom Renderer API

Developers will be able to create custom renderers with the Custom Renderer API, so they no longer need to fork the Vue codebase with custom modifications. This will allow render-to-native projects like Weex and NativeScript Vue to easily stay up to date with upstream changes. The API will also make it trivially easy to create custom renderers for various other purposes.

In addition to these improvements, Vue 3.0 will come with an experimental Hooks API, better warning traces, experimental time-slicing support, IE11 support, and improved TypeScript support with TSX.

Read more about the Vue 3.0 updates in the presentation shared by Evan You.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?
Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
React vs. Vue: JavaScript framework wars
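The difference is easiest to see in plain JavaScript. Below is a rough sketch (not Vue's actual source) of how a Proxy-based observer can see operations that Object.defineProperty cannot, such as property addition and deletion; reactive() and onChange are made-up names, not Vue APIs.

// Illustrative only: a Proxy wraps the whole object, so traps fire for
// properties that did not exist when the object was first observed.
function reactive(target, onChange) {
  return new Proxy(target, {
    get(obj, key, receiver) {
      // A real implementation would record the dependency here.
      return Reflect.get(obj, key, receiver);
    },
    set(obj, key, value, receiver) {
      const result = Reflect.set(obj, key, value, receiver);
      onChange('set', key, value); // fires even for brand-new properties
      return result;
    },
    deleteProperty(obj, key) {
      const result = Reflect.deleteProperty(obj, key);
      onChange('delete', key); // deletion is invisible to defineProperty
      return result;
    }
  });
}

const state = reactive({ count: 0 }, (op, key, value) =>
  console.log(op, String(key), value));

state.count = 1;      // logs: set count 1
state.newProp = 'hi'; // logs: set newProp hi (addition is detected)
delete state.newProp; // logs: delete newProp (deletion is detected)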

Mozilla shares why Firefox 63 supports Web Components

Bhagyashree R
16 Nov 2018
3 min read
Mozilla's Firefox 63 comes with support for two Web Components: Custom Elements and Shadow DOM. Yesterday, Mozilla shared how these new capabilities and resources are helping web developers create reusable and modular code.

What are Web Components?

Web Components is a suite of web platform APIs that let you create new custom, reusable, and encapsulated HTML tags to use in web pages and web apps. Custom components and widgets built on the Web Components standards work across modern browsers and can be used with any JavaScript library or framework that works with HTML. Let's discuss the two tent-pole standards of Web Components v1.

Custom Elements

Custom Elements, as the name suggests, allow developers to create "customized" HTML tags. With Custom Elements, web developers can create new HTML tags, improve existing ones, or extend components created by other developers. They give developers a web-standards-based way to create reusable components using nothing more than vanilla JS/HTML/CSS. To prevent future conflicts, all Custom Element names must contain a dash, for example, my-element. Custom Elements provide the following powers:

1. Earlier, browsers didn't allow extending the built-in HTMLElement class or its subclasses. With Custom Elements, you now can.
2. For existing tags such as the p tag, the browser knows to map the tag to the HTMLParagraphElement class. But what happens in the case of Custom Elements? In addition to extending built-in classes, there is now a Custom Element Registry for declaring this mapping. It is the controller of custom elements on a web document, allowing you to register a custom element on the page, return information on which custom elements are registered, and so on.
3. Additional lifecycle callbacks such as connectedCallback, disconnectedCallback, and attributeChangedCallback are added for detecting element creation, insertion into the DOM, attribute changes, and more.

Shadow DOM

Shadow DOM gives you an elegant way to overlay the normal DOM subtree with a special document fragment that contains another subtree of nodes. It introduces the concept of a shadow root. A shadow root has standard DOM methods and can be appended to like any other DOM node, but it is rendered separately from the document's main DOM tree. Shadow DOM brings scoped styles to the web platform: it lets you bundle CSS with markup, hide implementation details, and author self-contained components in vanilla JavaScript without needing any tools or adhering to naming conventions.

The underlying concept of Shadow DOM

Shadow DOM is similar to the regular DOM but differs in two ways: how it is created and used, and how it behaves in relation to the rest of the page. Normally, DOM nodes are created and appended as children of another element. Using Shadow DOM, you can create a scoped DOM tree that is attached to an element but separate from its actual children. This scoped subtree is called a shadow tree, and the element it is attached to is called the shadow host. Anything added to the shadow tree becomes local to the hosting element, including <style>. This is how CSS style scoping is achieved by the Shadow DOM.

Read more in detail about Web Components on Mozilla's website.

Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs
Mozilla shares how AV1, the new open source royalty-free video codec, works
This fun Mozilla tool rates products on a 'creepy meter' to help you shop safely this holiday season
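Here is a small sketch combining the two standards: a custom element (note the required dash in the name) rendering into an attached shadow root with scoped styles. The element name and markup are illustrative, not taken from Mozilla's post.

class MyGreeting extends HTMLElement {
  constructor() {
    super();
    // This element becomes the shadow host; the subtree below renders
    // separately from the document's main DOM tree.
    const shadowRoot = this.attachShadow({ mode: 'open' });
    shadowRoot.innerHTML = `
      <style>
        /* Scoped: this rule cannot leak out to the rest of the page. */
        p { color: rebeccapurple; }
      </style>
      <p>Hello from the shadow DOM!</p>
    `;
  }

  connectedCallback() {
    // Lifecycle callback fired when the element is inserted into the DOM.
    console.log('my-greeting connected');
  }
}

// Register the tag-to-class mapping in the Custom Element Registry;
// usage in HTML is then simply <my-greeting></my-greeting>.
customElements.define('my-greeting', MyGreeting);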

Node v11.2.0 released with major updates in timers, windows, HTTP parser and more

Amrata Joshi
16 Nov 2018
2 min read
Yesterday, the Node.js community released Node v11.2.0. The new version comes with a new experimental HTTP parser (llhttp) and updates to timers, Windows support, and more. Node v11.1.0 was released earlier this month.

Major updates

Node v11.2.0 fixes a timers issue that could cause setTimeout to stop working as expected. If the node.pdb file is available, a crashing process will now show the names of stack frames. The version also improves the installer's new stage that installs native build tools, and adds a prompt to the tools installation script that gives a visible warning, lessening the probability of users skipping ahead without reading. On Windows, the windowsHide option has been set to false, letting detached child processes and GUI apps start in a new window.

This version also introduces an experimental HTTP parser, llhttp. llhttp is written in human-readable TypeScript, making it verifiable and easy to maintain. The parser is used to generate C and/or bitcode artifacts, which can be compiled and linked with an embedder's program (such as Node.js). The release also highlights the eventEmitter.emit() method, which allows an arbitrary set of arguments to be passed to the listener functions.

Improvements in Cluster

The cluster module allows easy creation of child processes that share server ports. It now supports two methods of distributing incoming connections. The first is the round-robin approach, the default on all platforms except Windows: the master process listens on a port, accepts new connections, and distributes them across the workers in a round-robin fashion, avoiding overloading any one worker process. In the second approach, the master process creates the listen socket and sends it to interested workers, which then accept incoming connections directly. In theory, the second approach gives the best performance.

Read more about this release on the official Node.js page.

Node.js v10.12.0 (Current) released
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
low.js, a Node.js port for embedded systems
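A minimal sketch of the round-robin distribution described above, using the standard cluster API rather than code from the release notes: the master forks one worker per CPU, accepts connections, and hands them to the workers in turn.

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Master: fork one worker per CPU. With the default round-robin
  // scheduling policy, the master accepts connections and distributes
  // them across the workers.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} exited`);
  });
} else {
  // Workers share the same port; each handles the requests routed to it.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(8000);
}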

“ChromeOS is ready for web development” - A talk by Dan Dascalescu at the Chrome Web Summit 2018

Sugandha Lahoti
15 Nov 2018
3 min read
At the Chrome Web Summit 2018, Dan Dascalescu, Partner Developer Advocate at Google, provided a high-level overview of ChromeOS and discussed its core and new features available to web developers. Topics included best practices for web development, including Progressive Web Apps, and optimizing input and touch for tablets while keeping desktop users in mind. He explained that Chromebooks are convergence machines that run Linux, Android, and Google Play natively, without emulation, and why ChromeOS can be a good choice for web developers: it not only powers devices from sticks to tablets to desktops, it can also run web, Android, and now Linux applications. ChromeOS brings together your own development workflow with a variety of form factors, from mobiles to tablets to desktops, and browsers on Android and Linux.

Run Linux apps on ChromeOS with Crostini

Stephen Barber, an engineer on ChromeOS, described Chrome's container architecture, which is based on Chrome's principles of safety, security, and reliability. By using lightweight containers and hardware virtualization support, Android and Linux code run natively on ChromeOS. Developers can run Linux apps on ChromeOS through Project Crostini. Crostini is based on Debian stable and uses both virtualization and containers to provide security in depth. For now, the team is targeting web developers by providing integration features like port forwarding to localhost as a secure origin. They also provide a penguin.linux.test DNS alias to treat a container like a separate system. To support more developer workflows than just web, USB, GPU, audio, FUSE, and file sharing support are coming in upcoming releases.

Dan also showed how Crostini is actually used for developing web apps, demonstrating how easily Linux can be installed on a Chromebook. Although Crostini is still in development, most things work as expected. Developers can run IDEs and databases like MongoDB or MySQL; anything can be installed with apt. It also has a terminal.

Dan also mentioned Carlo, a Google project that is essentially a helpful Node app framework providing applications with Chrome rendering capabilities. It uses a locally detected instance of Chrome, connects to your process pipe, and exposes a high-level API for rendering in Chrome from your Node script. If you don't need low-level features, you can build your app as a PWA, which works without a launch bar once installed on ChromeOS. Windows will get Chrome desktop PWA support from Chrome 70+ and Mac from Chrome 72+. Dan also ran a demo on how to run a PWA, with the following steps:

1. Set up Crostini
2. Install the development environment (node, npm, VSCode)
3. Check out a PWA (Squoosh) from GitHub
4. Open it in VSCode
5. Run the web server
6. Open the PWA from the Linux and Android browsers

He also provided guidance on optimizing forms, handling touch interactions and pointer events, and setting up remote debugging.

What does the future look like for ChromeOS?

The Chrome team is working on improving desktop PWA support, including support for keyboard shortcuts, badging for the launch icon, and link capturing. They are also working on low-latency canvas contexts, introduced in Chrome 71 Beta. This context uses OpenGL ES for rasterization and writes directly to the front buffer, which bypasses several steps of the rendering process at the risk of tearing. It is used mainly for highly interactive apps. View the full talk on YouTube.
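For a sense of what Carlo looks like in practice, here is a minimal sketch based on its public README; it assumes Chrome is installed locally and that an index.html exists in the served folder.

const carlo = require('carlo');

(async () => {
  // Launch the locally detected Chrome instance in app mode.
  const app = await carlo.launch();
  app.on('exit', () => process.exit());

  // Serve the current folder as the app's web root.
  app.serveFolder(__dirname);

  // Expose a Node function to the page, bridging Node and Chrome.
  await app.exposeFunction('env', () => process.env);

  // Load index.html (assumed to exist in this folder) into the window.
  await app.load('index.html');
})();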
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications

DuckDuckGo chooses to improve its products without sacrificing user privacy

Amrata Joshi
14 Nov 2018
3 min read
DuckDuckGo, an internet privacy company, empowers users to seamlessly take control of their personal information online, without tradeoffs. DuckDuckGo doesn't store IP addresses, doesn't create unique cookies, and doesn't collect or share any type of personal information.

The improvements

Lately, the company has made some improvements. If you have ever searched on DuckDuckGo, you might have come across an "&atb=" URL parameter in the web address at the top of your browser. This parameter allows DuckDuckGo to anonymously A/B (split) test product changes. For example, users in the A group might get blue links and users in the B group red links, making it easy for the team at DuckDuckGo to measure how different link colors affect usage.

The team at DuckDuckGo also measures engagement with specific events on the page (e.g., when a misspelling message is displayed, whether it is clicked). This allows them to run experiments testing different misspelling messages and use CTR (click-through rate) to determine a message's efficacy. The requests made for improving DuckDuckGo are anonymous, and the information is used only for improving the products.

Similar "atb.js" or "exti" requests are made by the browser extensions and mobile apps, which send only one of these requests per day. This yields an approximate count of the devices that accessed DuckDuckGo, without revealing anything about those devices or the searches users made. These requests are fully encrypted, so nobody except DuckDuckGo can see them, and no personal information is attached to them; DuckDuckGo can never tell what individual people are doing, since everyone is anonymous. The team has developed these systems from scratch instead of relying on third-party services, which is how they keep their privacy promise of not collecting or leaking any personal information.

This anonymity-centered approach might benefit the company greatly, as data breaches at various organizations have been making headlines lately. With daily searches crossing the 30 million mark, the company has already seen 50% growth in the last year, and these improvements are the cherry on the cake. Could DDG possibly pose a real threat to the leading search engine, Google?

Read more about this news on the official DuckDuckGo website.

10 great tools to stay completely anonymous online
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
Google launches a Dataset Search Engine for finding Datasets on the Internet
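As a purely illustrative sketch (not DuckDuckGo's actual implementation), anonymous split testing in the spirit described above can be as simple as storing a cohort label, never a user identifier, and sending only that label with each request. The "variant" parameter and URL below are hypothetical stand-ins for the real "atb" parameter.

// Hypothetical client-side sketch; no user identifier is ever created.
function getCohort() {
  let cohort = localStorage.getItem('experiment-cohort');
  if (!cohort) {
    // Assign once at random; nothing about the user is recorded.
    cohort = Math.random() < 0.5 ? 'a' : 'b';
    localStorage.setItem('experiment-cohort', cohort);
  }
  return cohort;
}

// The server sees only which bucket a request belongs to, so results
// can be aggregated without tracking individuals.
const url = `https://example.com/search?q=test&variant=${getCohort()}`;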