
Tech News - Server-Side Web Development

85 Articles

GitHub addresses technical debt, now runs on Rails 5.2.1

Bhagyashree R
01 Oct 2018
3 min read
Last week, GitHub announced that their main application is now running on Rails 5.2.1. Along with this upgrade, they have also improved the overall codebase and cleaned up technical debt.

How GitHub upgraded to Rails 5.2.1

The upgrade started out as a hobby with no dedicated team assigned. As it made progress and gained traction, it became a priority. Instead of using a long-running branch to upgrade Rails, they added the ability to dual boot the application in multiple versions of Rails. Two lock files were created:

- Gemfile.lock for the current version
- Gemfile_next.lock for the next version

This dual booting enabled the developers to regularly deploy changes for the next version to GitHub without any effect on how production works. This was done by conditionally loading the code:

    if GitHub.rails_3_2?
      # 3.2 code (i.e. production a year and a half ago)
    elsif GitHub.rails_4_2?
      # 4.2 code
    else
      # all 5.0+ future code, ensuring we never accidentally
      # fall back into an old version going forward
    end

To roll out the Rails upgrade, they followed a careful and iterative process:

- The developers first deployed to their testing environment and asked for volunteers from each team to click-test their area of the codebase to find any regressions the test suite missed.
- These regressions were then fixed, and deployment was done during off-hours to a percentage of production servers. During each deploy, data about exceptions and site performance was collected. With this information, they fixed any bugs that came up and repeated those steps until the error rate was low enough to be considered equal to the previous version.
- Finally, they merged the upgrade once they could deploy to full production for 30 minutes at peak traffic with no visible impact.

This process allowed them to deploy 4.2 and 5.2 with minimal customer impact and no downtime.

Key lessons they learned during this upgrade

Upgrade regularly: Upgrading is easier the closer you are to a new version of Rails. This also encourages your team to fix bugs in Rails itself instead of monkey-patching the application.

Keep an upgrade infrastructure: Needless to say, there will always be a new version to upgrade to. To keep up with new versions, add a build that runs against the Rails master branch to catch bugs in Rails and in your application early. This makes upgrades easier and increases your upstream contributions.

Regularly address technical debt: Technical debt refers to the additional rework you and your team have to do because you chose an easy solution now instead of a better approach that would take longer. Refusing to touch working code can become a bottleneck for upgrades. To avoid this, try not to couple your application logic too closely to your framework; the line where your application logic ends and your framework begins should be clear.

Assume that things will break: Upgrading a large, heavily trafficked application like GitHub is not easy. They did face issues with CI, local development, slow queries, and other problems that didn't show up in their CI builds or click testing.

Read the full announcement on the GitHub Engineering blog.

GitHub introduces 'Experiments', a platform to share live demos of their research projects
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
Packt's GitHub portal hits 2,000 repositories


Can an Open Web Index break Google’s stranglehold over the search engine market?

Bhagyashree R
22 Apr 2019
4 min read
Earlier this month, Dirk Lewandowski, Professor of Information Research & Information Retrieval at Hamburg University of Applied Sciences, Germany, published a proposal for building an index of the Web. His proposal aims to separate the infrastructure part of the search engine from the services part.

Search engines are our way to the web, which makes them an integral part of the Web's infrastructure. While there are a significant number of search engines on the market, only a few relevant search engines have their own index, for example, Google, Bing, Yandex, and Baidu. Other search engines that pull results from these, for instance, Yahoo, cannot really be considered search engines in the true sense. The US search engine market is split between Google and Bing at roughly two-thirds to one-third, respectively. In most European countries, Google holds about 90% of the market share.

Highlighting the implications of Google's dominance in the current search engine market, the report reads, "As this situation has been stable over at least the last few years, there have been discussions about how much power Google has over what users get to see from the Web, as well as about anti-competitive business practices, most notably in the context of the European Commission's competitive investigation into the search giant."

The proposal aims to bring plurality to the search engine market, not only in the number of search engine providers but also in the number of search results users get to see when using search engines. The idea is to implement the "missing part of the Web's infrastructure": a searchable index. The author proposes to separate the infrastructure part of the search engine from the services part, allowing a multitude of services, whether existing as search engines or otherwise, to run on a shared infrastructure.

A figure in the paper shows how the public infrastructure crawls the web to index its content and provides an interface to the services built on top of the index. (Credits: arXiv)

The indexing stage is split into basic indexing and advanced indexing. Basic indexing is responsible for providing the data in a form that services built on top of the index can easily and rapidly process. Though services are allowed to do their own further indexing to prepare the documents, the open infrastructure also provides some advanced indexing, which adds information to the indexed documents, for example, semantic annotations. This advanced indexing requires an extensive infrastructure for data mining and processing. Services will be able to decide for themselves to what extent they want to rely on the pre-processing infrastructure provided by the Open Web Index. A common design principle that can be adopted is to allow services a maximum of flexibility.

Many users are supporting this idea. One Redditor said, "I have been wanting this for years... If you look at the original Yahoo Page when Yahoo first started out it attempted to solve this problem. I believe this index could be regionally or language based."

Some others do believe that implementing an open web index will come with its own challenges. "One of the challenges of creating a 'web index' is first creating indexes of each website. 'Crawling' to discover every page of a website, as well as all links to external sites, is labour-intensive and relatively inefficient. Part of that is because there is no 100% reliable way to know, before we begin accessing a website, each and every URL for each and every page of the site. There are inconsistent efforts such as 'site index' pages or the 'sitemap' protocol (introduced by Google), but we cannot rely on all websites to create a comprehensive list of pages and to share it," adds another Redditor.

To read more in detail, check out the paper titled: The Web is missing an essential part of infrastructure: an Open Web Index.

Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Google Cloud Next'19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you


Mastodon 2.7, a decentralized alternative to social media silos, is now out!

Bhagyashree R
21 Jan 2019
2 min read
Yesterday, the Mastodon team released Mastodon 2.7, which comes with major improvements to the admin interface, a new moderation warning system, and more. Mastodon is a free, open-source social network server based on open web protocols like ActivityPub and OStatus. The server aims to provide users with a decentralized alternative to commercial social media silos and returns control of the content distribution channels to the people.

Profile directory

The new profile directory allows users to see active posters on a given Mastodon server and filter them by the hashtags in their profile bio. With the profile directory, users can find people with common interests without having to read through public timelines.

A new moderation warning system

This version comes with a new moderation warning system for Mastodon. Moderators can now inform users when their account is suspended or disabled. They can also send official warnings via e-mail, which are reflected in the moderator interface to keep other moderators up to date.

Improvements in the administration interface

Mastodon 2.7 combines the administration interfaces for known servers and domain blocks into a common area. Admins can see information like the number of accounts known from a particular server, the number of accounts followed from your server, the number of individuals blocked or reported, and so on.

A registration API

A new registration API is introduced, which allows apps to directly accept new registrations from their users, instead of having to send them to a web browser (a sketch of this flow appears at the end of this article). Users still receive a confirmation e-mail when they sign up through the app, which contains an activation link that can open the app.

New commands for managing a Mastodon server

The tootctl command-line utility used for managing a Mastodon server has received two new commands:

- tootctl domains crawl: scans the Mastodon network to discover servers and aggregate statistics about Mastodon's usage.
- tootctl accounts follow: makes the users on your server follow a specified account. This command comes in handy when an administrator needs to change their account.

You can read the full list of improvements in Mastodon 2.7 on its website.

How Dropbox uses automated data center operations to reduce server outage and downtime
Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]
Fortnite server suffered a minor outage, Epic Games was quick to address the issue
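For a feel of the registration flow described above, here is a minimal TypeScript sketch of an app creating an account against Mastodon's REST API. The endpoint, field names, and token handling are assumptions based on Mastodon's public API documentation, not taken from the announcement; the server URL and credentials are invented for illustration.

    // Hypothetical sketch: create an account via Mastodon's registration API.
    // Verify endpoint and fields against your server's API documentation.
    const INSTANCE = "https://mastodon.example"; // hypothetical server

    async function registerUser(appToken: string): Promise<void> {
      const resp = await fetch(`${INSTANCE}/api/v1/accounts`, {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${appToken}`, // app-level token, not a user token
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          username: "newuser",
          email: "newuser@example.com",
          password: "correct horse battery staple",
          agreement: true, // the user accepted the server's terms
          locale: "en",
        }),
      });
      if (!resp.ok) throw new Error(`registration failed: ${resp.status}`);
      // The server still sends a confirmation e-mail; the returned token
      // becomes usable once the user activates the account.
      const { access_token } = await resp.json();
      console.log("provisional access token:", access_token);
    }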

V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more

Bhagyashree R
17 May 2019
3 min read
Yesterday, the team behind V8, Google Chrome's JavaScript and WebAssembly engine, announced the release of V8 7.5 beta. As per V8's release cycle, the stable version will be released in coordination with the Chrome 75 stable release, expected in early June. This release comes with WebAssembly implicit caching, bulk memory operations, JavaScript numeric separators for better readability, and more.

A few updates in V8 7.5 beta

WebAssembly implicit caching

The team is planning to introduce implicit caching of WebAssembly compilation artifacts in Chrome 75, similar to Chromium's JavaScript code cache. Code caching is an important way of optimizing browsers; it reduces the start-up time of commonly visited web pages by caching the result of parsing and compilation. This essentially means that if a user visits the same web page a second time, the already-seen WebAssembly modules will not be compiled again and will instead be loaded from the cache.

WebAssembly bulk memory operations

V8 7.5 will come with a few new WebAssembly instructions for updating large regions of memory or tables. The following are some of these instructions:

- memory.fill: fills a memory region with a given byte.
- memory.copy: copies data from a source memory region to a destination region, even if the regions overlap.
- table.copy: similar to memory.copy, copies from one region of a table to another, even if the regions are overlapping.

JavaScript numeric separators for better readability

The human eye finds it difficult to quickly parse a large numeric literal, especially when it contains long digit repetitions, for instance, 10000000. To improve the readability of long numeric literals, a new feature allows using underscores as separators, creating a visual separation between groups of digits. The feature works with both integer and floating-point literals (see the sketch at the end of this article).

Streaming script source data directly from the network

In previous Chrome versions, the script source data coming in from the network always had to go to the Chrome main thread first before it was forwarded to the streamer. This made the streaming parser wait for data that had already arrived from the network but hadn't yet been forwarded to the streaming task because it was blocked on the main thread. Starting from Chrome 75, V8 will be able to stream scripts directly from the network into the streaming parser, without waiting for the Chrome main thread.

To know more, check out the official announcement on the V8 blog.

Electron 5.0 ships with new versions of Chromium, V8, and Node.js
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
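To make the numeric-separator feature concrete, here is a small TypeScript sketch (the syntax is plain JavaScript and behaves identically there); the values are invented for illustration.

    // Numeric separators: underscores group digits purely for readability.
    // The parsed values are identical to their unseparated forms.
    const budget = 1_000_000_000;     // easier to read than 1000000000
    const mask = 0b1010_0001_1000;    // works in binary literals too
    const pi = 3.141_592;             // and in floating-point literals

    console.log(budget === 1000000000); // true: separators don't change the value
    console.log(pi === 3.141592);       // true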


gRPC, a CNCF backed JavaScript library and an alternative to the REST paradigm, is now generally available

Bhagyashree R
25 Oct 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) announced the general availability of gRPC-Web, which means that it is now stable enough for production use. It is a JavaScript client library that allows web apps to communicate directly with backend gRPC services, without the need for an intermediate HTTP server. This serves as an alternative to the REST paradigm of web development.

What is gRPC?

[Architecture diagram omitted. Source: gRPC]

Initially developed at Google, gRPC is an open source remote procedure call (RPC) framework that can run in any environment. gRPC allows a client application to directly call methods on a server application on a different machine as if it were a local object. gRPC is based on the idea of defining a service and specifying the methods that can be called remotely, along with their parameter and return types. The server then implements this interface and runs a gRPC server to handle client calls. On the client side, a stub provides the same methods as the server.

One of the advantages of using gRPC is that gRPC clients and servers can be written in any of the languages supported by gRPC. So, for instance, you can easily create a gRPC server in Java with clients in Go, Python, or Ruby.

How gRPC-Web works

With gRPC-Web, you can define a service "contract" between client web applications and backend gRPC servers using .proto definitions and auto-generate client JavaScript. Here is how gRPC-Web works:

- Define the gRPC service: The first step is to define the gRPC service. Like other gRPC services, gRPC-Web uses protocol buffers to define its RPC methods and their request and response message types.
- Run the server and proxy: You need a gRPC server that implements the service interface and a gateway proxy that allows the client to connect to the server.
- Write the JavaScript client: Once the server and gateway are up and running, you can start making gRPC calls from the browser (see the sketch at the end of this article).

What are the advantages of using gRPC-Web?

Using gRPC-Web eliminates some tasks from the development process:

- Creating custom JSON serialization and deserialization logic
- Wrangling HTTP status codes
- Content type negotiation

The following are its advantages:

End-to-end gRPC: gRPC-Web allows you to officially remove the REST component from your stack and replace it with pure gRPC. Replacing REST with gRPC will help in scenarios where a client request goes to an HTTP server, which in turn interacts with five backend gRPC services.

Tighter coordination between frontend and backend teams: As the entire RPC pipeline is defined using Protocol Buffers, you no longer need to have your "microservices teams" alongside your "client team." The interaction between the client and the backend is just one more gRPC layer among others.

Generate client libraries easily: With gRPC-Web, the server that interacts with the "outside" world is now a gRPC server instead of an HTTP server. This means that all of your service's client libraries can be gRPC libraries. If you need client libraries for Ruby, Python, Java, and 4 other languages, you no longer have to write HTTP clients for all of them.

You can read CNCF's official announcement on its website.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
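As a sketch of the client-side step described above, here is a minimal TypeScript example. It assumes a hypothetical Greeter service whose stubs (GreeterClient, HelloRequest) were generated from a .proto file by the protoc gRPC-Web plugin; the service name, methods, import paths, and proxy URL are invented for illustration.

    // Hypothetical stubs generated from greeter.proto via protoc-gen-grpc-web.
    import { GreeterClient } from "./generated/GreeterServiceClientPb";
    import { HelloRequest } from "./generated/greeter_pb";

    // The URL points at the gateway proxy sitting in front of the backend
    // gRPC server, as described in "Run the server and proxy" above.
    const client = new GreeterClient("http://localhost:8080");

    const request = new HelloRequest();
    request.setName("gRPC-Web"); // setters are generated from the .proto fields

    // Unary call: the callback receives either an error or the typed response.
    client.sayHello(request, {}, (err, response) => {
      if (err) {
        console.error(`gRPC error ${err.code}: ${err.message}`);
        return;
      }
      console.log("greeting:", response.getMessage());
    });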


Netlify raises $30 million for a new ‘Application Delivery Network’, aiming to replace servers and infrastructure management

Savia Lobo
11 Oct 2018
3 min read
On Tuesday, Netlify, a San Francisco based company, announced that it has raised $30 million in a series B round of funding for a new platform named 'Application Delivery Network', designed specifically to assist web developers in building newer applications. The funding was led by Kleiner Perkins' Mamoon Hamid, with Andreessen Horowitz and the founders of Slack, Yelp, GitHub, and Figma participating.

Founded in 2015, Netlify provides an all-in-one workflow to build, deploy, and manage modern web projects. The new platform will enable all content and applications to be created directly on a global network, bypassing the need to ever set up or manage servers.

Vision behind the global 'Application Delivery Network'

Netlify has helped many organizations do away with web servers and the infrastructure they require; it also replaces the need for a CDN, and thus a lot of servers. To implement the new architecture, Netlify provides developers with a git-centric workflow that supports APIs and microservices. Netlify's Application Delivery Network removes the last dependency on origin infrastructure, allowing companies to host the entire application globally using APIs and microservices.

Mathias Biilmann, Netlify founder and CEO, said that more devices bring additional complications. He further adds, "Customers have come to us with AWS environments that have dozens or even hundreds of them for a single application. Our goal is to remove the requirement for those servers completely. We're not trying to make managing infrastructure easy. We want to make it totally unnecessary."

Investors' take

Talking about the investment in Netlify, Mamoon Hamid, Managing Member and General Partner at the venture capital firm Kleiner Perkins, said, "In a sense, they are completely rethinking how the modern web works. But the response to what they are doing has been overwhelming. Most of the top projects in this developer space have already migrated their sites: React, Vue, Gatsby, Docker, and Kubernetes are all Netlify powered. The early traction really shows they hit a nerve with the developer community."

To top it off, Chris Coyier, CSS expert and co-founder of CodePen, says, "This is where the web is going. Netlify is just bringing it to us all a lot faster. With all the innovation in the space, this is an exciting time to be a developer."

What users say about Netlify

In a discussion thread on Hacker News, users love how Netlify lends web developers a helping hand in their day-to-day web application tasks. Some of the features mentioned by users include:

- Netlify provides users with forms, lambdas, and very easy testing by just pushing to another git branch.
- It provides the ability to publish using a simple `git push` and does all the rest of the work, including asset minification and bundling.
- Netlify connects to GitHub and rebuilds your site automatically when a change is made to the master branch. Users just have to connect their GitHub account through Netlify's UI.

To know more about this news in detail, read Netlify's official announcement.

How to build a real-time data pipeline for web developers – Part 1 [Tutorial]
How to build a real-time data pipeline for web developers – Part 2 [Tutorial]
Google wants web developers to embrace AMP. Great news for users, more work for developers

Blazor 0.5.0 is now available!

Natasha Mathur
28 Jul 2018
3 min read
Blazor 0.5.0 is here. Blazor is an experimental .NET client-side web framework that uses C# and HTML and runs in the browser via WebAssembly, with component logic and DOM interactions occurring in the same process. The latest release includes features such as server-side Blazor, a new startup model, and early support for in-browser debugging, among other updates. Let's discuss the highlights of the Blazor 0.5.0 release.

Server-side Blazor

The Blazor 0.5.0 release makes it possible to adopt an out-of-process model for Blazor by stretching it over a network connection, so that you can run Blazor on the server with ease. With Blazor 0.5.0, you can run your Blazor components server-side on .NET Core, while UI updates, event handling, and JavaScript interop calls are handled over a SignalR connection. You can also use JavaScript interop libraries while using server-side Blazor.

New startup model

Blazor 0.5.0 projects now use a new startup model, similar to the startup model in ASP.NET Core. Each Blazor project consists of a Startup class with a ConfigureServices method for configuring the services of your Blazor app, and a Configure method for configuring the root components of the application.

Calling .NET from JavaScript

A new feature in Blazor 0.5.0 lets you call .NET instance methods from JavaScript. You do this by passing the .NET instance to JavaScript, wrapped in a DotNetObjectRef instance. The .NET instance is then passed by reference to JavaScript, which allows you to invoke .NET instance methods on it using the invokeMethod or invokeMethodAsync functions (see the sketch at the end of this article).

Adding Blazor to an HTML file

In earlier releases, the project build modified index.html to replace the blazor-boot script tag with a real script tag, which made it difficult to use Blazor in arbitrary HTML files. This mechanism has been replaced in Blazor 0.5.0. For client-side projects, you add a script tag referencing the _framework/blazor.webassembly.js script (which is generated as part of the build); for server-side projects, you add a script reference to _framework/blazor.server.js.

Support for in-browser debugging

Blazor 0.5.0 provides very basic debugging support in Chrome for client-side Blazor apps running on WebAssembly. Although the debugging support is limited and unpolished, it does show the basic debugging infrastructure.

For more info on the Blazor 0.5.0 updates, check out the official release notes.

Masonite 2.0 released, a Python web development framework
Node 10.0.0 released, packed with exciting new features
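To give a feel for the JavaScript side of the interop described above, here is a hedged TypeScript sketch. It assumes a .NET object reference (produced by wrapping a C# instance in DotNetObjectRef) has been passed to a JavaScript function, and that the instance exposes invokable methods named GetStatus and GetCount; the interface shape and method names are invented for illustration, and the exact API surface of the experimental 0.5.0 release may differ.

    // Hypothetical sketch of calling a .NET instance method from JavaScript.
    // `dotNetRef` is a .NET object reference passed in from Blazor after being
    // wrapped in DotNetObjectRef on the C# side (names invented for illustration).
    interface DotNetObjectRef {
      invokeMethod<T>(methodName: string, ...args: unknown[]): T;
      invokeMethodAsync<T>(methodName: string, ...args: unknown[]): Promise<T>;
    }

    export async function showStatus(dotNetRef: DotNetObjectRef): Promise<void> {
      // Asynchronous call into the .NET instance; the method name must match
      // an invokable method on the C# object that was passed by reference.
      const status = await dotNetRef.invokeMethodAsync<string>("GetStatus");
      console.log("status from .NET:", status);

      // Synchronous variant, usable when running client-side on WebAssembly.
      const count = dotNetRef.invokeMethod<number>("GetCount");
      console.log("count from .NET:", count);
    }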


Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, Google announced that it has teamed up with the creator of the Robots Exclusion Protocol (REP), Martijn Koster, and other webmasters to make the 25-year-old protocol an internet standard. The REP, better known as robots.txt, has now been submitted to the IETF (Internet Engineering Task Force). Google has also open sourced its robots.txt parser and matcher as a C++ library.

https://twitter.com/googlewmc/status/1145634145261051906

REP was created back in 1994 by Martijn Koster, a software engineer known for his contributions to internet searching. Since its inception, it has been widely adopted by websites to indicate whether web crawlers and other automatic clients are allowed to access the site or not. When an automatic client wants to visit a website, it first checks robots.txt, which shows something like this:

    User-agent: *
    Disallow: /

The User-agent: * statement means that the rules apply to all robots, and Disallow: / means that a robot is not allowed to visit any page of the site (a toy matcher illustrating how such rules are applied appears at the end of this article).

Despite being widely used on the web, REP is still not an internet standard. With no set-in-stone rules, developers have interpreted this "ambiguous de-facto protocol" differently over the years. Also, it has not been updated since its creation to address modern corner cases. The proposed REP draft is a standardized and extended version of REP that gives publishers fine-grained control over what they would like to be crawled on their site and potentially shown to interested users.

The following are some of the important updates in the proposed REP:

- It is no longer limited to HTTP and can be used with any URI-based transfer protocol, for instance, FTP or CoAP.
- Developers need to parse at least the first 500 kibibytes of a robots.txt. This ensures that connections are not kept open for too long, avoiding unnecessary strain on servers.
- It defines a new maximum caching time of 24 hours, after which crawlers cannot use robots.txt. This allows website owners to update their robots.txt whenever they want and also avoids crawlers overloading servers with robots.txt requests.
- It also defines a provision for cases when a previously accessible robots.txt file becomes inaccessible because of server failures. In such cases, the disallowed pages will not be crawled for a reasonably long period of time.

The updated REP standard is currently in its draft stage, and Google is now seeking feedback from developers. It wrote, "we uploaded the draft to IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right."

To know more in detail, check out the official announcement by Google. Also, check out the proposed REP draft.

Do Google Ads secretly track Stack Overflow users?
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
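As a toy illustration of how the rules above are applied, here is a simplified TypeScript sketch. This is not Google's open-sourced C++ parser; real REP matching has more rules (Allow vs. Disallow precedence, wildcards, longest-match semantics), and the types and examples below are invented for illustration.

    // Toy REP check: does a rule group for a matched User-agent allow `path`?
    interface RobotsGroup {
      userAgent: string;  // e.g. "*" applies to all robots
      disallow: string[]; // path prefixes the robot must not fetch
    }

    function isAllowed(group: RobotsGroup, path: string): boolean {
      // "Disallow: /" blocks the whole site; an empty rule blocks nothing.
      return !group.disallow.some(
        (prefix) => prefix !== "" && path.startsWith(prefix),
      );
    }

    // The example from the article: every robot is barred from every page.
    const blockAll: RobotsGroup = { userAgent: "*", disallow: ["/"] };
    console.log(isAllowed(blockAll, "/index.html")); // false

    const partial: RobotsGroup = { userAgent: "*", disallow: ["/private/"] };
    console.log(isAllowed(partial, "/public/page"));  // true
    console.log(isAllowed(partial, "/private/data")); // false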


Chromium developers to introduce a “Never-Slow Mode”, which sets limits on resource usage

Bhagyashree R
06 Feb 2019
2 min read
Today, Alex Russell, a Google software engineer, submitted a patch called 'Never-Slow Mode' for Chromium. With this patch, various per-interaction and resource limits will be enforced to keep the main thread clean. Russell's patch is very similar to a bug that Craig Hockenberry, a partner at The Iconfactory, reported against WebKit last week, in which he suggested adding limits on how much JavaScript code a website can load to prevent resource abuse of users' computers.

Here are some of the changes that would be made under this patch:

- Large scripts will be blocked.
- document.write() will be turned off.
- Client-Hints will be enabled pervasively.
- Resources without 'Content-Length' set will be buffered.
- Budgets will be reset on interaction.
- Long script tasks (those taking more than 200 ms) will pause all page execution until the next interaction.

Budgets will be set for certain resource types, such as script, font, CSS, and images, with the proposed caps measured in wire size. [Table of suggested limits omitted. Source: Chromium] (A sketch at the end of this article illustrates the idea of a per-type byte budget.)

Like Hockenberry's suggestion, this patch has drawn both negative and positive feedback from developers. Some Hacker News users believe that this will prevent web bloat. One user commented, "It's probably in Google's interest to limit web bloat that degrades UX". Another user said, "I imagine they're trying to encourage code splitting."

According to another Hacker News user, hard-coded limits will probably not work: "Hardcoded limits are the first tool most people reach for, but they fall apart completely when you have multiple teams working on a product, and when real-world deadlines kick in. It's like the corporate IT approach to solving problems — people can't break things if you lock everything down. But you will make them miserable and stop them doing their job".

You can check out the patch submitted by Russell at Chromium Gerrit.

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
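To make the idea of per-resource-type budgets concrete, here is a purely illustrative TypeScript sketch. The resource types, byte limits, and enforcement policy are invented; they are not the numbers from Russell's patch, which enforces its limits inside the browser engine rather than in page script.

    // Illustrative per-type byte budgets (invented numbers, not the patch's).
    const budgets: Record<string, number> = {
      script: 500 * 1024, // wire (compressed) bytes allowed for all scripts
      font: 100 * 1024,
      css: 200 * 1024,
      image: 2 * 1024 * 1024,
    };

    const spent: Record<string, number> = {};

    // Returns true if loading `bytes` more of `type` stays within budget.
    // A Never-Slow-style policy would block the load outright and reset the
    // counters on user interaction, rather than merely reporting.
    function withinBudget(type: string, bytes: number): boolean {
      const used = (spent[type] ?? 0) + bytes;
      if (used > (budgets[type] ?? Infinity)) return false;
      spent[type] = used;
      return true;
    }

    console.log(withinBudget("script", 300 * 1024)); // true: 300 KiB of 500 KiB
    console.log(withinBudget("script", 300 * 1024)); // false: would exceed budget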

Rocket 0.4 released with typed URIs, agnostic database support, request-local state and more

Amrata Joshi
10 Dec 2018
4 min read
Last week, the team at Rocket released Rocket 0.4, a web framework for Rust that focuses on usability, security, and performance. With Rocket, it is possible to write secure web applications quickly and without sacrificing flexibility or type safety.

Features of Rocket 0.4

Typed URIs

Rocket 0.4 comes with a uri! macro that helps build URIs to routes in the application in a robust, type-safe, and URI-safe manner. Mismatched types or route parameters are caught at compile time, and changes to route URIs are automatically reflected in the generated URIs.

ORM-agnostic database support

Rocket 0.4 comes with built-in, ORM-agnostic support for databases. It provides a procedural macro that connects a Rocket application to databases through connection pools. Databases are configured individually through configuration mechanisms like the Rocket.toml file or environment variables.

Request-local state

Rocket 0.4 features request-local state, which is local to a given request, is carried along with the request, and is dropped once the request is completed. Request-local state can be used whenever a request is available, and because it is cached, stored data can be reused. It is particularly useful for request guards, which may be invoked multiple times during the routing and processing of a single request.

Live template reloading

In this version of Rocket, when an application is compiled in debug mode, templates are automatically reloaded when modified. To see template changes, simply refresh; there is no need to rebuild the application.

Major improvements

- Rocket 0.4 introduces SpaceHelmet, which provides a typed interface for HTTP security headers.
- This release features mountable static-file serving via StaticFiles.
- Cookie changes are now automatically tracked and propagated to the client.
- Revamped query string handling allows any number of dynamic query segments.
- Transforming data guards can transform incoming data before processing it, via an implementation of the FromData::transform() method.
- Template::custom() helps in customizing template engines, including registering filters and helpers.
- Applications can be launched without a working directory.
- Log messages now refer to routes by name.
- A default catcher for 504: Gateway Timeout has been added.
- All derives, macros, and attributes are individually documented in rocket_codegen.
- Config::root_relative() has been added to retrieve paths relative to the configuration file.
- Private cookies are now set to HttpOnly and are given an expiration date of 1 week by default.

What can be expected from Rocket 0.5?

- Support for stable Rust: Rocket 0.5 will run and compile on stable versions of the Rust compiler.
- Asynchronous request handling: Rocket 0.5 will support asynchronous request handling.
- Multipart form support: The lack of built-in multipart form support makes handling file uploads and other such submissions difficult; Rocket 0.5 will make handling multipart forms easy.
- Stronger CSRF and XSS protection: Rocket 0.5 will protect against CSRF using robust techniques and will add support for automatic, browser-based XSS protection.

Users have been giving good feedback on the Rocket 0.4 release and are highly appreciative of the effort the Rocket team has put in. A user commented on Hacker News, "This release isn't just about new features or rewrites. It's about passion for one's work. It's about grit. It's about uncompromising commitment to excellence." Sergio Benitez, a computer science PhD student at Stanford, has been widely appreciated for his efforts towards the development of Rocket 0.4. A Hacker News user commented, "While there will ever be only one Sergio in the world, there are many others in the broader Rust community who are signaling many of the same positive qualities."

This release has also been appreciated for its Flask-like feel and for its ability to use code generation and functions' type signatures to automatically check incoming parameters. The fact that the next release will be asynchronous has created a lot of curiosity in the developer community, and users are now looking forward to Rocket 0.5.

Read more about Rocket 0.4 in the official release notes.

Use Rust for web development [Tutorial]
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Kotlin based framework, Ktor 1.0, released with features like sessions, metrics, call logging and more