
Tech News

Snyk’s JavaScript frameworks security report 2019 shares the state of security for React, Angular, and other frontend projects

Bhagyashree R
04 Nov 2019
6 min read
Last week, Snyk, an open-source security platform, published the State of JavaScript frameworks security report 2019. The report mainly focuses on security vulnerabilities and risks in the React and Angular ecosystems. It also covers security practices in other common JavaScript frontend projects, including Vue.js, Bootstrap, and jQuery.

Key takeaways from the State of JavaScript frameworks security report

Security vulnerabilities in core Angular and React projects

In the report, the ‘react’, ‘react-dom’, and ‘prop-types’ libraries were considered the core modules of React, since they often form the foundation of React web applications. Snyk’s research team found three cross-site scripting (XSS) vulnerabilities in total: two in ‘react’ and one in ‘react-dom’. The two vulnerabilities in the ‘react’ library were present in fairly old versions: the 0.5.x versions and versions prior to 0.14. The vulnerability in ‘react-dom’, however, was found in a recent release, version 16.x, and its occurrence depends on other preconditions as well, such as using the library within a server-rendering context. The vulnerabilities’ Common Vulnerability Scoring System (CVSS) scores ranged from 6.5 to 7.1, which means they were all of medium to high severity.

Coming to Angular, Snyk found 19 vulnerabilities across six different release branches of Angular 1.x (AngularJS), which is no longer maintained. Angular 1.5 had the highest number, with seven vulnerabilities in total: three of high severity and four of medium severity. The good news is that with every new version, the vulnerabilities have decreased in both severity and count.

Security risks of indirect dependencies

React and Angular projects are often generated with a scaffolding tool that provides a boilerplate: in React, the ‘create-react-app’ npm package; in Angular, the ‘@angular/cli’ npm package. In sample React and Angular projects created using these scaffolding tools, both were found to include development dependencies with vulnerabilities; the good news is that neither had any production dependency security issues. “It’s worthy to note that Angular relies on 952 dependencies, which contain a total of two vulnerabilities; React relies on 1257 dependencies, containing three vulnerabilities and one potential license compatibility issue,” the report states.

[Table: security vulnerabilities found in the sample projects. Source: Snyk]

Security vulnerabilities in the Angular module ecosystem

For the purposes of this study, the Snyk research team divided the Angular ecosystem into three areas: Angular ecosystem modules, malicious versions of modules, and developer tooling. The Angular module ecosystem has the following vulnerable modules:

[Table: vulnerable Angular modules. Source: Snyk]

As for malicious versions of modules, the report lists malicious versions of the ‘angular-bmap’, ‘ng-ui-library’, and ‘ngx-pica’ modules. The ‘angular-bmap’ 0.0.9 version included malicious code that collected sensitive password and credit card information from forms and sent it to an attacker-controlled URL. Thankfully, this version has since been taken down from the npm registry. The ‘ng-ui-library’ 1.0.987 version has the same malicious code as ‘angular-bmap’ 0.0.9, yet it is still maintained. The third module, ‘ngx-pica’ (versions 1.1.4 to 1.1.6), also contains the same malicious code as the above two. In developer tooling, the ‘angular-http-server’ module was found vulnerable to directory traversal twice.

Security vulnerabilities in the React module ecosystem

In React’s case, Snyk found four malicious packages: ‘react-datepicker-plus’, ‘react-dates-sc’, ‘awesome_react_utility’, and ‘reactserver-native’. These contain malicious code that harvests credit card and other sensitive information and attacks compromised modules in the React ecosystem. Notable vulnerable modules found in React’s ecosystem during this study:

- The ‘react-marked-markdown’ module has a high-severity XSS vulnerability with no fix available as of now.
- The ‘preact-render-to-string’ library is vulnerable to XSS in all versions prior to 3.7.2.
- The ‘react-tooltip’ library is vulnerable to XSS attacks in all versions prior to 3.8.1.
- The ‘react-svg’ library has a high-severity XSS vulnerability, disclosed by security researcher Ron Perris, affecting all versions prior to 2.2.18.
- The ‘mui-datatables’ library has a CSV injection vulnerability.

“When we track all the vulnerable React modules we found, we count eight security vulnerabilities over the last three years with two in 2017, six in 2018 and two up until August 2019. This calls for responsible usage of open source and making sure you find and fix vulnerabilities as quickly as possible,” the report suggests.

Along with listing the security vulnerabilities in React and Angular, the report also assesses the overall security posture of the two projects, including secure coding conventions, built-in security capabilities, responsible disclosure policies, and dedicated security documentation.

Vue.js security

In total, four vulnerabilities were detected in the Vue.js core project, spanning December 2017 to August 2018: three of medium severity and one low-severity regular expression denial of service vulnerability. As for Vue’s module ecosystem, the report lists the following vulnerable modules:

- The ‘bootstrap-vue’ library has a high-severity XSS vulnerability, disclosed in January 2019, that affects all versions prior to 2.0.0-rc.12.
- The ‘vue-backbone’ library had a malicious version published.

Bootstrap security

The Snyk research team tracked a total of seven XSS vulnerabilities in Bootstrap. Of those seven, three were disclosed in 2019 for recent Bootstrap v3 versions, and three were disclosed in 2018, one of which affects the newer 4.x Bootstrap release. All of these vulnerabilities have security fixes and an upgrade path for users to remediate the risks. Among the vulnerable modules in the Bootstrap ecosystem are:

- The ‘bootstrap-markdown’ library, which includes an unfixed XSS vulnerability affecting all versions.
- The ‘bootstrap-vuejs’ library, which has a high-severity XSS vulnerability affecting all versions prior to bootstrap-vue 2.0.0-rc.12.
- The ‘bootstrap-select’ library, which also includes a high-severity XSS vulnerability.

This article touched upon some of the key findings of the report. Check out the full report by Snyk to know more in detail.
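Most of the XSS issues above enter React codebases through explicit escape hatches rather than ordinary rendering, since React escapes interpolated values by default. The sketch below illustrates the general pattern only; it is not code from the report, and the component and prop names are hypothetical.

```jsx
import React from 'react';

// Unsafe: injecting untrusted markup bypasses React's default escaping.
// If `html` contains e.g. <img src=x onerror="alert(1)">, it executes.
function UnsafeComment({ html }) {
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}

// Safe: interpolated strings are escaped by React before rendering,
// so the same payload is displayed as inert text.
function SafeComment({ text }) {
  return <div>{text}</div>;
}
```

For the dependency-level issues, scanning is typically automated: `npm audit` checks a project against the npm advisory database, and Snyk's own `snyk test` CLI checks it against the vulnerability database behind this report.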

TensorFlow.js: Architecture and applications

Bhagyashree R
05 Feb 2019
4 min read
In a paper published last month, Google developers explained the design, API, and implementation of TensorFlow.js, the JavaScript implementation of TensorFlow. TensorFlow.js was first introduced at the TensorFlow Dev Summit 2018. It is essentially the successor of deeplearn.js, which was released in August 2017 and is now named TensorFlow.js Core. Google’s motivation behind creating TensorFlow.js was to put machine learning into the hands of web developers, who generally do not have much experience with it. It also aims to allow experienced ML users and teaching enthusiasts to easily migrate their work to JS.

The TensorFlow.js architecture

TensorFlow.js, as the name suggests, is based on TensorFlow, with a few exceptions specific to the JS environment. The library comes with the following two sets of APIs:

- The Ops API facilitates lower-level linear algebra operations such as matrix multiplication, tensor addition, and so on.
- The Layers API, similar to the Keras API, provides developers with high-level model building blocks and best practices, with an emphasis on neural networks.

[Diagram: the TensorFlow.js architecture. Source: TensorFlow.js]

TensorFlow.js backends

To support device-specific kernel implementations, TensorFlow.js has a concept of backends. It currently supports three: the browser, WebGL, and Node.js. Two rising web standards, WebAssembly and WebGPU, will also be supported as backends in the future.

To utilize the GPU for fast parallelized computations, TensorFlow.js relies on WebGL, a cross-platform web standard that provides low-level 3D graphics APIs. Among the three TensorFlow.js backends, the WebGL backend has the highest complexity.

With the introduction of Node.js and event-driven programming, the use of JS in server-side applications has grown over time. Server-side JS has full access to the filesystem, the native operating system kernel, and existing C and C++ libraries. To support the server-side use cases of machine learning in JavaScript, TensorFlow.js comes with a Node.js backend that binds to the official TensorFlow C API using the N-API.

As a fallback, TensorFlow.js provides a slower CPU implementation in plain JS. This fallback can run in any execution environment and is used automatically when the environment has no access to WebGL or the TensorFlow binary.

Current applications of TensorFlow.js

Since its launch, TensorFlow.js has seen applications in various domains. Here are some of the interesting examples the paper lists:

Gestural Interfaces

TensorFlow.js is being used in applications that take gestural inputs with the help of a webcam. Developers are using the library to build applications that translate sign language to speech, enable individuals with limited motor ability to control a web browser with their face, and perform real-time facial recognition and pose detection.

Research dissemination

The library has enabled ML researchers to make their algorithms more accessible to others. For instance, the Magenta.js library, developed by the Magenta team, provides in-browser access to generative music models. Porting to the web with TensorFlow.js has increased the visibility of their work with their target audience, namely musicians.

Desktop and production applications

Beyond web development, JavaScript has been used to build desktop and production applications. Node Clinic, an open-source performance profiling tool, recently integrated a TensorFlow.js model to separate CPU usage spikes caused by the user from those caused by Node.js internals. Another example is Mood.gg Desktop, a desktop application powered by Electron, a popular JavaScript framework for writing cross-platform desktop apps. With the help of TensorFlow.js, Mood.gg detects which character the user is playing in the game Overwatch by looking at the user’s screen, and then plays a custom soundtrack from a music streaming site that matches the playing style of the in-game character.

Read the paper, Tensorflow.js: Machine Learning for the Web and Beyond, for more details.
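To make the two API levels described above concrete, here is a minimal sketch: a one-layer model built with the Layers API, plus a direct matrix multiplication with the Ops API. It is an illustrative example, not code from the paper.

```js
import * as tf from '@tensorflow/tfjs';

async function main() {
  // Layers API: Keras-style, high-level model building blocks.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

  // Fit y = 2x - 1 on a few points, then predict for x = 4.
  const xs = tf.tensor2d([-1, 0, 1, 2, 3], [5, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5], [5, 1]);
  await model.fit(xs, ys, { epochs: 200 });
  model.predict(tf.tensor2d([4], [1, 1])).print(); // close to 7

  // Ops API: the lower-level linear algebra the layers sit on.
  const a = tf.tensor2d([[1, 2], [3, 4]]);
  const b = tf.tensor2d([[5, 6], [7, 8]]);
  a.matMul(b).print();
}

main();
```

The same script runs unchanged against the WebGL backend in a browser or the TensorFlow C bindings under Node.js (via @tensorflow/tfjs-node), which is the backend portability the paper describes.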

Introducing Howler.js, a JavaScript audio library with full cross-browser support

Bhagyashree R
01 Nov 2018
2 min read
Developed by GoldFire Studios, Howler.js is an audio library for the modern web that makes working with audio in JavaScript easy and reliable across all platforms. It defaults to the Web Audio API and falls back to HTML5 Audio to provide support for all browsers and platforms, including IE9 and Cordova. Originally developed for an HTML5 game engine, it can be used just as well for any other audio-related function in web applications.

Features of Howler.js

- Single API for all audio needs: It provides a simple and consistent API to make it easier to build audio experiences in your application.
- Audio sprites: For more precise playback and lower resource use, you can define and control segments of files with audio sprites.
- Supports all codecs: It supports codecs such as MP3, MPEG, OPUS, OGG, OGA, WAV, AAC, CAF, M4A, MP4, WEBA, WEBM, DOLBY, and FLAC.
- Auto-caching for improved performance: It automatically caches loaded sounds so they can be reused on subsequent calls, saving bandwidth and improving performance.
- Modular architecture: Its modular architecture helps you easily use and extend the library to add custom features.

Which browsers does it support?

Howler.js is compatible with the following:

- Google Chrome 7.0+
- Internet Explorer 9.0+
- Firefox 4.0+
- Safari 5.1.4+
- Mobile Safari 6.0+
- Opera 12.0+
- Microsoft Edge

Read more about Howler.js on its official website and also check out its GitHub repository.
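As a quick illustration of the single-API and audio-sprite features, here is a minimal sketch using Howler's documented Howl object; the file names and sprite offsets are made up for the example.

```js
import { Howl } from 'howler';

// One audio file, with two named segments defined as an audio sprite.
// Sprite entries are [start_ms, duration_ms]; these values are hypothetical.
const sound = new Howl({
  src: ['sounds.webm', 'sounds.mp3'], // first format the browser can play wins
  sprite: {
    laser: [0, 500],
    explosion: [600, 1200],
  },
});

// The call is identical whether Howler picked Web Audio or the
// HTML5 Audio fallback for the current browser.
sound.play('laser');
```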

Plotly releases Dash DAQ: a UI component library for data acquisition in Python

Natasha Mathur
02 Aug 2018
2 min read
Earlier this week, Plotly released Dash DAQ, a modern UI component library that helps with data acquisition in Python. A data acquisition (DAQ) system helps collect, store, and distribute information. Dash DAQ is built on top of Plotly’s Dash, a Python framework for building analytical web applications without requiring the use of JavaScript.

Dash DAQ consists of 16 components, which are used for building user interfaces capable of controlling and reading scientific instruments. To know more about their usage and configuration options, check out the official Dash DAQ components page. You can use Dash DAQ with the Python drivers provided by instrument vendors, or write your own drivers with PySerial, PyUSB, or PyVISA.

Dash DAQ is priced at $1,980, as it is built with research labs in mind and is not currently aimed at general Python users. To install Dash DAQ, you have to purchase it first; after the purchase, a download page automatically appears from which you can download it. Only one Dash DAQ library is allotted per developer. The installation steps are given on the official Dash DAQ installation page.

A variety of apps have already been built using Dash DAQ. Here are some examples:

- Wireless Arduino Robot in Python, an app that wirelessly controls Sparki, an Arduino-based robot. Using Dash DAQ gives it clean, intuitive virtual controls for building GUIs for your hardware.
- Robotic Arm in Python, an app that lets you operate Robotic Arm Edge. Dash DAQ’s GUI components allow you to interface with all the robot’s motors and its LED. Users can even do it from their mobile device, enjoying the experience of a real remote control!
- Ocean Optics Spectrometer in Python, an app that lets users interface with an Ocean Optics spectrometer. Here Dash DAQ offers interactive UI components written in Python, allowing you to read and control the instrument in real time.

Apart from these few examples, there are many more applications that the developers at Plotly have built using Dash DAQ.

Java 11 is here with TLS 1.3, Unicode 11, and more updates

Prasad Ramesh
26 Sep 2018
3 min read
After the first release candidate last month, Java 11 is now generally available. The GA version is the first release with long-term support (LTS). The new features include nest-based access control, a new garbage collector, support for Unicode 11, and TLS 1.3.

New features in Java 11

Some of the new features in Java 11 include nest-based access control, dynamic class-file constants, and a no-op garbage collector called Epsilon. Let’s look at these features in detail.

Nest-based access control

‘Nests’ are introduced as an access-control context that aligns with the existing notion of nested types in Java. Classes that are logically part of the same code entity but are compiled to distinct files can access each other’s private members without requiring compilers to insert bridge methods. Two members of a nest are described as ‘nestmates’. Nests do not apply to larger scales of access control such as modules.

Dynamic class-file constants

The existing Java class-file format is extended to support a new constant-pool form called CONSTANT_Dynamic. Loading this new form delegates its creation to a bootstrap method, in the same way that linking an invokedynamic call site delegates linkage to a bootstrap method. The aim is to reduce the cost and disruption of creating new forms of materializable class-file constants, giving broader options to language designers and compiler implementors.

Epsilon, a no-op garbage collector

Epsilon is a new experimental garbage collector in Java 11 that handles memory allocation but does not actually reclaim any memory. It works by implementing linear allocation in a single contiguous chunk of memory. The JVM shuts down when the available Java heap is exhausted.

Added support for Unicode 11

Java 11 brings the upgraded Unicode support to existing platform APIs. The classes mainly affected are:

- In the java.lang package: Character and String
- In the java.awt.font package: NumericShaper
- In the java.text package: Bidi, BreakIterator, and Normalizer

This upgrade includes the Unicode 9 changes and adds a total of 16,018 characters and ten new scripts.

Flight recorder

The flight recorder in Java 11 is a low-overhead data collection framework for troubleshooting Java applications and the HotSpot JVM.

TLS 1.3

TLS 1.3 was recently standardized and is the latest version of the Transport Layer Security protocol. TLS 1.3 is not directly compatible with previous versions. The goal here is not to support every feature of TLS 1.3.

Features deprecated

Some features have also been removed from Java 11. Applications depending on the Java EE and CORBA modules now need to include these modules explicitly. The Nashorn JavaScript engine and the Pack200 tools and API have been deprecated. For a complete list of features and deprecations, visit the JDK website.

Fuchsia’s Xi editor is no longer a Google project

Sugandha Lahoti
05 Oct 2018
3 min read
Raph Levien, an ex-software engineer on Google’s Fuchsia project, announced yesterday that Fuchsia’s Xi editor is no longer a Google-owned project. It is now hosted in its own GitHub organization.

Xi editor was a Google project to create a performant text editor. All editing operations are asynchronous, so the UI stays responsive even when editing huge documents. It is used as the basis for text editing services in the Fuchsia operating system. Raph started working on the Xi editor during his time at Google, where he had worked for 11 years. The Xi editor is thoroughly async, with a loosely coupled design that promises performance and rich extensibility. Its main aim, as stated in the project’s abstract, is to “push the state of computer science for text handling and build an open-source community for teaching and learning, and working together to create a joyful editing experience.”

After his departure from Google in August, he gave an update yesterday on how that will affect Xi’s development going forward. Per his blog, Xi’s core and its Windows and Mac client projects are now under their own xi-editor organization. They had previously been hosted in Google’s GitHub organization, where the project fell under Google’s Contributor License Agreement; under this agreement, Google can use and distribute a developer’s code without taking away its ownership. The new Xi editor project, licensed under the Apache 2 license, also has a new set of contributor guidelines, which explains in more detail what the process will be going forward.

Raph stated in his blog that, since he will be busy creating a music synthesis game, he is inviting contributors to “help share the load, reviewing each other’s code, discussing desired features and implementation strategies for them, and then assigning issues to me when they need my review.” He further adds, “I’m hopeful this will grow a scalable and sustainable structure for the community.”

Regarding the state of the Fuchsia front end, he mentions that it is still early days for Fuchsia and the platform is not really ready for end-user software or self-hosted development. “I’m hopeful it will get there in time and feel that xi-editor will be a great fit for it at that time. I look forward to continuing to collaborate with the Fuchsia team and others within Google.”

Read Raph’s announcement on his blog.

Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others

Bhagyashree R
14 Sep 2018
3 min read
Yesterday, Facebook announced that Cadence, Esperanto, Intel, Marvell, and Qualcomm Technologies Inc. have committed to supporting its Glow compiler in future silicon products. With this partnership, Facebook aims to build a hardware ecosystem for machine learning. With Glow, its partners will be able to rapidly design and optimize new silicon products for AI and ML and help Facebook scale its platform. Facebook is also planning to expand this ecosystem by adding more partners in 2018.

What is Glow?

Glow is a machine learning compiler used to speed up the performance of deep learning frameworks on different hardware platforms. The name “Glow” comes from Graph-Lowering, the main technique the compiler uses to generate efficient code. The compiler is designed to allow state-of-the-art compiler optimizations and code generation for neural network graphs. With Glow, hardware developers and researchers can focus on building next-generation hardware accelerators that can be supported by deep learning frameworks like PyTorch. Hardware accelerators for ML solve a range of distinct problems; some focus on inference, while others focus on training.

How does it work?

Glow accepts a computation graph from deep learning frameworks such as PyTorch and TensorFlow and generates highly optimized code for machine learning accelerators. To do so, it lowers the traditional neural network dataflow graph into a two-phase, strongly typed intermediate representation:

[Diagram: Glow’s two-phase intermediate representation. Source: Facebook]

- The high-level intermediate representation allows the optimizer to perform domain-specific optimizations.
- The lower-level, instruction-based, address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation, and copy elimination.

The optimizer then performs machine-specific code generation to take advantage of specialized hardware features. Glow supports a large number of input operators as well as a large number of hardware targets with the help of its lowering phase, which eliminates the need to implement all operators on all targets. The lowering phase reduces the input space and allows new hardware backends to focus on a small number of linear algebra primitives.

You can read more about Facebook’s goals for Glow in the official announcement. If you are interested in knowing how it works in more detail, check out the research paper and the project’s GitHub repository.

Introducing SCRIPT-8, an 8-bit JavaScript-based fantasy computer to make retro-looking games

Bhagyashree R
28 Jan 2019
2 min read
Adding to the list of fantasy consoles/computers is the newly introduced SCRIPT-8, written by Gabriel Florit, a graphics reporter at the Washington Post who also likes working with augmented reality.

SCRIPT-8 is a JavaScript-based fantasy computer for making, playing, and sharing tiny retro-looking games. Based on Bret Victor’s “Inventing on principle” and “Learnable programming”, it gives programmers a live-coding experience: the program’s output updates as they code. Games built with SCRIPT-8 are called cassettes. Cassettes are recorded at a URL, which you can share with anyone and play with a keyboard or gamepad. You can also make your own version of an existing cassette by changing its code, art, or music and recording it to a different cassette.

What are SCRIPT-8’s features?

- A code editor that gives you immediate feedback.
- A slider for updating numbers easily without typing.
- A time-traveling tool for pausing and rewinding the game, with buttons that show a character’s past and future paths.
- A sprite editor whose changes are reflected in the game instantly.
- A map editor for creating new paths.
- A music editor with which you can create phrases, group them into chains, and turn those into songs.

You can read more about SCRIPT-8 on its website.

SapFix and Sapienz: Facebook’s hybrid AI tools to automatically find and fix software bugs

Melisha Dsouza
14 Sep 2018
2 min read
“Debugging code is drudgery” - Facebook engineers Yue Jia, Ke Mao, and Mark Harman

To significantly reduce the amount of time developers spend on debugging code and rolling out new software, Facebook engineers have come up with an ingenious tool called SapFix. SapFix, which is still under development, can automatically generate fixes for specific bugs identified by Sapienz, Facebook’s intelligent automated software testing tool, and then propose these fixes to engineers for approval and deployment to production. SapFix will eventually be able to operate independently of Sapienz, but for now it is a proof of concept that relies on the latter to pinpoint bugs.

How does SapFix work?

This hybrid AI tool generates bug fixes depending on the type of bug encountered:

- For simpler bugs: SapFix creates patches that revert the code submission that introduced the bug.
- For complicated bugs: The tool uses a collection of “templated fixes” that were created by human engineers based on previous bug fixes.
- If human-designed template fixes aren’t up to the job: The tool attempts a “mutation-based fix,” which works by continuously making small modifications to the code that caused the software to crash until a solution is found.

SapFix generates multiple potential fixes for every bug and submits them to engineers for evaluation. The fixes are tested in advance so engineers can check whether they might cause problems such as compilation errors or other crashes.

[Diagram: the SapFix workflow. Source: Facebook]

With automated end-to-end testing and repair, SapFix is an important milestone in hybrid AI tool deployment. Facebook intends to open source both SapFix and Sapienz once additional engineering work has been completed. You can read more about the tool on Facebook’s blog.

Mozilla shares how AV1, the new open source royalty-free video codec, works

Bhagyashree R
12 Nov 2018
5 min read
Last month, Nathan Egge, a Senior Research Engineer at Mozilla, explained the technical details behind AV1 in depth at the Mile High Video Workshop in Denver. AV1 is a new open source, royalty-free video codec that promises to help companies and individuals transmit high-quality video over the internet efficiently.

AV1 is developed by the Alliance for Open Media (AOMedia), an association of firms from the semiconductor industry, video-on-demand providers, and web browser developers, founded in 2015. Mozilla joined AOMedia as a founding member. AV1 was created for a broad set of industry use cases such as video on demand/streaming, video conferencing, screen sharing, video game streaming, and broadcast. It is widely supported and adopted, and it compresses at least 30% better than current-generation video codecs. The alliance hit a key milestone with the release of the AV1 1.0.0 specification in June this year, and the codec has seen increasing interest from various companies; for instance, YouTube launched an AV1 Beta Playlist in September.

[Diagram: the stages of a video codec. Source: YouTube]

We will cover the tools and algorithms used in some of these stages. Here are some of the technical details from Egge’s talk:

AV1 Profiles

Profiles specify the bit depths and subsampling formats supported. AV1 has three profiles: Main, High, and Professional, which differ in bit depth and chroma subsampling as follows:

Profile        Bit depth                  Chroma subsampling
Main           8-bit and 10-bit           4:0:0 and 4:2:0
High           8-bit and 10-bit           4:0:0, 4:2:0, and 4:4:4
Professional   8-bit, 10-bit, and 12-bit  4:0:0, 4:2:0, 4:2:2, and 4:4:4

High-level syntax

VP9 has a concept of superframes, which consolidate multiple coded frames into one single chunk but become complicated at some point. AV1 instead comes with a high-level syntax that includes the sequence header, frame header, tile group, and tiles. A sequence header starts a video stream, frame headers sit at the beginning of each frame, a tile group is an independent group of tiles, and tiles can be decoded independently.

Multi-symbol entropy coder

Unlike VP9, which uses a tree-based boolean non-adaptive binary arithmetic encoder to encode all syntax elements, AV1 uses a symbol-to-symbol adaptive multi-symbol arithmetic coder. Each of its syntax elements is a member of a specific alphabet of N elements, and a context is a set of N probabilities together with a count to facilitate fast early adaptation.

Transform types

In addition to the DCT and ADST transform types, AV1 introduces two extended transform types: flipped ADST and the identity transform. The identity transform lets you effectively code residual blocks with edges and lines. AV1 thus offers a total of sixteen horizontal and vertical transform type combinations.

Intra prediction modes

Along with the 8 main directional modes from VP9, up to 56 more directions are added, though not all of them are available at smaller sizes. The following are some of the prediction modes introduced in AV1:

Smooth H + V modes smoothly interpolate between the values in the left column and the last value in the above row.

Palette mode is introduced to the intra coder as a general extra coding tool. It is especially useful for artificial videos like screen capture and games, where blocks can be approximated by a small number of unique colors. The palette predictor for each plane of a block is described by:

- A color palette, with 2 to 8 colors
- Color indices for all pixels in the block

Chroma from Luma (CfL) is a chroma-only intra predictor that models chroma pixels as a linear function of coincident reconstructed luma pixels. First, the reconstructed luma pixels are subsampled into the chroma resolution, and then the DC component is removed to form the AC contribution. To approximate the chroma AC component from the AC contribution, instead of requiring the decoder to derive the scaling parameters, CfL determines the parameters based on the original chroma pixels and signals them in the bitstream. This reduces decoder complexity and yields more precise predictions. As for the DC prediction, it is computed using the intra DC mode, which is sufficient for most chroma content and has mature fast implementations.

Constrained Directional Enhancement Filter (CDEF)

CDEF is a detail-preserving deringing filter designed to be applied after deblocking. It works by estimating edge directions and then applying a non-separable, non-linear, low-pass directional filter of size 5×5 with 12 non-zero weights. To avoid extra signaling, the decoder uses a normative fast search algorithm to compute, per 8×8 block, the direction that minimizes the quadratic error from a perfect directional pattern.

Film Grain Synthesis

In AV1, film grain synthesis is a normative post-processing step applied outside of the encoding/decoding loop. Film grain is abundant in TV and movie content and needs to be preserved while encoding, but its random nature makes it difficult to compress with traditional coding tools. In film grain synthesis, the grain is removed from the content before compression, its parameters are estimated and sent in the AV1 bitstream, and the grain is then synthesized based on the received parameters and added to the reconstructed video. For grainy content, film grain synthesis significantly reduces the bitrate necessary to reconstruct the grain with sufficient quality.

You can watch “Into the Depths: The Technical Details behind AV1” by Nathan Egge on YouTube: https://www.youtube.com/watch?v=On9VOnIBSEs&t=463s
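The Chroma from Luma predictor described above can be summarized in a single formula; the notation below is a paraphrase of Egge's description, not the exact formulation from the AV1 specification.

```latex
% L_AC: subsampled reconstructed luma with its average (DC) removed
% alpha: scaling parameter signaled in the bitstream per block
% DC: chroma DC prediction computed with the intra DC mode
\mathrm{CfL}(\alpha) = \alpha \cdot L_{AC} + \mathrm{DC}
```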

libp2p: the modular P2P network stack by IPFS for better decentralized computing

Melisha Dsouza
09 Oct 2018
4 min read
libp2p is a P2P network stack introduced by the IPFS community. It is capable of discovering other peers and networks without resorting to centralized registries, which enables apps to work offline.

In July 2018, David Dias explained that the design of a ‘location-addressed web’ is the reason for its fragility: small errors in its backbone can bring down all running applications, while firewalls, routing issues, roaming issues, and network reliability interfere with users having a smooth experience on the web. Thus came a need to re-imagine the network stack.

To solve all the above problems, the InterPlanetary File System (IPFS) came into being. It is a decentralized web protocol based on content addressing, digital signatures, and peer-to-peer distribution. Today, IPFS is used to build completely distributed, offline-capable web apps. IPFS saves and distributes valuable datasets and moves billions of files.

IPFS spawned several other projects, and libp2p is one of them. It enables users to run network applications free from runtime and address services while being independent of their location. libp2p tames the complexity of dealing with numerous protocols in a decentralized environment, effectively letting users connect with multiple peers through a single interface and paving the way for the next generation of decentralized systems.

libp2p features

#1 Transport module

libp2p enables application developers to pick the modules needed to run their application; these modules vary depending on the runtime in which they execute. A libp2p node uses one or more transports to dial and listen for connections. These transport modules offer a clean interface for dialing and listening, defined by the interface-transport specification.

#2 No prior assigning of ports

Before libp2p came into existence, users would assign a listener to a port and then assign ports to special protocols, so that other hosts would know in advance which port to dial. With libp2p, users do not have to assign ports beforehand.

#3 Encrypted communication

To ensure an encrypted connection, libp2p also supports a set of modules that encrypt every communication established.

#4 Peer discovery and routing

A peer discovery module helps libp2p find peers to connect to. Peer routing finds other peers in the network by intentionally issuing queries, which can be iterative or recursive, until a peer is found. A content routing mechanism is used to find where content lives in the network.

Using libp2p in IPFS

libp2p has now been refactored into its own project so that other users can take advantage of it and be part of its ecosystem as well. It is what provides IPFS and other projects with P2P connectivity, support for multiple platforms and browsers, and many other advantages. Users can utilize the libp2p module to create their own libp2p bundle, customizing it with the features and default setup they need. For example, the team has built a browser-ready version of libp2p that acts as the network layer of IPFS and leverages browser transports; you can head over to GitHub to check out this example.

Keep Networks has also demonstrated the use of libp2p. Since participants need to know how to connect to each other, the team came up with a simple example of peer-to-peer discovery, using a few pieces of the libp2p JS library to create nodes that discover and communicate with each other. You can head over to their blog to see how the example works.

Another emerging use for libp2p is in blockchain applications. IPFS is used by blockchains and blockchain applications, and its subprotocols (libp2p, multihash, IPLD) can be extremely useful for blockchain standardization. A good example would be getting the Ethereum blockchain in the browser or in a Node.js process using libp2p and running it through ethereum-vm. That said, there are multiple challenges developers will encounter while using libp2p in their blockchain projects. Chris Pacia, the backend developer for OB1, explains how developers can tackle these challenges in his talk at QCon.

With all the buzz around blockchains and decentralized computing these days, libp2p is making its rounds on the internet. For more insights on libp2p, you can visit its official site.
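To show how the pieces named above (transports, encrypted communication, stream multiplexing) compose, here is a minimal node using the JavaScript implementation. The package names and option keys follow recent js-libp2p releases and are assumptions; the API has changed considerably between versions, so check the documentation for the version you install.

```js
import { createLibp2p } from 'libp2p';
import { tcp } from '@libp2p/tcp';
import { noise } from '@chainsafe/libp2p-noise';
import { mplex } from '@libp2p/mplex';

const node = await createLibp2p({
  // No port is assigned beforehand: tcp/0 asks the OS for any free port.
  addresses: { listen: ['/ip4/0.0.0.0/tcp/0'] },
  transports: [tcp()],              // pluggable transport module
  connectionEncryption: [noise()],  // every connection is encrypted
  streamMuxers: [mplex()],          // many streams over one connection
});

await node.start();
console.log('libp2p node listening on:');
node.getMultiaddrs().forEach((addr) => console.log(addr.toString()));
```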

Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller

Vincy Davis
29 Nov 2019
4 min read
Two days ago, Jeremy Soller, the Redox OS BDFL (benevolent dictator for life), shared recent developments in Redox, a Unix-like operating system written in Rust. The Redox OS team is quite close to running rustc, the compiler for the Rust programming language, on Redox; dynamic libraries remain the main area that needs improvement.

Redox aims to bring the innovations of Rust to a modern microkernel and a full set of applications. In March this year, Redox OS 0.5.0 was released with support for Cairo, Pixman, and other libraries and packages.

Ongoing developments in Redox OS

Soller says that he has been running Redox OS on a System76 Galago Pro (galp3-c) along with the System76 Open Firmware and has found it satisfactory so far. “My work on real hardware has improved drivers and services, added HiDPI support to a number of applications, and spawned the creation of new projects such as pkgar to make it easier to install Redox from a live disk,” he writes on the official Redox OS news page.

Furthermore, he notes that Redox has become easier to cross-compile, since the redoxer tool can now build, run, and test, automatically manage a Redox toolchain, and run executables for Redox inside a container on demand.

However, compilation of Rust binaries on Redox OS is one of the project’s long-standing issues and has garnered much attention for a long time. According to Soller, through the excellent work done by ids1024, a member of the GSoC project, Redox OS had almost achieved self-hosting. Later, the creation of relibc (a C library written in Rust) and the subsequent work done by the project’s contributors led to a POSIX C compatibility library, which brought a significant increase in the number of available packages. With a large number of Rust crates suddenly gaining Redox OS support, it seemed as though the dream of self-hosting would soon be reality. However, after finding some errors in relibc, Soller realized that “rustc is no longer capable of running statically linked!”

The team then shifted its focus to relibc’s ld_so, which provides dynamic linking support for executables; this, however, has temporarily halted the porting of rustc to Redox OS.

Building Redox OS on Redox OS is one of the highest priorities of the project. Soller assures users that rustc is only a few months away from running permanently. He adds that with Redox OS being a microkernel, it is possible that even the driver level could be recompiled and respawned without downtime, which would make the operating system exceedingly fast to develop. In the coming months, he will be working on making it more efficient to port software and on tackling more hardware support issues. Eventually, Soller hopes to deliver a fully self-hosted, microkernel operating system written in Rust.

Users are excited about the new developments in Redox OS and have thanked Soller for his work. One Redditor commented, “I cannot tell you how excited I am to see the development of an operating system with greater safety guarantees and how much I wish to dual boot with it when it is stable enough to use daily.” Another Redditor said, “This is great! Love seeing updates to this project 👍”

Head over to the official Redox OS news page for more details.

React Native 0.60 releases with accessibility improvements, AndroidX support, and more

Bhagyashree R
04 Jul 2019
4 min read
Yesterday, the team behind React Native announced the release of React Native 0.60. This release brings accessibility improvements, a new app screen, AndroidX support, CocoaPods in iOS by default, and more. Following are some of the updates introduced in React Native 0.60:

Accessibility improvements

This release ships with several improvements to the accessibility APIs on both Android and iOS. Because the new features directly use APIs provided by the underlying platform, they integrate easily with native assistive technologies. The accessibility updates include:

- A number of missing roles added for various components.
- A new Accessibility States API, for better web support in the future.
- AccessibilityInfo.announceForAccessibility, now supported on Android.
- Extended accessibility actions, including callbacks that deal with accessibility around user-defined actions.
- Support for iOS accessibility flags and reduce motion.
- A clickable prop and an onClick callback for invoking actions via keyboard navigation.

A new start screen

React Native 0.60 comes with a new, more user-friendly app screen. It shows useful instructions like how to edit App.js, links to the documentation, and how to start the debug menu, and it also aligns with the upcoming website redesign.

CocoaPods are now part of React Native’s iOS project

React Native for iOS now comes with CocoaPods by default, an application-level dependency manager for Swift and Objective-C Cocoa projects. Developers are recommended to open the iOS platform code using the ‘xcworkspace’ file from now on. Additionally, the Pod specifications for the internal packages have been updated to make them compatible with Xcode projects, which will help with troubleshooting and debugging.

Lean Core removals

To bring the React Native repository to a manageable state, the team started the Lean Core project, extracting WebView and NetInfo into separate repositories. With React Native 0.60, they have finished migrating both out of the React Native repository. Geolocation has also been extracted, based on community feedback about the new App Store policy.

Autolinking for iOS and Android

React Native libraries often consist of platform-specific or native code, and the autolinking mechanism enables your project to discover and use this code. With this release, the React Native CLI team has made major improvements to autolinking. Developers using React Native versions before 0.60 are advised to unlink native dependencies from a previous install.

Support for AndroidX (breaking change)

With this release, React Native has been migrated to AndroidX (the Android extension library). As this is a breaking change, developers need to migrate all their native code and dependencies as well. The React Native community has come up with a temporary solution called “jetifier”, an AndroidX transition tool in npm format with a React Native-compatible style.

Many users are excited about the release, with some calling it the biggest RN release yet. Other developers shared tips for migrating to AndroidX, which is an open source project that maps the original support library API packages into the androidx namespace. You can’t use both AndroidX and the old support library together, which means “you are either all in or not in at all.” Here’s a piece of good advice shared by a developer on Reddit:

“Whilst you may be holding off on 0.60.0 until whatever dependency you need supports X you still need to make sure you have your dependency declarations pinned down good and proper, as dependencies around the react native world start switching over if you automatically grab a version with X when you are not ready your going to get fun errors when building, of course this should be a breaking change worthy of a major version number bump but you never know. Much safer to keep your versions pinned and have a googlePlayServicesVersion in your buildscript (and only use libraries that obey it).”

Considering this release has major breaking changes, others suggest holding off until 0.60.2 comes out: “After doing a few major updates, I would suggest waiting for this update to cool down. This has a lot of breaking changes, so I would wait for at least 0.60.2 to be sure that all the major requirements for third-party apps are fulfilled (AndroidX changes),” a developer commented on Reddit.

Along with these exciting updates, the team and community have introduced a new tool named Upgrade Helper to make the upgrade process easier. To know more, check out the official announcement.
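As a concrete illustration of the accessibility props mentioned above, here is a hedged sketch of a 0.60-era component. The component itself is hypothetical, and note that the array-based accessibilityStates prop shown here was later superseded by an object-based accessibilityState prop in newer React Native releases.

```jsx
import React from 'react';
import { TouchableOpacity, Text } from 'react-native';

// Hypothetical toggle button using the accessibility APIs highlighted
// in the 0.60 release notes: a role plus the (then-new) states array.
export function FavoriteButton({ selected, disabled, onPress }) {
  const states = [];
  if (selected) states.push('selected');
  if (disabled) states.push('disabled');

  return (
    <TouchableOpacity
      accessible={true}
      accessibilityRole="button"    // announced as a button by screen readers
      accessibilityStates={states}  // e.g. ['selected'] in RN 0.60
      accessibilityLabel="Favorite"
      disabled={disabled}
      onPress={onPress}
    >
      <Text>{selected ? 'Unfavorite' : 'Favorite'}</Text>
    </TouchableOpacity>
  );
}
```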

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release that includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, and gRPC support, and it improves seamless integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will continue with HAProxy 2.1. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-native threading and logging

HAProxy can now scale to accommodate any environment with less manual configuration: the number of worker threads is matched to the machine’s number of available CPU cores. The process setting is no longer required, simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has also been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides a high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:

- Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
- Secure communication with built-in SSL termination
- Apply rate limits for clients while optionally whitelisting IP addresses
- Select from among any of HAProxy’s load-balancing algorithms
- Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
- Set maximum connection limits to backend servers to prevent overloading services

Layer 7 retries

With HAProxy 2.0, it is possible to retry failed HTTP requests against another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section, and the number of attempts is specified with the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry, which lets you disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful, for example, to make sure that POST requests aren’t retried.

Polyglot extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, aiming to create the extension points necessary to build upon HAProxy using any programming language. As of HAProxy 2.0, libraries and examples are available for the following languages and platforms:

- C
- .NET Core
- Golang
- Lua
- Python

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework gRPC, allowing bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract raw Protocol Buffer messages. Using Protocol Buffers, gRPC serializes messages into a binary format that is compact and potentially more efficient than JSON. To start using gRPC in HAProxy, you just need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)

The Native HTTP Representation (HTX) was introduced in HAProxy 1.9 and is enabled by default starting with 2.0. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS support for 1.9 features

HAProxy 2.0 brings LTS support for many features that were introduced or improved in the 1.9 release, including:

- Small Object Cache, with the caching size increased up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
- New fetches, such as date_us and cpu_calls, which report internal state or information from layers 4, 5, 6, and 7.
- New converters, such as strcmp and concat, for transforming data within HAProxy.
- Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful, for example, to deliver JavaScript or CSS files before images.
- The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. HAProxy’s inaugural community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, “HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual.”

Others are comparing HAProxy with the nginx web server. One user says, “In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer.” Another user states, “Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now.” Others suggest that the two have converged: “These days I think HAProxy and nginx have grown a lot closer together on capabilities.”

Visit the HAProxy blog for more details about HAProxy 2.0.

ParaSail 8.0 released with a new debugger, compiler, and language principle designs among others

Amrata Joshi
11 Feb 2019
4 min read
Last week, the team at ParaSail released a new version of the parallel programming language, ParaSail 8.0 (ParaSail stands for Parallel Specification and Implementation Language). The language is designed to support the development of inherently safe, parallel applications that can be mapped to multicore, heterogeneous, or distributed architectures. It supports both implicit and explicit parallelism, and all ParaSail expressions are defined to have parallel evaluation semantics.

What’s new in ParaSail 8.0

Debugger

This release comes with an interactive debugger that is automatically invoked when the interpreter encounters a precondition, assertion, or postcondition that fails at run time. Pre- and postconditions are now fully analyzed and checked at run time.

ParaSail LLVM-based compiler

This release includes a translator that converts PSVM (ParaSail virtual machine) instructions to LLVM (Low-Level Virtual Machine) instructions, and from there to object code.

Language design principles

According to the stated design principles, the language should be easy to read: readability is emphasized over terse symbols, and notation should be similar to existing languages, mathematics, or logic. Because programs are usually scanned backward, ending indicators should be as informative as starting indicators for composite constructs; for example, “end loop” or “end class Stack” rather than simply “end” or “}”. Parallelism should be built into the language, so that resulting programs can easily take advantage of as many cores as are available on the host computer. Features that are error-prone or that complicate testing or proof should be eliminated. Language-defined types and user-defined types should use the same syntax and have the same capabilities. All modules should be generic templates or equivalent. The language should be safe, and the compiler should detect all potential race conditions as well as all potential runtime errors.

Enhanced ParaSail syntax

In this release, the back-quote character followed by a parenthesized expression may now appear within a string literal; the value of the expression is interpolated into the middle of the string, in place of the back-quoted expression.

Reserved words

A list of words is now reserved in ParaSail, including abs, abstract, all, and, block, case, class, concurrent, const, continue, each, else, elsif, end, exit, and extends.

Object references

A reference to an existing object can now be declared using the following syntax:

object_reference_declaration ::= ’ref’ [ var_or_const ] identifier [’:’ type_specifier ] ’=>’ object_name ’;’

Deprecations

ParaSail has removed a few features to ensure safe parallelism:

- Global variables have been removed, so operations may only access variables passed as parameters.
- Parameter aliasing has been eliminated, so two parameters passed to the same operation cannot refer to the same object if one of them is updatable within the operation.
- Pointers have been removed; optional and expandable objects and generalized indexing provide an approach that allows safe parallelization.
- Run-time exception handling has been eliminated, enabling strong compile-time checking of preconditions and establishing support for parallel event handling.
- The global garbage-collected heap has been removed, with automatic storage management provided instead.
- Explicit threads, lock/unlock, and signal/wait have been eliminated; parallel activities are identified automatically by the compiler.

Many users are not too happy with this news. Some are unhappy with the website’s CSS and are asking the team to fix it. One comment on Hacker News reads, “Please fix the CSS: I have to scroll horizontally every single line. I stopped at the first one. Tested with Firefox and Chrome on Android. Firefox reader mode doesn't work on that site.” Another user commented, “I was able to read it on my Android device in Chrome by using landscape mode. Until I scrolled down a little. Then a huge static navigation popup appeared taking up 40% of the screen!”

A few others think Fortran is better than ParaSail because it lets developers name loops. Others are excited about the pre/postconditions; as one user put it, “Having built in pre/post conditions is pretty nice.”

Read more about this release on ParaSail’s official website.