
Tech News


How you can replace a hot path in JavaScript with WebAssembly

Bhagyashree R
15 Feb 2019
5 min read
Yesterday, Das Surma, a Web Advocate at Google, shared how he and his team replaced a JavaScript hot path in the Squoosh app with WebAssembly. Squoosh is an image compression web app that lets you compress images with a variety of codecs compiled from C++ to WebAssembly. Hot paths are code execution paths where most of the execution time is spent. With this update, the team aimed to achieve predictable performance across all browsers: WebAssembly's strict typing and low-level architecture enable more optimizations during compilation. Though JavaScript can often reach similar performance, it is difficult to stay on the fast path.

What is WebAssembly?

WebAssembly, also known as Wasm, provides a way to execute code written in different languages at near-native speed on the web. It is a low-level language with a compact binary format that serves as a compilation target for languages such as C, C++, and Rust so that they can run on the web. When you compile C or Rust code to WebAssembly, you get a .wasm file containing a module declaration: in addition to the binary instructions for the functions it contains, the module lists all the imports it needs from its environment and the exports it provides to the host.

Comparing the file sizes generated

To narrow down the choice of language, Surma used the example of a JavaScript function that rotates an image by multiples of 90 degrees by iterating over every pixel and copying it to a different location. The function was written in three languages, C/C++, Rust, and AssemblyScript, and each version was compiled to WebAssembly.

C and Emscripten

Emscripten is a C compiler that allows you to easily compile your C code to WebAssembly. After porting the JavaScript code to C and compiling it with emcc, Emscripten creates a glue code file called c.js and a wasm module called c.wasm. Gzipped, the wasm module came to almost 260 bytes, while the c.js glue file was about 3.5 KB.

Rust

Rust is a programming language syntactically similar to C++ and designed to provide better memory and thread safety. The Rust team has introduced various tools to the WebAssembly ecosystem, one of them being wasm-pack. With wasm-pack, developers can turn their code into modules that work out of the box with bundlers like Webpack. Compiling the Rust code with wasm-pack produced a 7.6 KB wasm module with about 100 bytes of glue code.

AssemblyScript

AssemblyScript compiles a strictly-typed subset of TypeScript to WebAssembly ahead of time. It uses the same syntax as TypeScript but swaps the standard library for its own, which means you can't compile just any TypeScript to WebAssembly, but you also don't have to learn a new programming language to write WebAssembly. Compiled with the AssemblyScript/assemblyscript npm package, AssemblyScript produced a wasm module of around 300 bytes and no glue code; the module can work directly with vanilla WebAssembly APIs.

Comparing the sizes of the files generated from the three languages, Rust produced the biggest file.

Comparing the performance

To analyze the performance, the team compared speed per language and speed per browser. They shared the results in two graphs (source: Google Developers). The graphs show that all the WebAssembly modules were executed in ~500ms or less, which shows that WebAssembly delivers predictable performance. Regardless of which language you choose, the variance between browsers and languages is minimal: the standard deviation of JavaScript across all browsers is ~400ms, while the standard deviation of all the WebAssembly modules across all browsers is ~80ms.

Which language should you choose if you have a JavaScript hot path and want to make it faster with WebAssembly? Looking at the above results, the best choice seems to be C or AssemblyScript, but the team decided to go with Rust. They narrowed it down to Rust because all the codecs shipped in Squoosh so far are compiled using Emscripten, and the team wanted to broaden their knowledge of the WebAssembly ecosystem by using a different language. They did not choose AssemblyScript because it is relatively new and its compiler is not as mature as Rust's. The file size difference between Rust and the other languages was quite large, but in practice this is not a big deal. Going by runtime performance, Rust showed a faster average across browsers than AssemblyScript, and it is more likely to produce fast code without requiring manual code optimizations.

To read more in detail, check out Surma's post on Google Developers.

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography
Creating and loading a WebAssembly module with Emscripten's glue code [Tutorial]
The elements of WebAssembly – Wat and Wasm, explained [Tutorial]
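The Squoosh rotate module itself isn't reproduced in the post, but as a rough sketch of what "working directly with vanilla WebAssembly APIs" looks like, this is roughly how a browser loads a compiled module with no glue code and calls one of its exports. The file name rotate.wasm and the rotate() signature below are illustrative assumptions, not Squoosh's actual exports.

```js
// Minimal sketch: fetch and instantiate a wasm module with the vanilla
// WebAssembly API, then call an exported function. No glue code required,
// which is the workflow the article describes for AssemblyScript output.
async function loadRotate() {
  const response = await fetch('rotate.wasm');        // hypothetical module name
  const bytes = await response.arrayBuffer();
  const { instance } = await WebAssembly.instantiate(bytes, {});
  return instance.exports;
}

loadRotate().then(({ rotate }) => {
  // Assumed signature: rotate(width, height, degrees)
  rotate(1024, 768, 90);
});
```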

TensorFlow announces TensorFlow Data Validation (TFDV) to automate and scale data analysis, validation, and monitoring

Bhagyashree R
11 Sep 2018
2 min read
Today the TensorFlow team announced the launch of TensorFlow Data Validation (TFDV), an open-source library that enables developers to understand, validate, and monitor their machine learning data at scale.

Why is TensorFlow Data Validation introduced?

While building machine learning algorithms, a lot of attention is paid to improving their performance. However, if the input data is wrong, all this optimization effort goes to waste. Understanding and validating a small amount of data is easy; you can even do it manually. In the real world, though, data in production is huge and often arrives continuously and in big chunks. This is why it is necessary to automate and scale the tasks of data analysis, validation, and monitoring.

What are some features of TFDV?

TFDV is part of the TensorFlow Extended (TFX) platform, a TensorFlow-based general-purpose machine learning platform. It is already used by Google every day to analyze and validate petabytes of data. TFDV provides the following features:

- It can compute descriptive statistics that give a quick overview of the data in terms of the features that are present and the shapes of their value distributions.
- It includes tools such as Facets Overview, which provides a visualization of the computed statistics for easy browsing.
- A data schema can be generated automatically to describe expectations about the data, such as required values, ranges, and vocabularies. Since writing a schema can be a tedious task for datasets with many features, TFDV provides a method to generate an initial version of the schema based on the descriptive statistics. You can inspect the schema with the help of a schema viewer.
- You can identify anomalies such as missing features, out-of-range values, or wrong feature types with anomaly detection. TFDV also provides an anomalies viewer so that you can see which features have anomalies and learn more in order to correct them.

To learn more about how it is used in production, read the official announcement by TensorFlow on Medium and also check out TFDV's GitHub repository.

Why TensorFlow always tops machine learning and artificial intelligence tool surveys
TensorFlow 2.0 is coming. Here's what we can expect.
Can a production ready Pytorch 1.0 give TensorFlow a tough time?

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities, powered by the Google Chrome browser, to Node applications. Using Puppeteer, it communicates with the locally installed browser instance. Puppeteer is also a Google Chrome project and comes with a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why is Carlo introduced?

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the V8 engines used by Node and Chrome are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What can you do with Carlo?

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. With it you can:

- Visualize the dynamic state of your Node applications using the web rendering stack.
- Expose additional system capabilities accessible from Node to your web applications.
- Package your application into a single executable using the command-line interface pkg.

How does it work?

It works in three steps:

- First, Carlo checks whether Google Chrome is installed locally.
- It then launches Google Chrome and establishes a connection to it over the process pipe.
- Finally, it exposes a high-level API for rendering in Chrome.

For users who do not have Chrome installed, Carlo prints an error message. It supports Chrome stable channel versions 70.* and Node v7.6.0 onwards. You can install it and get started by executing the following command:

npm i carlo

Read the full description on Carlo's GitHub repository.

Node v11.0.0 released
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
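As a rough sketch of the hybrid model described above, a Carlo application can serve a local folder to a Chrome window and expose a Node function to the page. The calls below (launch, serveFolder, exposeFunction, load) follow the examples in Carlo's GitHub README; treat the exact options and signatures as assumptions and check the repository for the current API.

```js
// Minimal Carlo sketch: render index.html from this folder in a local Chrome
// window and expose a Node-side function that the page can call.
const carlo = require('carlo');

(async () => {
  const app = await carlo.launch();                    // finds and launches local Chrome
  app.on('exit', () => process.exit());                // quit Node when the window closes
  app.serveFolder(__dirname);                          // serve web assets from this folder
  await app.exposeFunction('env', () => process.env);  // callable as env() from the page
  await app.load('index.html');                        // hypothetical entry page
})();
```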

Like newspapers, Google algorithms are protected by the First Amendment, making them hard to legally regulate

Savia Lobo
10 Sep 2018
4 min read
At the end of last month, Google denied U.S. President Donald Trump's accusatory tweet which said its algorithms favor liberal media outlets over right-wing ones. Trump's accusations hinted at Google regulating the information that comes up in Google searches. However, governing or regulating algorithms, and the decisions they make about which information should be provided and prioritized, is tricky.

Eugene Volokh, a University of California-Los Angeles law professor and author of a 2012 white paper on the constitutional First Amendment protection of search engines, said, "Each search engine's editorial judgment is much like many other familiar editorial judgments." A newspaper case from 1974 sheds light on what the government can control under the First Amendment when it comes to companies' algorithms and how they produce and organize information. On similar lines, Google too has the right to protect its algorithms from being regulated by the law.

Google has the right to protect algorithms, based on a 1974 case

In the 1974 case Miami Herald v. Tornillo, the Supreme Court struck down a Florida law that gave political candidates the "right of reply" to criticisms they faced in newspapers. The law required the newspaper to publish a response from the candidate and to place it, free of charge, in a conspicuous place. The candidate's lawyers contended that newspapers held near-monopolistic roles when it came to reaching audiences, and that compelling them to publish responses was the only way to ensure that candidates could have a comparable voice.

The 1974 case appears similar to the current scenario: if Google's algorithms are manipulated, those who are harmed have comparatively limited tools through which to be heard. Back then, the Herald refused to comply with the law. Its editors argued that the law violated the First Amendment because it allowed the government to compel a newspaper to publish certain information. The Supreme Court agreed with the Herald, and the Justices explained that the government cannot force newspaper editors "to publish that which reason tells them should not be published."

Why Google cannot be regulated by law

As in the 1974 case, the Justices used the decision to highlight that the government cannot compel expression. They also emphasized that the information selected by editors for their audiences is part of a process and that the government has no role in that process. The court wrote, "The choice of material to go into a newspaper and the decisions as to limitations on size and content of the paper, and treatment of public issues and public officials—fair or unfair—constitute the exercise of editorial control and judgment."

According to two federal court decisions, Google is not a newspaper and algorithms are not human editors, and thus a search engine or social media company's algorithm-based content decisions should not be protected in the same way as those made by newspaper editors. The judge explained, "Here, the process, which involves the . . . algorithm, is objective in nature. In contrast, the result, which is the PageRank—or the numerical representation of relative significance of a particular website—is fundamentally subjective in nature." Ultimately, the judge compared Google's algorithms to the types of judgments that credit-rating companies make: these firms have a right to develop their own processes and to communicate the outcomes.

A comparison of journalistic protections and algorithms also came up in the Supreme Court's 2010 ruling in Citizens United v. FEC. The case focused on the parts of the Bipartisan Campaign Reform Act that limited certain types of corporate donations during elections. Citizens United, which challenged the law, is a political action committee. Chief Justice John Roberts explained that the law, because of its limits on corporate spending, could allow the government to halt newspapers from publishing certain information simply because they are owned by corporations, which could also harm public discourse.

Any attempt to regulate Google's and other corporations' algorithmic outputs would have to overcome:

- The hurdles the Supreme Court put in place in the Herald case regarding compelled speech and editorial decision-making,
- The Citizens United precedent that corporate speech, which would also include a company's algorithms, is protected by the First Amendment.

Read more about this news in detail on Columbia Journalism Review.

Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology
North Korean hacker charged for WannaCry ransomware and for infiltrating Sony Pictures Entertainment
California's tough net neutrality bill passes state assembly vote

NIPS finally sheds its ‘sexist’ name for NeurIPS

Natasha Mathur
19 Nov 2018
4 min read
The 'Neural Information Processing Systems' conference, or 'NIPS', a well-known machine learning and computational neuroscience conference, adopted 'NeurIPS' as an alternative acronym last week. The acronym 'NIPS' had been under the spotlight worldwide over the past few years, as some members of the community considered it "sexist" and pointed out that it is offensive towards women.

"Something remarkable has happened in our community. The name NeurIPS has sprung up organically as an alternative acronym, and we're delighted to see it being adopted", mentioned the NeurIPS team. The team also added that they have taken a couple of measures to support the new acronym. All signage and the program booklet for the 2018 meeting will use either the full conference name or NeurIPS to refer to the conference. Sponsors have been asked to make the required changes within their materials, and a branding company has been hired to design a new logo for the conference. Moreover, the conference site has been moved to neurips.cc. "One forward-thinking member of the community purchased neurips.com and described the site's purpose as 'host[ing] the conference content under a different acronym... until the board catches up'," as mentioned on the NeurIPS news page.

NIPS organizers had conducted a poll on the NIPS website back in August, asking people whether they agreed or disagreed with the name change. Around 30% of the respondents answered that they supported the name change (28% of males and about 44% of females), while 31% 'strongly disagreed' with the name change proposal (31% of males and 25% of females). This had led to NIPS keeping the name as it was. However, many people were upset by the board's decision, and when the community's emphasis on a name change became evident, the name was revised.

One person who was greatly dissatisfied with the original decision was Anima Anandkumar, director of Machine Learning at Nvidia, who started a petition on change.org last month. The petition has gathered 1,500 supporters as of today. "The acronym of the conference is prone to unwelcome puns, such as the perhaps subversively named pre-conference "TITS" event and juvenile t-shirts such as "my NIPS are NP-hard", that add to the hostile environment that many ML researchers have unfortunately been experiencing", reads the petition. Anandkumar pointed out that some of these incidents trigger uncomfortable memories for many researchers who have faced harassing behavior in the past. She also tweeted with #ProtestNIPS in support of the conference changing its name, which received over 300 retweets.

https://twitter.com/AnimaAnandkumar/status/1055262867501412352

After the board's decision to rebrand, Anandkumar tweeted thanking everyone for their support for #protestNIPS: "I wish we could have started with a clean slate and done away with problematic legacy, but this is a compromise. I hope we can all continue to work towards better inclusion in #ml". Other than Anandkumar, many other people were equally active in amplifying support for #protestNIPS.

People in support of #protestNIPS

Jeff Dean, head of Google AI

Dean tweeted in support of Anandkumar, saying that NIPS should take the issue of the name change seriously:
https://twitter.com/JeffDean/status/1055289282930176000
https://twitter.com/JeffDean/status/1063679694283857920

Dr. Elana J Fertig, Associate Professor of Applied Mathematics, Johns Hopkins

Elana also tweeted in support of #protestNIPS: "These type of attitudes cannot be allowed to prevail in ML. Women need to be welcome to these communities. #WomenInSTEM"
https://twitter.com/FertigLab/status/1063908809574354944

Daniela Witten, professor of (bio)statistics, University of Washington

Witten tweeted: "I am so disappointed in @NipsConference for missing the opportunity to join the 21st century and change the name of this conference. But maybe the worst part is that their purported justification is based on a shoddy analysis of their survey results".
https://twitter.com/daniela_witten/status/1054800517421924352
https://twitter.com/daniela_witten/status/1054800519607181312
https://twitter.com/daniela_witten/status/1054800521582731264

"Thanks to everyone who has taken the time to share thoughts and concerns regarding this important issue. We were considering alternative acronyms when the community support for NeurIPS became apparent. We ask all attendees this year to respect this solution from the community and to use the new acronym in order that the conference focus can be on science and ideas", mentioned the NeurIPS team.

NIPS 2017 Special: Decoding the Human Brain for Artificial Intelligence to make smarter decisions
NIPS 2017 Special: A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey

Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide

Sugandha Lahoti
18 Nov 2019
4 min read
On Thursday last week, thousands of IT admins were left aghast when their Google Chrome browsers went blank with a "White Screen of Death", effectively crashing the browser. This was because Google was silently experimenting with a new WebContents Occlusion feature.

The WebContents Occlusion feature is designed to suspend Chrome tabs when you move other apps on top of them, reducing resource usage when the browser isn't in use. The feature is expected to reduce battery usage (for Chrome and other apps running on the same machine). It had been under testing in Chrome Canary and Chrome Beta releases. However, last week Google decided to test it in the main stable release so it could get more feedback on how it behaved. "The experiment/flag has been on in beta for ~5 months," said David Bienvenu, a Google Chrome engineer, in a Chromium bug thread. "It was turned on for stable (e.g., M77, M78) via an experiment that was pushed to released Chrome Tuesday morning."

The main issue was that this experiment was released silently to the stable channel, without IT admins or users being warned about Google's changes. Naturally, Chrome users were left confused and vented their anger and complaints on Google Chrome's support forum. Affected business users included those who run Chrome on Windows Server "terminal server" environments and on Citrix servers. Because of the browser crashes, employees working in tightly controlled enterprise environments were unable to switch browsers, which impacted business-critical functionality.

After multiple complaints from businesses and users, Google rolled back the change late on Thursday night. "I'll rollback the launch of this experiment and try to figure out how to deal with Citrix," noted Bienvenu in the bug thread. Later, a new Chrome configuration file was pushed out to users. "I believe it's more of a pull than a push thing," Bienvenu said, "so once the update is live on the Google servers, the next time you launch Chrome, you should get the new config."

Google's Chrome experiment left IT admins confused

Many IT admins were also angry that they had wasted valuable resources and time trying to fix issues in their environments, thinking it was their own fault. "We spent the better part of yesterday trying to determine if an internal change had occurred in our environment without our knowledge," wrote one angry user. "We did not realize this type of event could occur on Chrome unbeknownst to us. We are already discussing alternative options, none of them are great, but this is untenable."

Others urged Google to let them opt out of any Google Chrome experiments: "Would like to be excluded from further experimental changes. We have had the sporadic white screen of deaths over the past few weeks. How would we have ever known it was a part of the 1%? We chalked it off as bad Chrome profiles. We still have fresh memories of the experimental Chrome sound issue. That was very disruptive too. Please test your changes in your internal rdsh/Citrix environment. Please give us the option to opt out of "experimental" changes. Thank you for your consideration."

Another said, "We've been having random issues for quite some time, and our agents could be in this 1%. This last one was a huge impact on our customer-facing agents, not to mention working all day yesterday and today of troubleshooting. Is there a way to be excluded from these experimental changes? If Chrome is going to be an enterprise browser, we need stability."

With Google Chrome's mishap, more people are advocating moving to different browsers that give more control to their end users. Chrome also came under fire recently when it started experimenting with Manifest V3 extensions in the Chrome 80 Canary build. Chrome's ad-blocking changes received overwhelmingly negative feedback, as they can stop many popular ad-blockers from working. Other browsers that offer better user privacy and ad-blocking features keep popping up, Brave 1.0 being the latest in the line.

Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform
Google starts experimenting with Manifest V3 extension in Chrome 80 Canary build
Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership

SUSE is now an independent company after being acquired by EQT for $2.5 billion

Amrata Joshi
18 Mar 2019
3 min read
Last week, SUSE, an open-source software company that develops and sells Linux products to business customers, announced that it is now an independent company, having finalized its $2.5 billion acquisition by growth investor EQT from Micro Focus. According to the official post, SUSE also claims to be "the largest independent open source company."

Novell, a software and services company, first acquired SUSE in 2004. Novell was then acquired by Attachmate in 2010, which was in turn acquired by Micro Focus in 2014. Micro Focus turned SUSE into an independent division and then sold SUSE to EQT in 2018.

The newly independent SUSE has also added new leadership roles. Enrica Angelone has joined as SUSE's Chief Financial Officer, and Sander Huyts, director of sales at SUSE, is the new Chief Operations Officer. Thomas Di Giacomo, former CTO of SUSE, is now the president of Engineering, Product and Innovation. According to SUSE's blog post, SUSE's expanded team will be actively participating in communities and projects to bring open source innovation to the enterprise.

Nils Brauckmann, CEO at SUSE, said, "Our genuinely open, open source solutions, flexible business practices, lack of enforced vendor lock-in and exceptional service are more critical to customer and partner organizations, and our independence coincides with our single-minded focus on delivering what is best for them." He further added, "Our ability to consistently meet these market demands creates a cycle of success, momentum and growth that allows SUSE to continue to deliver the innovation customers need to achieve their digital transformation goals and realize the hybrid and multi-cloud workload management they require to power their own continuous innovation, competitiveness, and growth."

SUSE frames the move as capitalizing on market dynamics and creating value for customers and partners. Its independent status and EQT's backing are expected to enable continued expansion, driving growth in SUSE's core business and in emerging technologies, both organically and through add-on acquisitions.

Since the company is now owned by EQT, some users argue it is still not truly independent. One user commented on Hacker News, "Being owned by a Private Equity fund can really not be described as being "independent". Such funds have a typical investment horizon of 5 - 7 years, with potential exits being an IPO, a strategic sale (to a bigger company) or a sale to another PE fund, with the strategic sale probably more typical. In the meantime the fund will impose strict growth targets and strong cost cuts." Another comment reads, "Yeah, I'm not sure how anyone can call private equity "independent". Our whole last year had selling the company as our top priority. Not something I'd choose in a truly independent position."

To know more about this news, check out SUSE's official announcement.

Google introduces Season of Docs that will connect technical writers and mentors with open source projects
Microsoft open sources 'Accessibility Insights for Web', a chrome extension to help web developers fix their accessibility issues
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process

React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

Bhagyashree R
16 Aug 2019
3 min read
Yesterday, the React team announced the release of React DevTools 4.0 for Chrome, Firefox, and Edge. In addition to better performance and navigation, this release fully supports React Hooks and provides a way to test the experimental Suspense API.

Key updates in React DevTools 4.0

Better performance by reducing the "bridge traffic"

The React DevTools extension is made up of two parts: a frontend and a backend. The frontend includes the components tree, the Profiler, and everything else that is visible to you, while the backend is invisible and is in charge of notifying the frontend by sending messages through a "bridge". In previous versions of React DevTools, the traffic caused by this notification process was one of the biggest performance bottlenecks. Starting with React DevTools 4.0, the team has reduced this bridge traffic by minimizing the number of messages the backend sends to render the components tree; the frontend can request more information whenever required.

Automatically logs React component stack warnings

React DevTools 4.0 provides an option to automatically append component stack information to console warnings during development. This enables developers to identify where exactly in the component tree a failure has happened. To disable the feature, navigate to the General settings panel and uncheck "Append component stacks to warnings and errors."

Components tree updates

- Improved Hooks support: Hooks allow you to use state and other React features without writing a class. In React DevTools 4.0, Hooks have the same level of support as props and state.
- Component filters: Navigating through large component trees can be tiresome. You can now quickly find the component you are looking for by applying component filters.
- "Rendered by" list and an owners tree: React DevTools 4.0 has a new "rendered by" list in the right-hand pane that helps you quickly step through the list of owners. There is also an owners tree, the inverse of the "rendered by" list, which lists everything rendered by a particular component.
- Suspense toggle: The experimental Suspense API allows you to "suspend" the rendering of a component until a condition is met. With <Suspense> components, you can specify the loading states shown while components below them are waiting to be rendered. This DevTools release comes with a toggle that lets you test these loading states.

Profiler changes

- Import and export profiler data: Profiler data can now be exported and shared among developers for better collaboration.
- Reload and profile: The React profiler collects performance information each time the application renders, helping you identify and fix performance bottlenecks in your applications. In previous versions, DevTools only allowed profiling a "profiling-capable version of React", so there was no way to profile the initial mount of an application. This is now supported with a "reload and profile" action.
- Component renders list: The profiler in React DevTools 4.0 displays a list of each time a selected component was rendered during a profiling session. You can use this list to quickly jump between commits when analyzing a component's performance.

You can check out the release notes of React DevTools 4.0 to see what other features landed in this release.

React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
React Native 0.60 releases with accessibility improvements, AndroidX support, and more
React Native VS Xamarin: Which is the better cross-platform mobile development framework?
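To make the Hooks support concrete, here is a rough sketch of the kind of function component whose Hook values now appear in the DevTools right-hand pane with the same detail as props and state. The component, prop, and endpoint names are illustrative, not taken from the React DevTools release notes.

```jsx
// Illustrative function component using Hooks; in React DevTools 4.0 the
// useState value and the useEffect hook are inspectable alongside props.
import React, { useEffect, useState } from 'react';

function DownloadCounter({ packageName }) {
  const [count, setCount] = useState(0);        // shows up under "hooks" in DevTools

  useEffect(() => {
    // hypothetical endpoint, only here to give the effect something to fetch
    fetch(`/api/downloads/${packageName}`)
      .then(res => res.json())
      .then(data => setCount(data.count));
  }, [packageName]);

  return <p>{packageName}: {count} downloads</p>;
}

export default DownloadCounter;
```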

InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out

Bhagyashree R
15 Oct 2018
3 min read
Yesterday, the InfernoJS community announced the release of InfernoJS v6.0.0. This version comes with Fragments, which let you group a list of children without adding extra nodes to the DOM. Three new methods have been added, createRef, forwardRef, and rerender, along with a few breaking changes.

Added support for Fragments

Support for Fragments, a new type of VNode, has been added. With Fragments, you can return an array from a component's render method, creating an invisible layer that ties its content together but does not render any container to the actual DOM. Fragments can be created in four ways:

- Native Inferno API: createFragment(children: any, childFlags: ChildFlags, key?: string | number | null)
- JSX: <> ... </>, <Fragment> .... </Fragment> or <Inferno.Fragment> ... </Inferno.Fragment>
- createElement API: createElement(Inferno.Fragment, {key: 'test'}, ...children)
- Hyperscript API: h(Inferno.Fragment, {key: 'test'}, children)

createRef API

Refs provide a way to access DOM nodes or elements created in the render method. You can now create refs using createRef() and attach them to elements via the ref attribute. This new method gives you nicer syntax and reduces code when no callback on DOM creation is needed.

forwardRef API

The forwardRef API allows you to "forward" a ref inside a functional component. Forwarding a ref means automatically passing a ref through a component to one of its children. It is useful when you want to create a reference to a specific element inside simple functional components.

rerender

With the rerender method, all pending setState calls are flushed and rendered immediately. You can use it when render timing is important or to simplify tests.

New lifecycle

The old lifecycle methods componentWillMount, componentWillReceiveProps, and componentWillUpdate will not be called when the new lifecycle methods getDerivedStateFromProps or getSnapshotBeforeUpdate are used.

What are the breaking changes?

- Since not all applications need server-side rendering, hydrate is now part of the inferno-hydrate package.
- Style properties now use hyphens, for example backgroundColor => background-color.
- In order to support the JSX Fragment syntax, babel-plugin-inferno now depends on Babel v7.
- The setState lifecycle has changed for better compatibility with ReactJS. componentDidUpdate is now triggered later in the lifecycle chain, after refs have been created.
- String refs are completely removed. Instead, you can use callback refs, the createRef API, or forwardRef.

Read the release notes of InfernoJS on its GitHub repository.

Node.js v10.12.0 (Current) released
The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
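As a rough sketch of how the new APIs fit together, the component below returns a Fragment from render and uses createRef to focus an input after mounting. It assumes the usual babel-plugin-inferno JSX setup; the component itself is illustrative and not taken from the release notes.

```jsx
// Illustrative Inferno v6 component: a Fragment return value (no wrapper div)
// plus the new createRef API instead of a callback ref.
import { Component, createRef, render } from 'inferno';

class SearchBox extends Component {
  constructor(props) {
    super(props);
    this.inputRef = createRef();            // object ref, new in v6
  }

  componentDidMount() {
    this.inputRef.current.focus();          // .current points at the rendered <input>
  }

  render() {
    // Fragment: two sibling nodes, no extra container element in the DOM
    return (
      <>
        <h2>Search</h2>
        <input type="text" ref={this.inputRef} />
      </>
    );
  }
}

render(<SearchBox />, document.getElementById('root'));
```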

YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist

Melisha Dsouza
14 Sep 2018
2 min read
Yesterday, YouTube began transcoding videos into the AV1 video codec and created an AV1 Beta Launch Playlist to test it.

Why does the AV1 format matter?

This transcoding aims to significantly reduce video stream bandwidth without loss in quality, exceeding the compression standards set even by HEVC. Google was opposed to using HEVC due to its high royalty costs; to combat this, the company developed its own VP9 format in 2012 for HD and 4K HDR video, which saw limited uptake outside of Google's own properties. AV1 is intended to replace both HEVC and VP9.

The AV1 initiative was announced in 2015, when internet giants like Amazon, Apple, Google, Facebook, Microsoft, Mozilla, Netflix, and several others joined forces to develop a 'next gen' video format. Besides offering better compression than VP9 (and HEVC), AV1 has a royalty-free license. This could lead to operating-cost savings for YouTube and other video streaming services. Since video streaming contributes a massive chunk of total internet traffic, even a small improvement in compression can have massive effects on the network as well as on user experience. AV1 also provides an architecture for both moving and still images. More widespread support and adoption of AV1 is projected for 2020.

(Image source: Flatpanelshd)

YouTube users of the new AV1 format will not notice a reduction in their data consumption just yet, because the first batch of videos has been encoded at a very high bitrate to test performance. Future playlists could, however, test the codec's other, more important aspect: data savings. To watch the videos in AV1, users will have to use Chrome 70 or Firefox 63, both of which were recently updated to support AV1. YouTube also mentions that AV1 videos are currently available in 480p SD only, switching to VP9 for higher resolutions.

Head over to YouTube's official site for more coverage on the news.

YouTube's Polymer redesign doesn't like Firefox and Edge browsers
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
YouTube has a $25 million plan to counter fake news and misinformation
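Since playback currently depends on Chrome 70 or Firefox 63, a player typically feature-detects AV1 support before requesting an AV1 stream. The sketch below shows one common way to do that in the browser; the codec string used here is a frequently cited AV1 example, not necessarily the exact string YouTube uses.

```js
// Rough sketch: detect AV1 decode support and fall back to VP9 otherwise.
// "av01.0.05M.08" = AV1 profile 0, level 2.1, Main tier, 8-bit (illustrative).
const AV1_TYPE = 'video/mp4; codecs="av01.0.05M.08"';

function supportsAV1() {
  if (window.MediaSource && MediaSource.isTypeSupported(AV1_TYPE)) {
    return true;                              // e.g. Chrome 70+, Firefox 63+
  }
  const probe = document.createElement('video');
  return probe.canPlayType(AV1_TYPE) === 'probably';
}

console.log(supportsAV1() ? 'Requesting the AV1 stream' : 'Falling back to VP9');
```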

nuScenes: The largest open-source dataset for self-driving vehicles by Scale and nuTonomy

Bhagyashree R
17 Sep 2018
2 min read
Scale and nuTonomy, two leading players in the self-driving vehicle ecosystem, open-sourced a research dataset named nuScenes last week. According to the companies, this is the largest open-source dataset for self-driving vehicles, including data from LIDAR, RADAR, camera, IMU, and GPS.

nuTonomy, with the help of Scale's Sensor Fusion Annotation API, compiled more than 1,000 20-second clips and 1.4 million images. nuScenes comprises 400,000 LIDAR sweeps and 1.1 million three-dimensional bounding boxes detected with a combination of RGB cameras, RADAR, and LIDAR. Collecting this much data was made possible by six cameras, one LIDAR, five RADARs, GPS, and an inertial measurement sensor. The driving routes were chosen in Singapore and Boston to showcase challenging locations, times, and weather conditions.

This open-source dataset reportedly surpasses common datasets, including the public KITTI dataset, the Baidu ApolloScape dataset, the Udacity self-driving dataset, and even the more recent Berkeley DeepDrive dataset, in terms of size and accuracy. Making such a huge dataset available will help researchers train and test different algorithms for autonomous driving accurately and quickly.

Scale CEO Alexandr Wang said: "We're proud to provide the annotations … as the most robust open source multi-sensor self-driving dataset ever released. We believe this will be an invaluable resource for researchers developing autonomous vehicle systems, and one that will help to shape and accelerate their production for years to come."

You can read more about nuScenes in this full coverage. To know more about nuScenes, check out its website and see the official announcement by Scale on its Twitter page.

Google launches a Dataset Search Engine for finding Datasets on the Internet
Ethereum Blockchain dataset now available in BigQuery for smart contract analytics
25 Datasets for Deep Learning in IoT

Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models

Natasha Mathur
08 Mar 2019
3 min read
Microsoft researcher Lester Mackey and his teammates, along with grad students Jessica Hwang and Paulo Orenstein, have come out with a new machine learning based forecasting model and a comprehensive dataset, called SubseasonalRodeo, for training subseasonal forecasting models. Subseasonal forecasting models are systems capable of predicting temperature or precipitation two to six weeks in advance in the western contiguous United States. The SubseasonalRodeo dataset can be found at the Harvard Dataverse, and the researchers present the details of their work in the paper "Improving Subseasonal Forecasting in the Western U.S. with Machine Learning".

"What has perhaps prevented computer scientists and statisticians from aggressively pursuing this problem is that there hasn't been a nice, neat, tidy dataset for someone to just download ..and use, so we hope that by releasing this dataset, other machine learning researchers.. will just run with it," says Hwang.

The Microsoft team states that the large amount of high-quality historical weather data, combined with today's computational power, makes statistical forecast modeling worthwhile, and that combining physics-based and statistics-based approaches leads to better predictions. The team's machine learning based forecasting system combines two regression models trained on its SubseasonalRodeo dataset. The dataset consists of different weather measurements dating as far back as 1948, including temperature, precipitation, sea surface temperature, sea ice concentration, and relative humidity and pressure. This data is consolidated from sources like the National Center for Atmospheric Research, the National Oceanic and Atmospheric Administration's Climate Prediction Center, and the National Centers for Environmental Prediction.

The first of the two models is a local linear regression with multitask model selection, or MultiLLR. The data used was limited to an eight-week span in any year around the day for which the prediction was being made, and a customized backward stepwise selection procedure consolidated two to 13 of the most relevant predictors to make a forecast. The second model is a multitask k-nearest neighbor autoregression, or AutoKNN, which incorporates only the historical data of the measurement being predicted, either temperature or precipitation.

The researchers state that although each model on its own performed better than the competition's baseline models, namely a debiased version of the operational U.S. Climate Forecasting System (CFSv2) and a damped persistence model, the two models address different parts of the subseasonal forecasting challenge. For instance, the first model uses only recent history to make its predictions, while the second does not account for other factors. The team's final forecasting model is therefore a combination of the two.

The team will be further expanding its work in the Western United States and will continue its collaboration with the Bureau of Reclamation and other agencies. "I think that subseasonal forecasting is fertile ground for machine learning development, and we've just scratched the surface," mentions Mackey.

For more information, check out the official Microsoft blog.

Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference

Undetected Linux Backdoor ‘SpeakUp’ infects Linux, MacOS with cryptominers

Melisha Dsouza
05 Feb 2019
4 min read
Security researchers have discovered a new backdoor trojan, dubbed 'SpeakUp', which exploits known vulnerabilities in six different Linux distributions and also has the ability to infect MacOS. The trojan, discovered by Check Point Research, is being used in a crypto mining campaign that has targeted more than 70,000 servers worldwide so far. Attackers have been using SpeakUp to deploy Monero cryptocurrency miners on infected servers, earning around 107 Monero coins (around $4,500).

The backdoor was spotted for the first time last month, when researchers discovered a built-in Python script that allows the trojan to spread laterally through the local network. The trojan remains undetected, has complex propagation tactics, and its threat surface includes servers that run the top sites on the internet.

What can this trojan do?

On vulnerable systems, the trojan allows the hackers to perform a host of illicit activities, such as modifying the local cron utility to gain boot persistence, taking control of shell commands, executing files downloaded from a remote command and control (C&C) server, and updating or uninstalling itself. According to the researchers, SpeakUp has already been spotted exploiting Linux servers that run more than 90 percent of the top 1 million domains in the U.S. The hackers behind SpeakUp are using an exploit for the ThinkPHP framework to infect servers, and the researchers have not seen the attackers target anything except ThinkPHP.

The trojan has been crafted with complexity: it can scan local networks for open ports, use a list of pre-defined usernames and passwords to brute-force nearby systems, and take over unpatched systems using one of these seven exploits:

- CVE-2012-0874: JBoss Enterprise Application Platform Multiple Security Bypass Vulnerabilities
- CVE-2010-1871: JBoss Seam Framework remote code execution
- JBoss AS 3/4/5/6: Remote Command Execution
- CVE-2017-10271: Oracle WebLogic wls-wsat Component Deserialization RCE
- CVE-2018-2894: Vulnerability in the Oracle WebLogic Server component of Oracle Fusion Middleware
- Hadoop YARN ResourceManager - Command Execution
- CVE-2016-3088: Apache ActiveMQ Fileserver File Upload Remote Code Execution Vulnerability

Security researchers have also pointed out that SpeakUp's authors have the ability to download any code they want to the servers: "SpeakUp's obfuscated payloads and propagation technique is beyond any doubt the work of a bigger threat in the making. It is hard to imagine anyone would build such a compound array of payloads just to deploy few miners. The threat actor behind this campaign can at any given time deploy additional payloads, potentially more intrusive and offensive. It has the ability to scan the surrounding network of an infected server and distribute the malware."

According to Threatpost, Oded Vanunu, head of products vulnerability research for Check Point, said that "the scope of this attack includes all servers running ThinkPHP, Hadoop Yarn, Oracle WebLogic, Apache ActiveMQ and Red Hat JBoss. Since these software can be deployed on virtual servers, all cloud infrastructure are also prone to be affected." According to the analysis by Check Point Research, the malware is currently distributed mainly to Linux servers located in China. Lotem Finkelstein, one of the Check Point researchers, told ZDNet that "the infections in non-Chinese countries comes from SpeakUp using its second-stage exploits to infect companies' internal networks, which resulted in the trojan spreading outside the normal geographical area of a Chinese-only PHP framework."

Head over to Check Point Research's official post for a breakdown of how this trojan works and an analysis of its impact.

Git-bug: A new distributed bug tracker embedded in git
Fortnite just fixed a bug that let attackers to fully access user accounts, impersonate real players and buy V-Buck
35-year-old vulnerabilities in SCP client discovered by F-Secure researcher

Mozilla, Internet Society, and Web Foundation want G20 to address “techlash” fuelled by security and privacy concerns

Natasha Mathur
24 Aug 2018
4 min read
Mozilla, the Internet Society, and the Web Foundation have spoken out on their blogs about the current "techlash" that is posing a strong risk to the Internet, and they want the G20 to address the issues causing it at the ongoing G20 Digital Economy Ministerial Meeting this week. Techlash, a term originally coined by The Economist last year, refers to a strong backlash against major tech companies driven by concerns over power, user privacy, and security on the web.

As mentioned in their joint blog post, "once thought of as the global equalizer, opening doors for communication, work opportunities, commerce and more – the Internet is now increasingly viewed with skepticism and wariness. We are witnessing a trend where people are feeling let down by the technology they use".

The Internet is estimated to contribute US$6.6 trillion a year in the G20 countries by 2020, and for developing nations the digital economy is growing at 15 to 25 percent a year. Yet the internet seems to be at continuous risk, largely due to data breaches, silence around how data is used and monetized, cybercrime, surveillance, and other online threats that are causing mistrust among users. The blog reads that "It is the priority of G20 to reinject hope into technological innovation: by putting people, their rights, and needs first". With over 100 organizations calling on the leaders at the G20 Digital Economy Ministerial Meeting this week, the urgency speaks to how the leaders need to start putting people at "the center of the digital future".

The G20 comprises the world's largest advanced and emerging economies, representing about two-thirds of the world's population, 85% of global gross domestic product, and over 75% of global trade. Its member nations engage with guest countries and other non-member countries to make sure that the G20 represents a broad range of international opinion. The G20 is known for addressing issues such as connectivity and the future of work and education, but topics such as security and privacy, which are of great importance and concern to people across the globe, haven't featured as prominently on its discussion forums. According to the blog post, "It must be in the interest of the G20 as a global economic powerhouse to address these issues so that our digital societies can continue to thrive".

With recent incidents such as a 16-year-old hacking Apple's "super secure" customer accounts, idle Android devices sending data to Google, and governments using surveillance tech to watch you, it is quite evident that the need of the hour is to make the internet a secure place. Other recent data issues include Homebrew's GitHub repo getting hacked in 30 minutes, TimeHop's data breach, and AG Bob Ferguson asking Facebook to stop discriminatory ads. Companies should be held accountable for invasive advertising techniques, manipulating user data, or sharing user data without permission, and people should be made aware of the ways their data is being used by governments and the private sector.

There are already measures being taken by organizations at an individual level to make the internet safer for users. For instance, DARPA is working on AI forensic tools to catch deepfakes on the web, Twitter deleted 70 million fake accounts to curb fake news, and the EU fined Google $5 billion over the Android antitrust case. But with the G20 bringing more focus to the issue, it can really help protect the development of the Internet on a global scale. The G20 should aim at protecting the information of all internet users across the world, and it can play an instrumental role by taking into account people's concerns over internet privacy and security.

The techlash is "questioning the benefits of the digital society". Argentine President Mauricio Macri said that to tackle the challenges of the 21st century we must "put the needs of people first", and it's time for the G20 to do the same. Check out the official blog post by Mozilla, the Internet Society, and the Web Foundation.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Four 2018 Facebook patents to battle fake news and improve news feed
Time for Facebook, Twitter, and other social media to take responsibility or face regulation

Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles

Sugandha Lahoti
20 Nov 2018
2 min read
Last week, Alphabet's Waymo announced that they will launch the world's first commercial self-driving cars next month. Just two days after that, Apex.AI announced their autonomous mobility systems. The announcement came soon after the company closed a $15.5M Series A funding round, led by Canaan with participation from Lightspeed.

Apex.AI has designed a modular software stack for building autonomous systems that integrates easily into existing systems as well as third-party software. An interesting claim about the system is that "The software is not designed for peak performance — it's designed to never fail. We've built redundancies into the system design to ensure that single failures don't lead to system-wide failures." The two products are Apex.OS and Apex.Autonomy.

Apex.OS

Apex.OS is a meta-operating system, an automotive version of ROS (Robot Operating System). It allows software developers to write safe and secure applications based on ROS 2 APIs. Apex.OS is built with safety in mind: it is being certified according to the automotive functional safety standard ISO 26262 as a Safety Element out of Context (SEooC) up to ASIL D. It ensures system security through HSM support, process-level security, encryption, and authentication, and it improves production code quality by eliminating unsafe code constructs. It ships with support for automotive hardware, i.e. ECUs and automotive sensors. Moreover, it comes with complete documentation, including examples, tutorials, and design articles, as well as 24/7 customer support.

Apex.Autonomy

Apex.Autonomy provides developers with building blocks for autonomy. It has well-defined interfaces for easy integration with any existing autonomy stack. It is written in C++, is easy to use, and can be run and tested on Linux, Linux RT, QNX, Windows, and OSX. It is designed with production and ISO 26262 certification in mind and is CPU bound on x86_64 and amd64 architectures. A variety of LiDAR sensors are already integrated and tested.

Read more about the products on the Apex.AI website.

Alphabet's Waymo to launch the world's first commercial self driving cars next month
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race
Indeed lists top 10 skills to land a lucrative job, building autonomous vehicles