
Tech News


React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!

Bhagyashree R
13 Mar 2019
2 min read
After releasing the RC0 version of React Native 0.59, the team announced its stable release yesterday. This release comes with some of the most awaited features, including React Hooks, an updated JavaScriptCore, and more.

Support for React Hooks

React Hooks were introduced to solve a wide variety of problems in React. They enable you to reuse stateful logic across components without having to restructure your component hierarchy. With React Hooks, you can split a component into smaller functions based on what pieces are related, rather than forcing a split based on lifecycle methods. Hooks also let you use more of React's features without classes.

Updated JavaScriptCore

JavaScriptCore (JSC) is the engine that allows Android developers to use JavaScript natively in their apps. React Native 0.59 comes with an updated JSC for Android and hence supports many modern JavaScript features. These include 64-bit support and big performance improvements.

Improved app startup time with inline requires

Applications now load resources as and when required to avoid slowing down the app launch. This feature, known as "inline requires", delays the requiring of a module or file until that module or file is actually needed. Using inline requires can result in startup time improvements.

CLI improvements

Earlier, the React Native CLI tools had long-standing issues and lacked official support. The CLI tools have now been moved to a new repository and come with exciting improvements. Logs are formatted better and commands run almost instantly.

Breaking changes

React Native 0.59 has been cleaned up following Google's latest recommendations, which could result in potential breakage of existing apps. You might experience a runtime crash and see a message like, "You need to use a Theme.AppCompat theme (or descendant) with this activity." Developers are recommended to update their project's AndroidManifest.xml file to make sure the "android:theme" value is an AppCompat theme. Also, in this release, the "react-native-git-upgrade" command has been replaced with the newly improved "react-native upgrade" command.

To read the official announcement, check out React Native's website.

Read next:
React Native community announce March updates, post sharing the roadmap for Q4
React Native Vs Ionic: Which one is the better mobile app development framework?
How to create a native mobile app with React Native [Tutorial]
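The "inline requires" optimization described above can be sketched in plain JavaScript. The module and function names here are illustrative assumptions, not the React Native implementation; the point is only that the dependency is resolved on first use instead of at load time:

```javascript
// Eager style would load the module when this file is first parsed.
// Inline-require style defers loading until the code path actually runs.
let loadCount = 0;

function loadHeavyModule() {
  // Stand-in for require('heavy-module'); counts how often it runs.
  loadCount += 1;
  return { shout: (s) => s.toUpperCase() };
}

let heavy = null;
function formatTitle(title) {
  // The "require" happens inline, on first use only, then is cached.
  if (heavy === null) heavy = loadHeavyModule();
  return heavy.shout(title);
}
```

Calling `formatTitle` any number of times triggers `loadHeavyModule` exactly once, which is why moving requires inline can shave work off app launch.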


MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process

Richard Gall
12 Mar 2019
4 min read
MongoDB's Server Side Public License was controversial when it was first announced back in October. But the team was, back then, confident that the new license met the Open Source Initiative's approval criteria. However, things seem to have changed. The news that Red Hat was dropping MongoDB over the SSPL in January was a critical blow and appears to have dented MongoDB's ambitions. Last Friday, co-founder and CTO Eliot Horowitz announced that MongoDB had withdrawn its submission to the Open Source Initiative.

Horowitz wrote on the OSI approval mailing list that "the community consensus required to support OSI approval does not currently appear to exist regarding the copyleft provision of SSPL." Put simply, the debate around MongoDB's SSPL appears to have led its leadership to reconsider its approach.

Update: this article was amended 19.03.2019 to clarify that the Server Side Public License only requires commercial users (i.e. X-as-a-Service products) to open source their modified code. Any other users can still modify and use MongoDB code for free.

What's the purpose of MongoDB's Server Side Public License?

The Server Side Public License was developed by MongoDB as a means of protecting the project from "large cloud vendors" who want to "capture all of the value but contribute nothing back to the community." Essentially, the license included a key modification to section 13 of the standard GPL (General Public License) that governs most open source software available today. You can read the SSPL in full here, but this is the crucial sentence:

"If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License."

This would mean that users are free to review, modify, and distribute the software or redistribute modifications to the software. It's only if a user modifies or uses the source code as part of an as-a-service offering that the full service must be open sourced. So essentially, anyone is free to modify MongoDB. It's only when you offer MongoDB as a commercial service that the conditions of the SSPL state that you must open source the entire service.

What issues do people have with the Server Side Public License?

The logic behind the SSPL seems sound, and probably makes a lot of sense in the context of an open source landscape that's almost being bled dry. But it presents a challenge to the very concept of open source software, where the idea that software should be free to use and modify - and, indeed, to profit from - is absolutely central. Moreover, even if it makes sense as a way of defending open source projects from the power of multinational tech conglomerates, it could be argued that the consequences of the license could harm smaller tech companies. As one user on Hacker News explained back in October:

"Let [sic] say you are a young startup building a cool SaaS solution. E.g. A data analytics solution. If you make heavy use of MongoDB it is very possible that down the line the good folks at MongoDB come calling since 'the value of your SaaS derives primarily from MongoDB...' So at that point you have two options - buy a license from MongoDB or open source your work (which they can conveniently leverage at no cost)."

The Hacker News thread is very insightful on the reasons why the license has been so controversial. Another Hacker News user, for example, described the license as "either idiotic or malevolent."

Read next: We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview]

What next for the Server Side Public License?

The license might have been defeated, but Horowitz and MongoDB are still optimistic that they can find a solution. "We are big believers in the importance of open source and we intend to continue to work with these parties to either refine the SSPL or develop an alternative license that addresses this issue in a way that will be accepted by the broader FOSS community," he said.

Whatever happens next, it's clear that there are some significant challenges for the open source world that will require imagination and maybe even some risk-taking to properly solve.


Android Studio 3.5 Canary 7 releases!

Natasha Mathur
12 Mar 2019
2 min read
The Android Studio team released version 3.5 Canary 7 of Android Studio, the official integrated development environment for Google's Android operating system, yesterday. Android Studio 3.5 Canary 7 is now available in the Canary and Dev channels. The latest release focuses on bug fixes for public issues.

Improvements in Android Studio 3.5 Canary 7

The illegal character '-' in module names has been fixed.
The data binding annotation processor injecting an absolute path into KotlinCompile, which could defeat Gradle's remote build cache, has been fixed.
Earlier, it was impossible to specify more than 255 file extensions for aaptOptions noCompress; this issue has now been fixed.
The issue of AAPT2 crashing when plurals in XML contain an apostrophe has been fixed.
The issue where refactoring a method name didn't work has been fixed.
The layout preview used to rerender when typing in the XML editor; this has now been fixed.
The issue of the DDMLIB process using a full CPU core at times when there is no device/emulator connected has been fixed.
Kotlin main classes used to appear on the class path before test classes while running unit tests; this has now been fixed.

For more information, check out the official release notes for Android Studio 3.5 Canary 7.

Read next:
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!
Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin
9 Most Important features in Android Studio 3.2


Google Cloud Console Incident Resolved!

Melisha Dsouza
12 Mar 2019
2 min read
On 11th March, the Google Cloud team received a report of an issue with Google Cloud Console and Google Cloud Dataflow. Mitigation work to fix the issue was started on the same day, as per Google Cloud's official page. According to the Google post, "Affected users may receive a 'failed to load' error message when attempting to list resources like Compute Engine instances, billing accounts, GKE clusters, and Google Cloud Functions quotas."

As a workaround, the team suggested the use of the gcloud SDK instead of the Cloud Console. No workaround was suggested for Google Cloud Dataflow. While the mitigation was underway, another update was posted by the team: "The issue is partially resolved for a majority of users. Some users would still face trouble listing project permissions from the Google Cloud Console."

The issue, which began around 09:58 Pacific Time, was finally resolved around 16:30 Pacific Time on the same day. The team said that they will conduct an internal investigation of this issue and "make appropriate improvements to their systems to help prevent or minimize future recurrence." They will also provide a more detailed analysis of this incident once they have completed their internal investigation. There is no other information revealed as of today. This downtime affected a majority of Google Cloud users.

https://twitter.com/lukwam/status/1105174746520526848
https://twitter.com/jbkavungal/status/1105184750560571393
https://twitter.com/bpmtri/status/1105264883837239297

Head over to Google Cloud's official page for more insights on this news.

Read next:
Monday's Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias
Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws


OpenAI LP, a new “capped-profit” company to accelerate AGI research and attract top AI talent

Fatema Patrawala
12 Mar 2019
3 min read
In a move that has surprised many, OpenAI yesterday announced the creation of a new for-profit company to balance its huge expenditures on compute and AI talent. Sam Altman, the former president of Y Combinator who stepped down last week, has been named CEO of the new "capped-profit" company, OpenAI LP. But some worry that this move may end up making the innovative company no different from the other AI startups out there.

With OpenAI LP, the mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world. OpenAI mentions on their blog that "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress." Any returns beyond the cap amount will revert to OpenAI. OpenAI LP's primary obligation is to advance the aims of the OpenAI Charter. All investors and employees sign agreements that OpenAI LP's obligation to the Charter always comes first, even at the expense of some or all of their financial stake.

But the major reason behind the new for-profit subsidiary can be put explicitly: OpenAI needs more money. The company anticipates spending billions of dollars on building large-scale cloud compute, attracting and retaining talented people, and developing AI supercomputers in the coming years. The cash burn rate of a top AI research company is staggering. Consider OpenAI's recent OpenAI Five project, a set of coordinated AI bots trained to compete against human professionals in the video game Dota 2. OpenAI rented 128,000 CPU cores and 256 GPUs at approximately US$2,500 per hour for the time-consuming process of training and fine-tuning its OpenAI Five models. Additionally, consider the skyrocketing cost of retaining top AI talent. A New York Times story revealed that OpenAI paid its chief scientist Ilya Sutskever more than US$1.9 million in 2016. The company currently employs some 100 pricey talents for developing its AI capabilities, safety, and policies.

OpenAI LP will be governed by the original OpenAI board. Only a few on the board of directors are allowed to hold financial stakes, and only those who do not will be able to vote on decisions where financial interests could conflict with OpenAI's mission.

People have linked the new for-profit company with OpenAI's recent controversial decision to withhold the code and training dataset for its language model GPT-2, ostensibly due to concerns they might be used for malicious purposes such as generating fake news. A tweet from a software engineer suggested an ulterior motive: "I now see why you didn't release the fully trained model of #gpt2". OpenAI chairman and CTO Greg Brockman shot back: "Nope. We aren't going to commercialize GPT-2."

OpenAI aims to forge a sustainable path towards long-term AI development while striking a balance between benefiting humanity and turning a profit. A big part of OpenAI's appeal to top AI talent is its not-for-profit character - will OpenAI LP mar that? And can OpenAI really strike that balance? Whether the for-profit shift will accelerate OpenAI's mission or prove a detrimental detour remains to be seen, but the journey ahead is bound to be challenging.

Read next:
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words
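The "capped-profit" mechanics quoted above can be sketched with some illustrative arithmetic. Only the 100x cap comes from OpenAI's blog; the dollar figures and the function itself are hypothetical:

```javascript
// Split a gross return between an investor (capped at capMultiple times the
// original investment) and the OpenAI nonprofit (everything beyond the cap).
function splitReturns(investment, grossReturn, capMultiple = 100) {
  const cap = investment * capMultiple;
  return {
    toInvestor: Math.min(grossReturn, cap),
    toNonprofit: Math.max(grossReturn - cap, 0),
  };
}

// A hypothetical $1M first-round stake whose share of profits reached $150M:
const outcome = splitReturns(1_000_000, 150_000_000);
// outcome.toInvestor is capped at $100M; the remaining $50M reverts to OpenAI.
```

Under the cap, the investor keeps everything; the nonprofit only participates once returns exceed the multiple.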


AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services announced a new open source distribution of Elasticsearch named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch will be focused on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It provides developers with the freedom to contribute to open source value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch's Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community has witnessed a significant intermixing of proprietary code into the code base. While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. "Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid)."

Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. Also, the innovation focus has shifted from furthering the open source distribution to making the proprietary distribution popular. This means that the majority of new Elasticsearch users are now, in fact, running proprietary software. "We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path", the AWS team states in their blog.

These changes have also created uncertainty about the longevity of the open source project, as it is becoming less innovation focused. Customers also want the freedom to run the software anywhere and self-support at any point in time if they need to. This has led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check: Open Distro for Elasticsearch protects users' clusters by providing advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in-flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications: Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables a user to monitor data and send notifications automatically to their stakeholders. It also includes an intuitive Kibana interface and a powerful API, which further eases setting up and managing alerts.

Increased SQL query interactions: It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. SQL offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer: Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support."

Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, "We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology."

Christian Kaiser, VP Platform Engineering at Netflix, said, "Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution that we can be confident will remain open source and community-driven."

To know more about Open Distro for Elasticsearch in detail, visit the AWS official blog post.

Read next:
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]
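As a hedged sketch of what the SQL feature enables: the snippet below only builds the shape of an HTTP request for the plugin's SQL endpoint and never contacts a cluster. The endpoint path is an assumption based on the Open Distro SQL plugin, and the index and field names are made up:

```javascript
// Build an HTTP request description for the Open Distro SQL plugin.
// Nothing here talks to a real cluster; it only shows the query shape.
function sqlRequest(query) {
  return {
    method: 'POST',
    path: '/_opendistro/_sql',            // assumed plugin endpoint
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  };
}

// Familiar SQL instead of Elasticsearch's JSON query DSL:
const req = sqlRequest('SELECT status, COUNT(*) FROM web_logs GROUP BY status');
```

The appeal is that teams already fluent in SQL can aggregate over an index without learning the native query DSL first.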

Sway 1.0 released with swaynag, improved performance, major bug fixes and more!

Amrata Joshi
12 Mar 2019
3 min read
Yesterday, the team behind Sway, the i3-compatible Wayland compositor, released Sway 1.0, the first stable release of Sway, a consistent, flexible, and powerful desktop environment for Linux and FreeBSD. Sway 1.0 comes with a variety of features that improve performance and offer a better implementation of Wayland. This release is 100% compatible with i3, i3 IPC, i3-gaps, and i3bar.

What's new in Sway 1.0?

In this release, swayidle, a daemon for managing DPMS and idle activity, has been added.
This release comes with swaynag, an i3-nagbar replacement.
With this release, bindsym --locked now adds keybindings that work even when the screen is locked.
In this release, command blocks are now generic and work with any command.
It is now possible to adjust window opacity with the opacity command.
With this release, border csd enables client-side decorations.
Sway 1.0 comes with atomic layout updates that help in resizing windows and adjusting the layout.
With this release, urgency hints from Xwayland are also supported.
Output damage tracking helps improve CPU performance and power usage.
Hardware cursors improve performance.
In this release, the Wayland, x11, and headless backends are now supported for end users.

Major changes

This release now depends on wlroots 0.5.
This release has dropped the dependency on asciidoc.
With Sway 1.0, the experimental Nvidia support has been removed.
With this release, swaylock is now distributed separately.

Major bug fixes

Issues related to xdg-shell have been fixed.
Issues related to Xwayland have been fixed.
Reloading the config doesn't cause crashes anymore.

A few users are excited about this news. One of the users commented on Hacker News, "Sway is absolutely incredible, it puts macOS, built by Apple's army of engineers and dump trucks of money to shame in its simplicity, stability, and efficiency." A few others are unhappy because of the tiling window manager. Another user commented, "I really don't get the benefit of a tiling window manager. I tried one and instantly felt boxed in. There's not enough room on the screen for everything I need to have opened and flip between, which is why I use an overlapping window manager in the first place."

To know more about this news, check out the official announcement.

Read next:
Sway 1.0 beta.1 released with the addition of third-party panels, auto-locking, and more
Alphabet's Chronicle launches 'Backstory' for business network security management
'2019 Upskilling: Enterprise DevOps Skills' report gives an insight into the DevOps skill set required for enterprise growth
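As a hedged sketch of how two of the features above might appear in a Sway config file. The specific keybinding, the pamixer command, and the app_id are illustrative assumptions, not taken from the release notes:

```shell
# ~/.config/sway/config sketch (illustrative values)

# --locked makes the binding fire even while the screen is locked
bindsym --locked XF86AudioRaiseVolume exec pamixer -i 5

# the opacity command applied to matching windows (app_id assumed)
for_window [app_id="mpv"] opacity 0.9
```

Since this is a compositor config fragment, it is only meaningful inside a running Sway session.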


F5 Networks is acquiring NGINX, a popular web server software for $670 million

Bhagyashree R
12 Mar 2019
3 min read
Yesterday, F5 Networks, the company that offers businesses cloud and security application services, announced that it is set to acquire NGINX, the company behind the popular open-source web server software, for approximately $670 million. The two companies are coming together to provide their customers with consistent application services across every environment.

F5 has seen its growth stall lately, with its last quarterly earnings showing only 4% growth compared to the year before. On the other hand, NGINX has shown 100 percent year-on-year growth since 2014. The company currently boasts 375 million users, with about 1,500 customers for its paid services like support, load balancing, and API gateway and analytics.

This acquisition will enable F5 to accelerate 'time to market' of its services to customers for building modern applications. F5 plans to enhance NGINX's current offerings using its security solutions and will also be integrating its cloud-native innovations with NGINX's load balancing technology. Along with these advancements, F5 will help scale NGINX selling opportunities using its global sales force, channel infrastructure, and partner ecosystem.

François Locoh-Donou, President and CEO of F5, sharing his vision behind acquiring NGINX, said, "F5's acquisition of NGINX strengthens our growth trajectory by accelerating our software and multi-cloud transformation". He adds, "By bringing F5's world-class application security and rich application services portfolio for improving performance, availability, and management together with NGINX's leading software application delivery and API management solutions, unparalleled credibility and brand recognition in the DevOps community, and massive open source user base, we bridge the divide between NetOps and DevOps with consistent application services across an enterprise's multi-cloud environment."

NGINX's open source community was also a major factor behind this acquisition. F5 will continue investing in the NGINX open source project, as open source is a core part of its multi-cloud strategy. F5 expects that this will help it accelerate product integrations with leading open source projects and open doors for more partnership opportunities.

Gus Robertson, CEO of NGINX, Inc., said, "NGINX and F5 share the same mission and vision. We both believe applications are at the heart of driving digital transformation. And we both believe that an end-to-end application infrastructure—one that spans from code to customer—is needed to deliver apps across a multi-cloud environment."

The acquisition has been approved by the boards of directors of both F5 and NGINX and is expected to close in the second calendar quarter of 2019. Once the acquisition is complete, NGINX leaders Gus Robertson, Igor Sysoev, and Maxim Konovalov will be joining F5 Networks.

To know more in detail, check out the announcement by F5 Networks.

Read next:
Now you can run nginx on Wasmjit on all POSIX systems
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack


Fedora 31 will now come with Mono 5 to offer open-source .NET support

Amrata Joshi
12 Mar 2019
2 min read
Fedora has always shipped Mono 4.8, the open source development platform for building cross-platform applications, with each Fedora release. Even after Mono 5.0 shipped in May 2017, the project continued with Mono 4.8. But that is set to change with Fedora 31: the Fedora team is finally planning to switch to Mono 5.20, which is expected to release later this year.

An effort was made in the past few months by the Fedora team to build Mono from source. The build was also done for Debian, using mcs instead of csc, and the reference assemblies were rebuilt from source. Mono requires itself to build, and Mono 4.8, the version currently included in Fedora, is too old to build version 5.20. Currently, the team has been using monolite, a small version of the Mono compiler, along with the .NET 4.7.1 reference assemblies for the first build. The sources for the required patch files are maintained on GitHub.

The transition from Mono 4 to Mono 5 was on hold because of the changes required in the compiler stack and its dependency on some binary references. These binaries are available as source but are treated as pre-compiled binaries for simplification and speed. The Fedora developers are now working towards getting Mono 5 into Fedora 31. This will also allow cross-platform applications that rely on Microsoft's .NET Framework 4.7 and later to work. Mono 4.8 is also not compatible with PowerPC 64-bit, but it is expected that Mono 5 will be.

To know more about this news, check out the change proposal.

Read next:
Fedora 29 released with Modularity, Silverblue, and more
Swift is now available on Fedora 28
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes


The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

Sugandha Lahoti
12 Mar 2019
2 min read
In order to advance open source hardware, the Linux Foundation announced a new project, the CHIPS Alliance, yesterday. Backed by Esperanto, Google, SiFive, and Western Digital, the CHIPS Alliance project "will foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and IoT applications."

The project will help make open source CPU chip and system-on-a-chip (SoC) design more accessible to the market by creating an independent entity where companies and individuals can collaborate and contribute resources. It will provide the chip community with access to high-quality, enterprise-grade hardware. The project will include a Board of Directors, a Technical Steering Committee, and community contributors who will work collectively to manage it.

To initiate the process, Google will contribute a Universal Verification Methodology (UVM)-based instruction stream generator environment for RISC-V cores. The environment provides configurable, highly stressful instruction sequences that can verify architectural and micro-architectural corner cases of designs.

SiFive will improve the RocketChip SoC generator and the TileLink interconnect fabric in open source as a member of the CHIPS Alliance. It will also contribute to Chisel (a new open source hardware description language) and the FIRRTL intermediate representation specification. SiFive will also maintain Diplomacy, the SoC parameter negotiation framework.

Western Digital, another contributor, will provide a high-performance, 9-stage, dual-issue, 32-bit SweRV Core, together with a test bench and a high-performance SweRV instruction set simulator. It will also contribute implementations of the OmniXtend cache coherence protocol.

Looking ahead

Dr. Yunsup Lee, co-founder and CTO of SiFive, said in a statement, "A healthy, vibrant semiconductor industry needs a significant number of design starts, and the CHIPS Alliance will fill this need." More information is available at the CHIPS Alliance org.

Read next:
Mapzen, an open-source mapping platform, joins the Linux Foundation project
Uber becomes a Gold member of the Linux Foundation
Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'.

Announcing DTrace for Windows Insider

Melisha Dsouza
12 Mar 2019
2 min read
Microsoft announced on its blog today that the company has added support for DTrace to its Insider builds. The forthcoming Windows 10 feature update will bring support for this debugging and diagnostic tracing tool. The support for DTrace is made possible by a port of the open-source OpenDTrace project, which was announced at the Ignite conference last year. The instructions, binaries, and source code are now available to Windows Insiders.

DTrace lets developers and administrators track kernel function calls, examine properties of running processes, and probe drivers. The DTrace scripting language allows users to specify which information is probed, and how to report that information. Hari Pulapaka, Microsoft group program manager for the Windows kernel, says that the merge will happen over the next few months, but in the meantime, Microsoft is making its DTrace source available.

Source: Microsoft blog

To run DTrace on Windows 10, users need a 64-bit Insider build 18342 or higher and a valid Insider account, and DTrace has to be run in administrator mode. In order to expose the required functionality for DTrace, Microsoft created a new kernel extension driver, traceext.sys. However, Microsoft does not plan to open source Traceext. You can head over to GitHub to download the source code for this project.

Read next:
Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models
Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more
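The article only gestures at the DTrace scripting language, so here is a hedged illustration (not from the Microsoft post) of the classic kind of one-liner it enables. It aggregates system-call entries by process name until interrupted, and on the Windows builds described above it would have to be run from an elevated prompt on a supported Insider build:

```shell
# Illustrative DTrace one-liner: count syscalls per process.
# Requires DTrace support and administrator rights; not runnable elsewhere.
dtrace -n "syscall:::entry { @[execname] = count(); }"
```

The `@[execname] = count();` aggregation is the kind of declarative reporting the DTrace scripting language is known for on other platforms.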


Are Debian and Docker slowly losing popularity?

Savia Lobo
12 Mar 2019
5 min read
Michael Stapelberg stated in his blog why he plans to reduce his involvement in the Debian software distribution. Stapelberg is the author of the i3 tiling window manager for Linux, the Debian Code Search engine, and the netsplit-free IRC network RobustIRC. He said he will reduce his involvement in Debian by:

transitioning packages to be team-maintained
removing the Uploaders field on packages with other maintainers
orphaning packages where he is the sole maintainer

Stapelberg lays out the pain points in Debian that led him to move away from it.

Change process in Debian

Debian follows a change process in which packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian. Stapelberg finds this process too heavyweight: “currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages,” he writes. “Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.”

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. In practice, non-standard hosting options are used rarely enough not to justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages.
Stapelberg said that after he noticed the workflow fragmentation in the Go packaging team, he tried to fix it with a workflow changes proposal, but did not succeed in implementing it.

Debian is hard to machine-read

“While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome.” For example, debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts; without actually installing a package, you cannot know which changes it makes to the alternatives database. There also used to be a fedmsg instance for Debian, but it no longer seems to exist. “It is unclear where to get notifications from for new packages, and where best to fetch those packages,” Stapelberg says.

A user on Hacker News said, “I've been willing to package a few of my open-source projects as well for almost a year, and out of frustration, I've ended up building my .deb packages manually and hosting them on my own apt repository. In the meantime, I've published a few packages on PPAs (for Ubuntu) and on AUR (for ArchLinux), and it's been as easy as it could have been.” Check out Stapelberg's blog post for the full argument.

Maish Saidel-Keesing believes Docker will die soon

Maish Saidel-Keesing, a Cloud & AWS Solutions Architect at CyberArk, Israel, mentions in his blog post that “the days for Docker as a company are numbered and maybe also a technology as well.”

https://twitter.com/maishsk/status/1019115484673970176

Docker undoubtedly popularized containerization technology.
However, Saidel-Keesing says, “Over the past 12-24 months, people are coming to the realization that docker has run its course and as a technology is not going to be able to provide additional value to what they have today - and have decided to start to look elsewhere for that extra edge.”

He also talks about how the Open Container Initiative brought with it the Runtime Spec, which opened the door to using something besides Docker as the runtime; Docker is no longer the only runtime in use. “Kelsey Hightower - has updated his Kubernetes the hard way over the years from CRI-O to containerd to gvisor. All the cool kids on the block are no longer using docker as the underlying runtime. There are many other options out there today clearcontainers, katacontainers and the list is continuously growing,” Saidel-Keesing says. “What triggered me was a post from Scott Mccarty - about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools.”

https://twitter.com/maishsk/status/1098295411117309952

Saidel-Keesing writes, “Lo and behold - no more docker package available in RHEL 8.” He further adds, “If you’re a container veteran, you may have developed a habit of tailoring your systems by installing the ‘docker’ package. On your brand new RHEL 8 Beta system, the first thing you’ll likely do is go to your old friend yum. You’ll try to install the docker package, but to no avail. If you are crafty, next, you’ll search and find this package: podman-docker.noarch: ‘package to Emulate Docker CLI using podman.’” To know more, head over to Maish Saidel-Keesing’s blog post.

Docker Store and Docker Cloud are now part of Docker Hub
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!


Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

Bhagyashree R
11 Mar 2019
3 min read
Last week, when Janelle Shane, a research scientist in optics, fed the famous rabbit-duck illusion to the Google Cloud Vision API, it returned “rabbit” as the result. However, when the image was rotated to a different angle, the API predicted “duck.”

https://twitter.com/JanelleCShane/status/1103420287519866880

Inspired by this, Max Woolf, a data scientist at BuzzFeed, tested further and concluded that the result really does vary based on the orientation of the image:

https://twitter.com/minimaxir/status/1103676561809539072

Google Cloud Vision provides pretrained API models that allow you to derive insights from input images. The API classifies images into thousands of categories, detects individual objects and faces within images, and reads printed words within images. You can also train custom vision models with AutoML Vision Beta.

Woolf used Python to rotate the image and get a prediction from the API for each rotation. He built the animations with R, ggplot2, and gganimate, and rendered them with ffmpeg.

In deep learning, models are often trained with the input images rotated to help the model generalize better. Seeing the results of the experiment, Woolf concluded, “I suppose the dataset for the Vision API didn't do that as much / there may be an orientation bias of ducks/rabbits in the training datasets.”

The reaction to this experiment was pretty torn. While many Reddit users felt that there might be an orientation bias in the model, others felt that, as the image is ambiguous, there is no “right answer” and hence no problem with the model. One Redditor said, “I think this shows how poorly many neural networks are at handling ambiguity.” Another commented, “This has nothing to do with a shortcoming of deep learning, failure to generalize, or something not being in the training set. It's an optical illusion drawing meant to be visually ambiguous.
Big surprise, it's visually ambiguous to computer vision as well. There's no 'correct' answer, it's both a duck and a rabbit, that's how it was drawn. The fact that the Cloud Vision API can see both is actually a strength, not a shortcoming.”

Woolf has open-sourced the code used to generate this visualization on his GitHub page, which also includes a CSV of the prediction results at every rotation. In case you are curious, you can test the Cloud Vision API yourself with the drag-and-drop UI provided by Google.

Google Cloud security launches three new services for better threat detection and protection in enterprises
Generating automated image captions using NLP and computer vision [Tutorial]
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
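The shape of Woolf's experiment can be sketched in a few lines of Python. Note that `classify` below is only a stand-in for a real Cloud Vision API call (the actual API needs authentication, a network round trip, and the rotated image bytes); its hard-coded behavior merely mimics the orientation bias the experiment observed, so the angles and labels here are illustrative, not real API output.

```python
# Sketch of the rotation experiment: query a classifier once per rotation
# angle and record which label it returns. The real experiment sent each
# rotated image to the Google Cloud Vision API; `classify` is a stub that
# fakes the observed orientation bias.

def classify(angle):
    # Stand-in for the Vision API call: the duck-rabbit drawing tends to
    # read as "rabbit" near its original orientation and as "duck" when
    # rotated roughly half a turn.
    return "rabbit" if angle % 360 < 180 else "duck"

def run_experiment(step=15):
    """Return a mapping of rotation angle -> predicted label."""
    return {angle: classify(angle) for angle in range(0, 360, step)}

results = run_experiment()
print(results[0], results[180])  # → rabbit duck
```

Plotting the label returned at each step of such a loop is essentially what Woolf's animations show: the prediction flips as the drawing rotates.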

Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws

Sugandha Lahoti
11 Mar 2019
4 min read
Update: Facebook removed and then restored several ads placed by Elizabeth Warren that called for the breakup of Facebook and other tech giants. More details here.

Last Friday, 2020 presidential hopeful Senator Elizabeth Warren penned a Medium post stating that if elected president in 2020, her administration will make big, structural changes to the tech sector to promote more competition, “including breaking up Amazon, Facebook, and Google.” She reiterated the statement at a campaign rally held in Long Island City, Queens, on Friday. Her goal: a government that makes sure all tech companies abide by the rules and that the next generation of American tech companies can flourish. She wants to stop big tech firms from abusing their reach and presence to shape laws in their favor or to buy every potential competitor.

Warren highlights two strategies that Amazon, Facebook, Google, and Apple use to achieve their level of dominance. First, they use mergers to limit competition, which government regulators allow instead of blocking for their negative long-term effects on competition and innovation. Second, these companies use proprietary marketplaces to limit competition, which can lead to a conflict of interest that undermines competition. For instance, Warren says, “Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version.”

Designate companies as ‘Platform Utilities’

Warren’s proposal includes a plan to pass legislation designating platforms with more than $25 billion in revenue as “platform utilities”: companies that offer to the public an online marketplace, an exchange, or a platform for connecting third parties. Warren says that “these companies would be prohibited from owning both the platform utility and any participants on that platform.
They would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users and will not be able to transfer or share data with third parties.” Amazon Marketplace, Google’s ad exchange, and Google Search would all be platform utilities under this law. The new requirements would be enforced by federal regulators, State Attorneys General, or injured private parties, and a company found to violate them would have to pay a fine of 5 percent of its annual revenue.

Appoint new regulators to break up mergers

Warren also says that her administration would appoint new federal regulators responsible for breaking up mergers that reduce competition, allowing next-generation tech companies to flourish in the markets. These breakups would include Amazon’s mergers with Whole Foods and Zappos, Facebook’s with WhatsApp and Instagram, and Google’s with Waze, Nest, and DoubleClick. She adds, “unwinding these mergers will promote healthy competition in the market — which will put pressure on big tech companies to be more responsive to user concerns, including about privacy.”

What does this mean?

Her main aim with this initiative is to give small companies a fair chance to compete in the market without being overrun by big tech firms. Here’s what Twitterati had to say (mostly supportive):

https://twitter.com/ZephyrTeachout/status/1104560119868723206
https://twitter.com/BatDaddyOfThree/status/1104138757110820866
https://twitter.com/dhh/status/1104076219534979072
https://twitter.com/maxwellstrachan/status/1104051512601382913

This antitrust proposal is certainly a good way to attract the attention of voters. However, it remains to be seen how effective it will be, considering that Facebook, Amazon, and Google have weathered several controversies in recent years with little lasting impact on their user bases.
Nevertheless, Warren’s plan is by far one of the biggest tech regulation plans proposed in the 2020 presidential cycle. If nothing else, it will at least spark a major debate about antitrust policy among both Democrats and Republicans.

UK lawmakers publish a report after 18-month long investigation condemning Facebook’s disinformation and fake news practices
Facebook and Google pressurized to work against ‘Anti-Vaccine’ trends after Pinterest blocks anti-vaccination content from its pinboards
Experts respond to Trump’s move on signing an executive order to establish the American AI initiative


Google updates the AI handwriting recognition feature in Gboard; it now makes 40% fewer mistakes

Natasha Mathur
11 Mar 2019
3 min read
Google announced last week that it has improved the handwriting recognition feature in Gboard, its popular keyboard for mobile devices: it is now faster and makes 20%-40% fewer mistakes than before.

Google added support for handwriting recognition in Gboard for Android last year, covering more than 100 languages. Since then, advances in machine learning have allowed Google to develop new model architectures and training methodologies. Google changed its initial approach, which relied on hand-designed heuristics, to instead build a single machine learning model that operates on the whole input and reduces error rates significantly compared with the old version. Google also published a paper titled “Fast Multi-language LSTM-based Online Handwriting Recognition” explaining its research on online handwriting recognition.

The Google team states that since Gboard is used on a range of devices and screen resolutions, the first step is normalizing the touch-point coordinates. The team then converts the sequence of points into a sequence of cubic Bézier curves, which are used as inputs to a recurrent neural network (RNN) trained to accurately identify the character being written. Bézier curves provide a consistent representation of the input across devices with different sampling rates and accuracies. Another benefit is that a sequence of Bézier curves is far more compact than the underlying sequence of input points, which makes it easier for the model to pick up temporal dependencies along the input.

Although the sequence of curves represents the input, the researchers still need to translate it into the actual written characters. Hence, a multi-layer RNN is used to process the sequence of curves and produce an output decoding matrix.
The researchers settled on a bidirectional version of quasi-recurrent neural networks (QRNNs). QRNNs alternate between convolutional and recurrent layers and offer good predictive performance. To “decode” the curves, the network produces a matrix in which each column corresponds to one input curve and each row corresponds to a letter in the alphabet; the QRNN-based recognizer thus converts the sequence of curves into a sequence of character probabilities of the same length.

Accurate recognition models alone are not enough to offer the best user experience, which is why the researchers converted their recognition models (trained in TensorFlow) to TensorFlow Lite models. “We will continue to push the envelope beyond improving the Latin-script language recognizers. The Handwriting Team is already hard at work launching new models for all our supported handwriting languages in Gboard,” states the Google team. For more information, check out the official Google AI blog.

Google Cloud security launches three new services for better threat detection and protection in enterprises
Google releases a fix for the zero day vulnerability in its Chrome browser while it was under active attack
Google open-sources GPipe, a pipeline parallelism Library to scale up Deep Neural Network training
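The two preprocessing ideas described above, normalizing touch points and representing stroke segments as cubic Bézier curves, can be illustrated with a short sketch. This is not Google's implementation: the unit-box normalization and the Bernstein-form curve evaluation below are a minimal, assumed reconstruction of the ideas in the paper, with made-up sample coordinates.

```python
# Illustrative sketch, not Gboard's pipeline:
# (1) normalize raw touch-point coordinates into a device-independent range,
# (2) evaluate a cubic Bézier curve, whose four control points can stand in
#     for many raw touch samples (the compactness benefit the article notes).

def normalize(points):
    """Scale (x, y) touch points so the stroke fits in a unit box."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    x0, y0 = min(xs), min(ys)
    return [((x - x0) / span, (y - y0) / span) for x, y in points]

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1] (Bernstein form)."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Four control points describe a whole arc of the stroke.
curve = [(0.0, 0.0), (0.25, 1.0), (0.75, 1.0), (1.0, 0.0)]
print(cubic_bezier(*curve, 0.5))  # midpoint of the arc
```

In the system the article describes, sequences of such curve parameters, rather than raw points, are what the recurrent network consumes.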