
Tech News


Stanford University launches Institute of Human Centered Artificial Intelligence; receives public backlash for non-representative faculty makeup

Fatema Patrawala
22 Mar 2019
5 min read
On Monday, Stanford University launched the new Institute for Human-Centered Artificial Intelligence (HAI) to augment humanity with AI. The institute aims to study, guide, and develop human-centered artificial intelligence technologies and applications, and to advance the goal of a better future for humanity through AI, according to the announcement. Its co-leaders are John Etchemendy, professor of philosophy and a former Stanford University provost, and Fei-Fei Li, a computer science professor and former Chief Scientist for Google Cloud AI and ML.

“So much of the discussion about AI is focused narrowly around engineering and algorithms... We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone,” Li explained in a blog post.

The institute was launched at a symposium on campus. It will include faculty members from all seven schools at Stanford — including the School of Medicine — and will work closely with companies in a variety of sectors, including health care, and with organizations such as AI4ALL. "Its biggest role will be to reach out to the global AI community, including universities, companies, governments and civil society to help forecast and address issues that arise as this technology is rolled out," said Etchemendy in the announcement. "We do not believe we have answers to the many difficult questions raised by AI, but we are committed to convening the key stakeholders in an informed, fact-based quest to find those answers."

The symposium featured a star-studded speaker lineup that included industry titans Bill Gates, Reid Hoffman, Demis Hassabis, and Jeff Dean, as well as dozens of professors in fields as diverse as philosophy and neuroscience. Even California Governor Gavin Newsom made an appearance, giving the final keynote speech. The audience included former Secretaries of State Henry Kissinger and George Shultz, former Yahoo CEO Marissa Mayer, and Instagram co-founder Mike Krieger.

Any AI initiative that government, academia, and industry all jointly support is good news for the future of the tech field. HAI differs from many other AI efforts in that its goal is not to create AI rivaling humans in intelligence, but rather to find ways for AI to augment human capabilities and enhance human productivity and quality of life. If you missed the event, you can view a video recording here.

Institute aims to be representative of humanity, but is criticized as exclusionary

While the institute's mission states that “The creators and designers of AI must be broadly representative of humanity,” observers noticed that of the 121 faculty members listed on its website, not a single member of Stanford's new AI faculty is black.

https://twitter.com/chadloder/status/1108588849503109120

There were questions as to why so many of the most influential people in the Valley decided to align with this center and publicly support it, and why the center aims to raise $1 billion to further its efforts. What does the center offer such a powerful group of people?

https://twitter.com/annaeveryday/status/1108594937145114625

As soon as such comments were made on Twitter, the institute's website was updated to include one previously unlisted faculty member: Juliana Bidadanure, an assistant professor of philosophy. Bidadanure was not listed among the institute's staff before, according to a version of the page preserved on the Internet Archive's Wayback Machine, and she also spoke at the institute's opening event.

We live in an age where predictive policing is real and can disproportionately hit minority communities, and where job hiring is handled by AI that can discriminate against women. We know that Google's and Facebook's algorithms decide what information we see and which conspiracy theory YouTube serves up next, but the algorithms making those decisions are closely guarded company secrets with global impact.

In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It's no longer a question of whether technology can discriminate. The questions now include: who can be impacted, how can we fix it, and what are we even building anyway? When a group of mostly white engineers gets together to build these systems, the impact on marginalized groups is particularly stark. Algorithms can reinforce racism in domains like housing and policing. Facebook recently announced that the platform has removed ad targeting options related to protected classes such as race, ethnicity, sexual orientation, and religion.

Algorithmic bias mirrors what we see in the real world: artificial intelligence mirrors its developers and the data sets it's trained on. Where there used to be a popular mythology that algorithms were just technology's way of serving up objective knowledge, there's now a loud and increasingly global argument about just who is building the tech and what it's doing to the rest of us.

The stated goal of Stanford's new human-AI institute is admirable, but to get to a group that is truly “broadly representative of humanity,” they've got miles to go.

Facebook and Microsoft announce Open Rack V3 to address the power demands from artificial intelligence and networking
So, you want to learn artificial intelligence. Here's how you do it.
What can happen when artificial intelligence decides on your loan request


Seaborn v0.9.0 brings better data visualization with new relational plots, theme updates, and more

Sugandha Lahoti
24 Jul 2018
3 min read
Seaborn, the popular Python data visualization library, has become a timely and relevant tool for data professionals seeking to enhance their data visualizations. Recognizing this, the team behind Seaborn has released Seaborn v0.9.0, a major release with several substantial features and notable API name changes for better consistency with matplotlib 2.0.

Three new relational plots

Seaborn v0.9.0 features three new plotting functions: relplot(), scatterplot(), and lineplot(). These functions bring the high-level API of the categorical plotting functions to more general plots. They can visualize a relationship between two numeric variables and map up to three additional variables via the hue, size, and style semantics. relplot() is a figure-level interface to the two axes-level plotting functions and combines them with a FacetGrid. The lineplot() function supports statistical estimation and replaces the older tsplot function; it is better aligned with the API of the rest of the library and more flexible in showing relationships across additional variables. For a detailed explanation of these functions with examples of the various options, go through the API reference and the relational plot tutorial.

Notable API name changes

Seaborn has renamed a few functions and changed some of their default parameters:

The factorplot function has been renamed to catplot(). The catplot() function shows the relationship between a numerical variable and one or more categorical variables using one of several visual representations. This change is expected to make catplot() easier to discover and to define its role better.
The lvplot function has been renamed to boxenplot(). The new name makes the plot more discoverable by describing its format (it plots multiple boxes, also known as “boxen”).
The size parameter has been renamed to height in the multi-plot grid objects (FacetGrid, PairGrid, and JointGrid) and in the functions that use them (factorplot, lmplot(), pairplot(), and jointplot()). This avoids conflicts with the size parameter used in the scatterplot and lineplot functions and also makes the meaning of the parameter a bit clearer.
The default diagonal plots in pairplot() now use kdeplot() when a "hue" dimension is used. Also, the statistical annotation component of JointGrid is deprecated.

Themes and palettes updates

Several changes have been made to the seaborn style themes, context scaling, and color palettes to make them more consistent with the style updates in matplotlib 2.0. Here are some of the changes:

Some axes_style()/plotting_context() parameters have been reorganized and updated to take advantage of improvements in the matplotlib 2.0 update.
The seaborn palettes (“deep”, “muted”, “colorblind”, etc.) have been updated to correspond with the new 10-color matplotlib default. A few individual colors have also been tweaked for better consistency, aesthetics, and accessibility.
The base font sizes in plotting_context() and the scaling factors for the "talk" and "poster" contexts have been slightly increased.
Calling set() will now call set_color_codes() to re-assign the single-letter color codes by default.

Apart from that, the introduction to the library in the documentation has been rewritten to provide more information and critical examples. These are just a select few major updates; for a full list of features, upgrades, and improvements, read the changelog.
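For a quick feel of the new relational API, here is a minimal sketch using the fmri example dataset bundled with seaborn; the calls follow the v0.9.0 release notes, though figure styling will vary by environment:

```python
import seaborn as sns
import matplotlib.pyplot as plt

fmri = sns.load_dataset("fmri")  # example dataset bundled with seaborn

# lineplot() replaces the older tsplot: it aggregates repeated measures
# per x value and draws the mean with a bootstrapped confidence band.
sns.lineplot(x="timepoint", y="signal", hue="event", style="event", data=fmri)
plt.show()

# relplot() is the figure-level interface; it combines the axes-level
# functions with a FacetGrid, so extra variables can map to facet columns.
sns.relplot(x="timepoint", y="signal", hue="event", col="region",
            kind="line", data=fmri)
plt.show()
```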
What is Seaborn and why should you use it for data visualization?
Visualizing univariate distribution in Seaborn
8 ways to improve your data visualizations


Introducing SaltStack Protect, a new SecOps solution for automated discovery and remediation of security vulnerabilities

Fatema Patrawala
21 Nov 2019
3 min read
On Tuesday, SaltStack, the creators of intelligent automation for IT operations and security teams, announced the general availability of SaltStack Protect, a new product for the automated discovery and remediation of security vulnerabilities across web-scale infrastructure. It joins the SaltStack SecOps family of products alongside SaltStack Comply, which automates the work of continuous compliance and has been updated with new CIS Benchmark content and a new SDK for the creation of custom security checks. The SaltStack SecOps products provide a collaborative platform for both security and IT operations teams, helping customers break down organizational silos and offset security and IT skills gaps and talent shortages.

“The massive amount of coordination and work required to actually fix thousands of infrastructure security vulnerabilities as quickly as possible is daunting. Vulnerability assessment and management tools require integrated and automated remediation to close the loop on IT security. SaltStack Protect gives security operations teams the power to control, optimize, and secure the entirety of their IT infrastructure while helping teams collaborate to mitigate risk,” said Marc Chenn, SaltStack CEO.

Key features in SaltStack Protect

As per the team, SaltStack Protect automates the remediation of vulnerabilities by delivering closed-loop workflows to scan, detect, prioritize, and fix critical security threats. Other capabilities include:

Native CVE scanning – SaltStack Protect scans both on-premise and cloud systems to detect threats based on more than 12,000 CVEs across operating systems and infrastructure.
Intelligent vulnerability prioritization – To assess and prioritize threats for remediation, SaltStack collects real-time data on the configuration state of every asset in an environment and combines it with vulnerability information from SaltStack Protect to accurately differentiate the vulnerabilities that are exploitable from those that are not.
Automated remediation – SaltStack Protect brings the power of automation to SecOps teams with an API-first solution that scans IT systems for vulnerabilities and then provides out-of-the-box automation workflows to remediate them.

As per the company, the SaltStack SecOps products are built on SaltStack Enterprise, delivering a single platform for frictionless collaboration between security and IT teams. This has given users a 95% decrease in the time required to find and fix critical vulnerabilities. While traditional security scanning tools report vulnerabilities that operations teams must investigate, prioritize, test, fix, and then report back to security, SaltStack eliminates nearly all the manual steps associated with vulnerability remediation, potentially saving time, resources, and redundant tools.

SaltStack is used by many IT operations, DevOps, and site reliability engineering organizations around the world, such as IBM Cloud, eBay, and TD Bank. If you are interested to know more about this news, check out the official blog post. Additionally, SaltStack Comply and SaltStack Protect are available via subscription, and you can schedule a trial demo too.

DevSecOps and the shift left in security: how Semmle is supporting software developers [Podcast]
Why do IT teams need to transition from DevOps to DevSecOps?
5 reasons poor communication can sink DevSecOps
2019 Deloitte tech trends predictions: AI-fueled firms, NoOps, DevSecOps, intelligent interfaces, and more
Can DevOps promote empathy in software engineering?


Rails 6 will be shipping source maps by default in production

Amrata Joshi
30 Jan 2019
3 min read
The developer community surely owes respect to the innovation of ‘View Source’, which has made things much easier for coders. Now David Heinemeier Hansson, the creator of Ruby on Rails, has made a move to make programmers' lives easier by announcing that Rails 6 will ship source maps by default in production.

Source maps let developers view code as it was written by its author, with comments, understandable variable names, and all the other help that makes it possible for programmers to understand the code. They are sent to users over the wire only when the dev tools are open in the browser. Source maps have so far been seen merely as a local development tool, not something to ship to production, yet live debugging makes things easier for developers.

According to Hansson's post, all the JavaScript that runs Basecamp 3 under Webpack now has source maps. He said, “We're still looking into what it'll take to get source maps for the parts that were written for the asset pipeline using Sprockets, but all our Stimulus controllers are compiled and bundled using Webpack, and now they're easy to read and learn from.”

Hansson is also a partner at the web-based software development firm Basecamp. He said that 90% of all the code that runs Basecamp is open source, in the form of Ruby on Rails, Turbolinks, and Stimulus. He further added, “I like to think of Basecamp as a teaching hospital. The care of our users is our first priority, but it's not the only one. We also take care of the staff running the place, and we try to teach and spread everything we learn. Pledging to protect View Source fits right in with that.”

Sam Saffron, the co-founder of Discourse, said, “I just wanted to voice my support for bringing this back by @dhh. We have been using source maps at Discourse now for 4 or so years, including maps for both JS and SCSS in production, default on.” According to him, one of the important reasons to enable source maps in production is that JS frameworks often have "production" and "development" modes. He said, “I have seen many cases over the years where a particular issue only happens in production and does not happen in development. Being able to debug properly in production is a huge life saver. Source maps are not the panacea as they still have some limitations around local var unmangling and other edge cases, but they are 100 times better than working through obfuscated minified code with magic formatting enabled.”

According to Sam, one remaining performance concern is the cost of precompilation. The cost was minimal at Discourse, but the cost for a large number of source maps is unpredictable. Users discussed this issue on a GitHub thread two years ago, and according to most of them the precompile build times will be reduced. One user commented on GitHub, “well-generated source maps can actually make it very easy to rip off someone else's source.” Another comment reads, “Source maps are super useful for error reporting, as well as for analyzing bundle size from dependencies. Whether one chooses to deploy them or not is their choice, but producing them is useful.”

Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
GitHub addresses technical debt, now runs on Rails 5.2.1
Introducing Web Application Development in Rails


Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and the results and progress of Microsoft's modern workplace, business applications, infrastructure, data, AI, and gaming. He also mentions the data and privacy rules adopted by Microsoft, and its commitment to “instill trust in technology across everything they do.”

Microsoft's results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Their Azure Bot Service has nearly 300,000 developers, and they are on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub in recognition of the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft's investments in Power BI have made them the leader in business analytics in the cloud. Their Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and the Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. They added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded their global data center footprint to 54 regions, and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is used on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in revenue this year for gaming. Xbox Live now has 57 million monthly active users, and they are investing in new services like Mixer and Game Pass. They also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of their partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft's motto is “instilling trust in technology across everything they do.” Nadella says, “We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices.” Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. They also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and are calling on governments to do more to make the internet safe. They announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating government regulation. They are addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, “Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work.” Microsoft has nearly doubled the number of women corporate vice presidents since FY16, and has increased African American/Black and Hispanic/Latino representation by 33 percent.

He concludes by saying, “I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era.” Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


OpenAI Five loses against humans in Dota 2 at The International 2018

Amey Varangaonkar
27 Aug 2018
3 min read
Looks like OpenAI's intelligent game-playing bots need to get a little more street smart before they can beat the world's best. Played as a promotional side event at The International, the annual Dota 2 tournament, OpenAI Five was beaten by teams of top human professional players in the first two games of the best-of-three contest. Both games were intense and lasted approximately an hour, but the human teams emerged victorious quite comfortably.

OpenAI Five, as we know, is a team of five artificially intelligent bots developed by OpenAI, the research institute co-founded by Tesla CEO Elon Musk to develop and research human-level artificial intelligence. The bots are trained specifically to play Dota 2 against top human professionals.

While OpenAI Five racked up more kills in the games than the human teams paiN Gaming and Big God, it lacked a cohesive strategy and wasted many opportunities to gather and utilize in-game resources efficiently, which is often the difference between a win and a loss. The loss highlights the fact that while the bots are on the right track, more improvement is needed in how they adjust to their surroundings and make tactical decisions on the go. Mike Cook, a researcher at Falmouth University, UK, agrees; his criticism is that the bots lacked decision-making at the macro level while having their own moments of magic in the game.

https://twitter.com/mtrc/status/1032430538039148544

Greg Brockman, CTO and co-founder of OpenAI, meanwhile, was not worried about the loss, saying that it is the defeats that will make OpenAI Five better and more efficient. He was of the opinion that the AI was designed to learn and adapt from experience before being able to beat the human players. According to Greg, OpenAI Five is very much still a work in progress.

https://twitter.com/gdb/status/1032830230103244800

The researchers at OpenAI are hopeful that OpenAI Five will improve from this valuable learning experience and put up a much tougher fight in the next edition of the tournament, since there won't be a third game this year. As things stand, though, it's pretty clear that human players aren't going to be replaced by AI bots anytime soon.

See Also:
AI beats human again – this time in a team-based strategy game
Build your first Reinforcement learning agent in Keras
A new Stanford artificial intelligence camera uses a hybrid optical-electronic CNN for rapid decision making

Cloud Filestore: A new high performance storage option by Google Cloud Platform

Vijin Boricha
27 Jun 2018
3 min read
Google recently came up with a new storage option for developers in its cloud: Cloud Filestore, now in beta and launching next month, according to the Google Cloud Platform blog. Applications that require a filesystem interface and a shared filesystem for data can leverage this file storage service. It provides a fully managed Network Attached Storage (NAS) service that integrates with Google Compute Engine and Kubernetes Engine instances.

Developers can leverage Filestore for high-performing file-based workloads, and enterprises can now easily run applications that depend on a traditional file system interface on Google Cloud Platform. Traditionally, if applications needed a standard file system, developers had to improvise a file server with a persistent disk. Filestore does away with such workarounds and allows GCP developers to spin up storage as needed.

Filestore offers high throughput, low latency, and high IOPS (input/output operations per second). The service is available in two tiers: premium and standard. The premium tier costs $0.30/GB/month and promises a max throughput of 700 MB/s and 30,000 max IOPS. The standard tier costs $0.20/GB/month with 180 MB/s max throughput and 5,000 max IOPS.

A snapshot of Filestore features

Filestore was introduced at the Los Angeles region launch and majorly focused on the entertainment and media industries, where there is a great need for shared file systems for enterprise applications. But the service is not limited to the media industry; other industries that rely on similar enterprise applications can also benefit from it.

Benefits of using Filestore

A lightning-speed experience

Filestore provides high IOPS for latency-sensitive workloads such as content management systems, databases, random I/O, or other metadata-intensive applications. This results in minimal variability in performance.

Consistent performance throughout

Cloud Filestore ensures that one pays a predictable price for predictable performance. Users can independently choose the preferred IOPS tier (standard or premium) and storage capacity with Filestore. With these options, users can fine-tune their filesystem for a particular workload and will experience consistent performance for that workload over time.

Simplicity at its best

Cloud Filestore, a fully managed, NoOps service, is integrated with the rest of the Google Cloud portfolio. One can easily mount Filestore volumes on Compute Engine VMs. Filestore is also tightly integrated with Google Kubernetes Engine, which allows containers to refer to the same shared data.

To know more about this exciting release, visit the Cloud Filestore official website.

Related Links
AT&T combines with Google cloud to deliver cloud networking at scale
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
GitLab is moving from Azure to Google Cloud in July


Google introduces Season of Docs that will connect technical writers and mentors with open source projects

Amrata Joshi
13 Mar 2019
2 min read
Just two days ago, the team at Google announced Season of Docs, a new program that will connect technical writers with open source projects so that they can work together on open source documentation.

https://twitter.com/GoogleOSS/status/1105138318826627072

According to the Open Source Survey, documentation is highly valued in open source communities, yet it is still difficult to get worked on. Structuring a documentation site so that people can easily understand the content takes the expertise of technical writers, who also know the procedures for writing docs that fit the needs of their audience. Technical writers can help optimize a community's processes for open source contribution and for onboarding new contributors.

With Season of Docs, technical writers can spend a few months working closely with open source communities. Writers can work with their chosen open source project and also explore the latest technologies. Mentors from open source organizations can share their knowledge of their communities' processes and tools. Together, the technical writers and mentors can build a new doc set, improve the structure of the existing docs, work on tutorials, and improve contribution processes and guides. According to the team, the project will raise awareness of open source, docs, and technical writing.

Open source organizations can apply to participate in Season of Docs from April 2 to April 23. Google will then publish the list of accepted mentoring organizations, along with their ideas for documentation projects, on April 30. In July, Google will announce the accepted technical writer projects. The technical writers will then get a chance to work with mentors on the accepted projects and submit their work between September 2 and November 29. Google will publish the list of successfully completed projects by December 10.

To know more about this news, check out Google's blog post.

Google Cloud Console Incident Resolved!
Google confirms it paid $135 million as exit packages to senior execs accused of sexual harassment
Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias


Visual Studio Code July 2018 release, version 1.26 is out!

Savia Lobo
14 Aug 2018
3 min read
The July 2018 release of Visual Studio Code, version 1.26, is now available. This version includes new navigation features, the ability to apply a Quick Fix to any problem, improved extension management, and much more.

What's new in Visual Studio Code 1.26?

Breadcrumbs

The Visual Studio Code editor now has a navigation bar above its contents called Breadcrumbs. It displays the current location and allows quick navigation between symbols and files in the workspace.

Quick Fixes from the Problems panel

Quick Fixes can now be applied from the Problems panel while reviewing warnings and errors. When a problem entry is hovered over or selected, the respective Quick Fixes are shown via a light bulb indicator. Quick Fixes can be applied either by clicking on the light bulb or by opening the context menu for the problem entry.

User setup on Windows

The user setup package for Windows, announced in the previous release, is now available. This setup does not require Administrator privileges to install, and it provides a smoother background update experience. Current users of the system-wide Windows setup will be prompted to switch to the user setup, and new users will be directed towards it by default via the Visual Studio Code download page.

Terminal column selection

Column selection is now supported within the Integrated Terminal via Alt+click.

Add all missing imports with a single Quick Fix

The "Add missing import" Quick Fix can now be applied to all missing imports in a JavaScript/TypeScript file with a single action.

JSX tag completion

JSX tags in JavaScript/TypeScript now work like tags in HTML: they are closed automatically when you type '>'. Auto-closing of tags can be disabled by setting "javascript.autoClosingTags": false and "typescript.autoClosingTags": false.

Better JS/TS error reporting

The TypeScript team has done a lot of work to make JavaScript and TypeScript error messages smarter and clearer. Some error messages now include links to relevant locations in the source code.

Improved extension search

This release adds IntelliSense autocompletion to the extension search field, making it easier to refine extension searches to filter results by attributes like category and install state, or to sort results by name, rating, or install count.

Extension Pack management

Extension Pack management has been improved in this release. An Extension Pack is installed, uninstalled, enabled, or disabled as a single pack, but one can now uninstall or disable an individual extension belonging to an Extension Pack without uninstalling or disabling the entire pack. Extension Packs can thus be managed as a single unit or by individual extension. There is also a new Extension Pack tab which displays the extensions bundled in the pack.

Preview: Settings editor

This version includes a preview of a GUI for editing settings. To try it out, run the Preferences: Open Settings (Preview) command. It offers rich settings descriptions, a "Table of Contents" that tracks scrolling, and much more.

Read more about these features in detail in the Visual Studio Code July 2018 (version 1.26) release notes.

Microsoft releases the Python Language Server in Visual Studio
Debugging Xamarin Application on Visual Studio [Tutorial]
Visual Studio 2019: New features you should expect to see


GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of the cloud-first database FaunaDB, announced the general availability of its GraphQL API. GraphQL is a query language for APIs; with support for it, FaunaDB now allows developers to use the API of their choice to manipulate all their data. GraphQL also boosts developer productivity by enabling fast, easy development of serverless applications, and it makes FaunaDB the only serverless backend with support for universal database access.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna’s GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna’s work as the first company to bring a serverless GraphQL database to market.”

GraphQL lets developers specify the shape of the data they need without requiring changes to the backend components that provide that data. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic, while front-end teams concentrate on presentation and usability.

The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, as per Zion Research. GraphQL brings growth and development to serverless development, so developers can look for back-end GraphQL support like that found in FaunaDB.

GraphQL defines three general operations: queries, mutations, and subscriptions; currently, FaunaDB natively supports queries and mutations. FaunaDB's GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage.

No limits on data history

FaunaDB is the only database that provides support without any limits on data history. Any API in FaunaDB, such as SQL, can return data as of any given time.

Consistency

FaunaDB provides the highest consistency levels for its transactions, applied automatically to all APIs.

Authorization

FaunaDB provides access control at the row level, applicable to all APIs, be it GraphQL or SQL.

Shared data access

It also features shared data access, so data written by one API (e.g., GraphQL) can be read and modified by another API such as FQL.

To know more about the news, check out the press release.
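As a rough illustration of the developer experience, here is a minimal sketch of issuing a GraphQL query from Python. The endpoint and bearer-token header follow Fauna's documented GraphQL setup, but the schema (an allProducts query over a Product collection) is hypothetical:

```python
import requests

# Fauna serves GraphQL for an imported schema at this endpoint;
# the query below assumes a hypothetical Product collection.
ENDPOINT = "https://graphql.fauna.com/graphql"
QUERY = """
query {
  allProducts {
    data {
      name
      price
    }
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": QUERY},
    headers={"Authorization": "Bearer YOUR_FAUNA_SECRET"},  # placeholder secret
)
resp.raise_for_status()
print(resp.json()["data"]["allProducts"]["data"])
```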
7 reasons to choose GraphQL APIs over REST for building your APIs
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Implementing routing with React Router and GraphQL [Tutorial]

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold’em poker. Pluribus defeated pro players in both a “five AIs + one human player” format and a “one AI + five human players” format: it was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. It builds on Libratus, their previous poker-playing AI, which defeated professionals at Heads-Up Texas Hold ’Em, a two-player game, in 2017.

Mastering six-player poker is difficult for AI because of the number of possible actions. First, since the game involves six players, it has many more variables, and the bot can't work out a perfect strategy for each game as it could for a two-player game. Second, poker involves hidden information: a player only has access to the cards they can see, so the AI has to take into account how it would act with different cards so that it isn't obvious when it has a good hand.

Brown wrote on a Hacker News thread, “So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker.”

What went behind Pluribus?

Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. This self-play produces a strategy for the entire game offline, called the blueprint strategy.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to the blueprint strategy only in the first betting round (of four), where the number of decision points is small enough that the blueprint can afford to forgo information abstraction and include many actions in its action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in; this online search algorithm can efficiently evaluate its options by searching just a few moves ahead rather than only to the end of the game.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory: less than $150 worth of cloud computing resources.
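Pluribus's code is not public, but the core update inside CFR, regret matching, is easy to sketch. The toy below applies Monte Carlo-sampled regret matching to rock-paper-scissors rather than poker, purely to show the mechanic: positive accumulated regrets determine the mixed strategy, and the average strategy converges toward the Nash equilibrium. All names here are illustrative, not Facebook's implementation:

```python
import numpy as np

np.random.seed(0)
N_ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a, b] = payoff to the player choosing action a against action b
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])

def strategy_from_regrets(regrets):
    """Regret matching: play actions in proportion to their positive regret."""
    pos = np.maximum(regrets, 0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(N_ACTIONS, 1 / N_ACTIONS)

regrets = np.zeros(N_ACTIONS)
strategy_sum = np.zeros(N_ACTIONS)
for _ in range(100_000):
    strategy = strategy_from_regrets(regrets)
    strategy_sum += strategy
    # Monte Carlo flavor: sample one action per player instead of
    # traversing every branch of the game tree on each iteration.
    a = np.random.choice(N_ACTIONS, p=strategy)
    b = np.random.choice(N_ACTIONS, p=strategy)  # self-play opponent
    # Counterfactual regret: how much better each alternative action
    # would have done against the opponent's sampled action.
    regrets += PAYOFF[:, b] - PAYOFF[a, b]

print(strategy_sum / strategy_sum.sum())  # approaches [1/3, 1/3, 1/3]
```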
The researchers trained the blueprint strategy for Pluribus in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute budget. She commented on Hacker News, “That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms.” She also noted how this is significantly less compute than ML projects use at DeepMind and OpenAI: “In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently,” she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot has broader implications in reality because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Apart from fighting online harm, four-time World Poker Tour title holder Darren Elias, who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker: "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus' developer, on the other hand, is optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and its workings, read Facebook's blog.

DeepMind’s AlphaStar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend


ScyllaDB announces Scylla 3.0, a NoSQL database surpassing Apache Cassandra in features

Prasad Ramesh
09 Nov 2018
2 min read
ScyllaDB announced Scylla 3.0, a NoSQL database, at the Scylla Summit 2018 this week. Scylla is written in C++ and now delivers 10x the throughput of Apache Cassandra.

New features in Scylla 3.0

This release is a milestone for Scylla, as it surpasses Apache Cassandra in features.

Concurrent OLTP and OLAP support

Scylla 3.0 enables its users to safely balance real-time operational workloads with big data analytical workloads within a single database cluster. Online transaction processing (OLTP) and online analytical processing (OLAP) take very different approaches to accessing data: OLTP encompasses many small and varied transactions, including mixed writes, updates, and reads with a high sensitivity to latency, while OLAP emphasizes the throughput of broad scans spanning datasets. With the addition of capabilities that isolate workloads, Scylla uniquely supports simultaneous OLTP and OLAP workloads while maintaining low latency and high throughput.

Materialized views are production-ready

Materialized views were an experimental feature in Scylla for a long time; they are now production-ready. Materialized views are designed to enable automatic server-side table denormalization. Notably, the Apache Cassandra community reverted materialized views from production-ready to experimental in 2017.

Secondary indexes

This is another feature that becomes production-ready with the Scylla 3.0 release. Scylla's global secondary indexes can scale to clusters of any size, unlike the local-indexing approach adopted by Apache Cassandra. Secondary indexes allow users to query data via non-primary-key columns.

Cassandra 3.x file format compatibility

Scylla 3.0 includes support for the Apache Cassandra 3.x compatible file format (SSTable), which improves performance and reduces storage volume by three times.

With its shared-nothing approach, Scylla has increased throughput and storage capacity to 10x that of Apache Cassandra. Scylla Open Source 3.0 has a close-to-the-hardware design to use modern servers optimally, and it is written from scratch in C++ for significant improvements in throughput and latency: Scylla consistently achieves a 99th-percentile tail latency of less than 1 millisecond.

To know more about Scylla, visit the ScyllaDB website.
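Because Scylla speaks the same CQL wire protocol as Cassandra, the standard Python cassandra-driver can exercise both newly production-ready features. The sketch below assumes a hypothetical demo keyspace with a users table (user_id primary key, plus city and email columns):

```python
from cassandra.cluster import Cluster  # Scylla is compatible with Cassandra drivers

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")  # hypothetical keyspace

# Materialized view: automatic server-side denormalization of the users
# table, re-keyed so rows can be looked up by city instead of user_id.
session.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS users_by_city AS
        SELECT * FROM users
        WHERE city IS NOT NULL AND user_id IS NOT NULL
        PRIMARY KEY (city, user_id)
""")

# Global secondary index: query by a non-primary-key column.
session.execute("CREATE INDEX IF NOT EXISTS ON users (email)")
rows = session.execute("SELECT * FROM users WHERE email = %s",
                       ("someone@example.com",))
for row in rows:
    print(row)
```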
Why MongoDB is the most popular NoSQL database today
TimescaleDB 1.0 officially released
PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation


Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset

Amrata Joshi
21 May 2019
3 min read
Today, the team at Google announced a new version of Google Glass, called Glass Enterprise Edition 2: an augmented reality headset that is now an official Google product.

https://youtu.be/5IK-zU51MU4

Glass Enterprise Edition has been useful for workers in a variety of industries, ranging from logistics to manufacturing to field services. It helps workers access checklists, view instructions, send inspection photos or videos, and more. The headset is no longer part of X, the “Moonshot Factory” of Google's parent company Alphabet. The official blog reads, “Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.”

https://twitter.com/Theteamatx/status/1130504636501090305
https://twitter.com/jetscott/status/1130506213379235840

Glass Enterprise Edition 2 helps businesses improve the efficiency of their employees. It costs $999 and is not being sold directly to consumers.

Features of Glass Enterprise Edition 2

An improved camera with better performance and quality, building on the existing first-person video streaming and collaboration features.
A new processor built on the Qualcomm Snapdragon XR1 platform, providing a powerful multicore CPU (central processing unit) and a new artificial intelligence engine.
A USB-C port for fast charging.
A thicker, bulkier design that fits a larger 820mAh battery, compared to the original's 570mAh.
Power savings, enhanced performance, and support for computer vision and advanced machine learning capabilities.

The Google team further mentions, “We’ve also partnered with Smith Optics to make Glass-compatible safety frames for different types of demanding work environments, like manufacturing floors and maintenance facilities.”

Glass Enterprise Edition 2 is built on Android, which makes it easier for customers to integrate the services and APIs they already use, and it supports Android Enterprise Mobile Device Management in order to scale deployments. Other big tech companies, such as Microsoft, Vuzix, and Epson, are also working on business-focused augmented reality glasses to strengthen their positions in this league.

To know more about this news, check out the official blog post by Google.

Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model
Introducing Minecraft Earth, Minecraft’s AR-based game for Android and iOS users
As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing

“ChromeOS is ready for web development” - A talk by Dan Dascalescu at the Chrome Web Summit 2018

Sugandha Lahoti
15 Nov 2018
3 min read
At the Chrome Web Summit 2018, Dan Dascalescu, Partner Developer Advocate at Google, provided a high-level overview of ChromeOS and discussed Chrome's core and new features available to web developers. Topics included best practices for web development, including Progressive Web Apps, and optimizing input and touch for tablets while keeping desktop users in mind.

He specified that Chromebooks are convergence machines that run Linux, Android, and Google Play natively, without emulation, and explained why ChromeOS can be a good choice for web developers: it not only powers devices from sticks to tablets to desktops, it can also run web, Android, and now Linux applications. ChromeOS brings together your own development workflow with a variety of form factors, from mobiles to tablets to desktops, and browsers on Android and Linux.

Run Linux apps on ChromeOS with Crostini

Stephen Barber, an engineer on ChromeOS, described Chrome's container architecture, which is based on Chrome's principles of safety, security, and reliability. By using lightweight containers and hardware virtualization support, Android and Linux code run natively in ChromeOS.

Developers can run Linux apps on ChromeOS through Project Crostini. Crostini is based on Debian stable and uses both virtualization and containers to provide security in depth. For now, the team is starting out targeting web developers by providing integration features like port forwarding to localhost as a secure origin. They also provide a penguin.linux.test DNS alias to treat a container like a separate system. To support more developer workflows than just web, USB, GPU, audio, FUSE, and file sharing support are coming in upcoming releases.

Dan also shared how Crostini is actually used for developing web apps, demonstrating how easily Linux can be installed on a Chromebook. Although Crostini is still in development, most things work as expected. Developers can run IDEs and databases like MongoDB or MySQL; anything can be installed with apt. There is also a terminal.

Dan also mentioned Carlo, a Google project that is essentially a helpful Node.js app framework. It provides applications with Chrome rendering capabilities: it uses a locally detected instance of Chrome, connects to your process pipe, and then exposes a high-level API to render in Chrome from your Node script. If you don't need low-level features, you can build your app as a PWA, which works without a launch bar once installed in ChromeOS. Windows desktop PWA support will be available in Chrome 70+ and Mac support in Chrome 72+.

Dan also conducted a demo on how to run a PWA. These were the steps:

Set up Crostini
Install the development environment (node, npm, VSCode)
Check out a PWA (Squoosh) from GitHub
Open it in VSCode
Run the web server
Open the PWA from the Linux and Android browsers

He also provided guidance on optimizing forms, handling touch interactions and pointer events, and setting up remote debugging.

What does the future look like for ChromeOS?

The Chrome team is working on improving desktop PWA support. This includes support for keyboard shortcuts, badging for the launch icon, and link capturing. They are also working on low-latency canvas contexts, introduced in Chrome 71 Beta. This context uses OpenGL ES for rasterization and writes directly to the front buffer, which bypasses several steps of the rendering process at the risk of tearing. It is aimed mainly at highly interactive apps.

View the full talk on YouTube.

Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications


Google Cloud and GO-JEK announce Feast, a new open source feature store for machine learning

Natasha Mathur
21 Jan 2019
3 min read
Google Cloud announced the release of Feast last week: a new open source feature store that helps organizations better manage, store, and discover features for their machine learning projects. Feast, a collaboration between Google Cloud and GO-JEK (an Indonesian tech startup), is an open, extensible, and unified platform for feature storage.

“Feast is an essential component in building end-to-end machine learning systems at GO-JEK. We’re very excited to release it to the open source community,” says Peter Richens, Senior Data Scientist at GO-JEK.

It has been developed with the aim of solving common challenges faced by machine learning development teams, including:

Machine learning features not being reused (features representing similar business concepts get redeveloped many times when existing work from other teams could have been reused).
Feature definitions varying (teams define features differently, and often there is no easy access to the documentation of a feature).
Difficulty serving up-to-date features (teams are hesitant to use real-time data).
Inconsistency between training and serving (training requires historical data, whereas prediction models require the latest values; when data is broken down across various independent systems, the systems require separate tooling, which leads to inconsistencies).

Feast gets rid of these challenges by providing teams with a centralized platform that allows them to easily reuse features developed by other teams across different projects. And as more features are added to the store, it becomes cheaper to build models.

Feast manages the ingestion of data by unifying both batch and streaming sources (using Apache Beam) into the feature warehouse and feature serving stores. Users can then query features in the warehouse using the same set of feature identifiers. It also allows easy access to historical feature data, which in turn can be used to produce datasets for training models. Moreover, Feast allows teams to capture documentation, metadata, and metrics about features, so teams can communicate clearly about them.

Feast aims to be deployable on Kubeflow in the future and to integrate seamlessly with other Kubeflow components, such as a Python SDK for use with Kubeflow's Jupyter notebooks and Kubeflow Pipelines, since Kubeflow focuses on improving the packaging, training, serving, orchestration, and evaluation of models. “We hope that Feast can act as a bridge between your data engineering and machine learning teams,” says the Feast team.

For more information, check out the official Google Cloud announcement.
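To make the training/serving consistency point concrete, here is a minimal, self-contained sketch of the feature-store pattern Feast implements: values are ingested once, then retrieved either as the latest value (serving) or as a point-in-time correct value (training). The class and method names are purely illustrative, not Feast's actual SDK:

```python
from collections import defaultdict

class FeatureStore:
    """Toy feature store: one log of timestamped values per (entity, feature)."""

    def __init__(self):
        self._history = defaultdict(list)  # (entity_id, feature) -> [(ts, value)]

    def ingest(self, entity_id, feature, timestamp, value):
        log = self._history[(entity_id, feature)]
        log.append((timestamp, value))
        log.sort()  # keep each log ordered by timestamp

    def get_latest(self, entity_id, features):
        # Serving path: the most recent value per feature.
        return {f: self._history[(entity_id, f)][-1][1] for f in features}

    def get_historical(self, entity_id, features, as_of):
        # Training path: point-in-time correct values, which is what
        # prevents train/serve skew when building training datasets.
        out = {}
        for f in features:
            past = [(t, v) for t, v in self._history[(entity_id, f)] if t <= as_of]
            out[f] = past[-1][1] if past else None
        return out

store = FeatureStore()
store.ingest("driver_42", "trips_today", 1, 3)
store.ingest("driver_42", "trips_today", 5, 7)
print(store.get_latest("driver_42", ["trips_today"]))         # {'trips_today': 7}
print(store.get_historical("driver_42", ["trips_today"], 2))  # {'trips_today': 3}
```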
Watson-CoreML: IBM and Apple's new machine learning collaboration project
Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs
Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google