
Tech News - Data


FaunaDB brings its serverless database to Netlify to help developers create apps

Vincy Davis
10 Sep 2019
3 min read
Today, Fauna announced the integration of its serverless cloud database FaunaDB with Netlify to help developers build and deploy modern stateful serverless apps. As part of this new integration, FaunaDB will also integrate with Netlify OAuth and provide users with single sign-on (SSO) access to their database through the FaunaDB Cloud Console or Shell. The integration will increase users' productivity, as data will now be immediately available without any additional provisioning steps. This has been a long-standing demand from the JAMstack community, as users found the earlier provisioning process inconvenient.

Fauna CEO Evan Weaver says, "This integration is significant for developers, who by and large are moving to serverless platforms to build next-generation applications, yet many of them don't have experience building and provisioning databases. Users also benefit because they can now build an app with a full-featured version of FaunaDB and easily deploy it on the Netlify platform."

Read Also: FaunaDB now offers a "Managed Serverless" service combining Fauna's serverless database with a managed solution

On the other hand, through this end-to-end integration, Netlify users will also be able to create serverless database instances from within the Netlify platform. They can also log in to the FaunaDB Cloud Console with their Netlify account credentials. Matt Biilmann, the Netlify CEO, says, "Now our users can use FaunaDB as a stateful backend for their apps with no additional provisioning. They can also test and iterate within its generous free tier, and transparently scale as the project achieves critical mass. The new FaunaDB Add-on is a great enhancement to our platform."

How will users benefit from the FaunaDB add-on for Netlify?

- Users will be able to instantly create a FaunaDB database instance from within the Netlify development environment.
- Query data via GraphQL or use the Fauna Query Language (FQL) for complex functions.
- Data can be accessed using relational, document, graph, and temporal models.
- The full range of FaunaDB's capabilities, like built-in authentication, transparent scalability, and multi-tenancy, is available to users.
- Existing Netlify credentials can be used, via OAuth, to log in directly to a FaunaDB account.
- Database instances can be managed by the add-on through the FaunaDB Cloud Console and Shell for easy use.

Read Also: Fauna announces Jepsen results for FaunaDB 2.5.4 and 2.6.0

Latest news in Data
Google open sources their differential privacy library to help protect user's private data
What can you expect at NeurIPS 2019?
Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
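Returning to the add-on's query options: as a concrete illustration of the GraphQL path mentioned above, here is a minimal Python sketch that sends a query to a FaunaDB GraphQL endpoint. The endpoint URL, the `allTodos` query, and the secret are illustrative assumptions for this sketch, not details from Fauna's announcement; the add-on's actual setup may differ.

```python
import requests

# Illustrative sketch only: the endpoint, schema, and secret below are assumptions.
FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql"  # assumed GraphQL endpoint
FAUNA_SECRET = "YOUR_FAUNADB_SECRET"  # e.g. a database secret surfaced by the Netlify add-on

# Assumes a hypothetical 'Todo' collection was defined in the database's GraphQL schema.
query = """
query {
  allTodos {
    data {
      title
      completed
    }
  }
}
"""

response = requests.post(
    FAUNA_GRAPHQL_URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {FAUNA_SECRET}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```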

Google faces multiple scrutiny from the Irish DPC, FTC, and an antitrust probe by US state attorneys over its data collection and advertising practices

Savia Lobo
09 Sep 2019
5 min read
Google has been under scrutiny for its questionable data collection and advertising practices in recent times. It has previously been hit with three antitrust fines by the EU, with a total antitrust bill of around $9.3 billion to date. Today, more than 40 state attorneys general will launch a separate antitrust investigation targeting Google and its advertising practices. Last week, evidence from an investigation into how Google uses secret web pages to collect user data and expose this information to targeted advertisers was submitted to the Irish Data Protection Commission, the main watchdog over Google in the European Union. Also, based on an investigation launched into YouTube by the Federal Trade Commission earlier this year, Google and YouTube have been fined $170M to settle allegations that they broke federal law by collecting children's personal information via YouTube Kids.

Over 40 State Attorneys General open up antitrust investigations into Google

The state watchdogs are initiating antitrust investigations against Silicon Valley's largest companies, including Google and Facebook, probing whether they undermine rivals and harm consumers, according to The Washington Post. Today, more than 40 attorneys general are expected to launch a separate antitrust investigation targeting Google and its advertising practices, with the announcement expected at the US Supreme Court. Details of this investigation are unknown; however, according to The Wall Street Journal, the attorneys will focus on Google's impact on digital advertising markets. On Friday, New York's attorney general, Letitia James, also announced that the attorneys general of eight states and the District of Columbia are launching an antitrust investigation into Facebook.

https://twitter.com/NewYorkStateAG/status/1169942938023071744

Keith Ellison, attorney general of Minnesota, who is signing on to the effort to probe Google, said, "The growth of these [tech] companies has outpaced our ability to regulate them in a way that enhances competition." We will update this space once the antitrust investigations into Google are initiated.

Irish DPC to investigate whether Google secretly feeds users' data to advertisers

An investigation by Johnny Ryan, chief policy officer of the web browser Brave, revealed that Google used hidden secret web pages to collect user data and create profiles exposing users' personal information to targeted advertisers. In May, the DPC opened an investigation into Google's Authorized Buyers real-time bidding (RTB) ad exchange. This exchange connects ad buyers with millions of websites selling their inventory. Ryan filed a GDPR complaint in Ireland over Google's RTB system in 2018, arguing that Google and ad companies expose personal data during RTB bid requests on sites that use Google's behavioral advertising. In his recent evidence, Ryan discovered the secret web pages when he monitored the trading of his personal data on Google's ad exchange, Authorized Buyers. He found that Google "had labelled him with an identifying tracker that it fed to third-party companies that logged on to a hidden web page. The page showed no content but had a unique address that linked it to Mr Ryan's browsing activity," The Financial Times reports. Google allowed the advertisers to combine information about him through hidden "push" pages, which are not visible to web users and could lead to them more easily identifying people online, the Telegraph said.

"This constant leaking of personal data, that seems to be happening constantly, needs to be urgently addressed by regulators," Ryan told the Telegraph. He said that "the data compiled by users can then be shared by companies without Google's knowledge, allowing them to more easily build and keep virtual profiles of Google's users without their consent," the Telegraph further reported. To know more about this story, read our detailed coverage of Brave's findings: "Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case".

FTC scrutiny leads to Google and YouTube paying a $170 million penalty for violating children's online privacy

In June this year, the Federal Trade Commission (FTC) launched an investigation into YouTube over mishandling children's private data. The investigation was triggered by complaints from children's health and privacy groups, which said YouTube improperly collected data from kids using the video service, thus violating the Children's Online Privacy Protection Act, a 1998 law known as COPPA that forbids the tracking and targeting of users younger than age 13.

Also Read: FTC to investigate YouTube over mishandling children's data privacy

On September 4, the FTC said that YouTube and its parent company, Google, will pay a penalty of $170 million to settle the allegations. YouTube said in a statement on Wednesday last week that in four months it would begin treating all data collected from people watching children's content as if it came from a child. "This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service," YouTube said on its blog. FTC Chairman Joe Simons said, "No other company in America is subject to these types of requirements and they will impose significant costs on YouTube." According to Reuters, "FTC's Bureau of Consumer Protection director Andrew Smith told reporters that the $170 million settlement was based on revenues from data collected, times a multiplier." New York Attorney General Letitia James said, "Google and YouTube knowingly and illegally monitored, tracked, and served targeted ads to young children just to keep advertising dollars rolling in." In a separate statement, Simons and FTC Commissioner Christine Wilson said the settlement will require Google and YouTube to create a system "through which content creators must self-designate if they are child-directed. This obligation exceeds what any third party in the marketplace currently is required to do." To know more about this news in detail, read the FTC and New York Attorney General's statement.

Other interesting news
Google open sources their differential privacy library to help protect user's private data
What can you expect at NeurIPS 2019?
Key Skills every Database Programmer should have

Google open sources their differential privacy library to help protect user’s private data

Vincy Davis
06 Sep 2019
5 min read
Yesterday, underlining the importance of strong privacy protections, Google open-sourced the differential privacy library that it uses in its own core products. Their approach is a generic and scalable end-to-end implementation of a differentially private query engine. Basically, developers can use this library to build tools that can work with aggregate data without revealing personally identifiable information. According to Miguel Guevara, the product manager of privacy and data protection at Google, "Differentially-private data analysis is used by an organization to sort through the majority of their data and safeguard them in such a way that no individual's data is distinguished or re-identified. This approach can be used for various purposes like focusing on features that can be particularly difficult to execute from scratch."

Google's differential privacy library computes differentially private aggregations over databases, even when individuals can each be associated with arbitrarily many rows. The company has been using the differential privacy algorithm to create supportive features like "how busy a business is over the course of a day or how popular a particular restaurant's dish is in Google Maps, and improve Google Fi," says Guevara in the official blog post.

Google researchers have published their findings in a research paper. The paper describes a C++ library of ε-differentially private algorithms, which can be used to produce aggregate statistics over numeric data sets containing private or sensitive information. The researchers have also provided a stochastic tester to check the correctness of the algorithms. One of the researchers explains the motive behind this library on Twitter. He says, "The main focus of the paper is to explain how to protect *users* with differential privacy, as opposed to individual records. So much of the existing literature implicitly assumes that each user is associated to only one record. It's rarely true in practice!"

Key features of the differential privacy library

- Statistical functions: The library can be used by developers to compute Count, Sum, Mean, Variance, Standard deviation, and Order statistics (including min, max, and median).
- Rigorous testing: The differential privacy library includes manual and extensible stochastic testing. The stochastic framework generates databases and verifies the differential privacy guarantee; it contains four components: database generation, search procedure, output generation, and predicate verification. The researchers have open-sourced the 'Stochastic Differential Privacy Model Checker library' for reproducibility.
- Ready to use: The differential privacy library uses the common Structured Query Language (SQL) extension, which can capture most data analysis tasks based on aggregations.
- Modular: The differential privacy library can be extended to include other functionalities such as additional mechanisms, aggregation functions, or privacy budget management. It can also be extended to handle end-to-end user-level differential privacy testing.

How does the differentially private SQL engine work with bounded user contributions?

The Google researchers have implemented the differential privacy (DP) query engine using a collection of custom SQL aggregation operators and a query rewriter. The SQL engine tracks user ID metadata to invoke the DP query rewriter, and the query rewriter is used to perform anonymization semantics validation and enforcement.
The query rewriter then processes queries in two steps. The first step validates the table subqueries, and the second step samples a fixed number of partially aggregated rows for each user. This step helps limit the user contribution across partitions. Finally, the system computes a cross-user DP aggregation, which contributes to each GROUP BY partition and limits the user contribution within partitions. The paper states, "Adjusting query semantics is necessary to ensure that, for each partition, the cross-user aggregations receive only one input row per user."

In this way, the developed differentially private SQL system captures most data analysis tasks based on aggregations. The mechanisms implemented in the system use a stochastic checker to prevent regressions and increase the quality of the privacy guarantee. Though the algorithms presented in the paper are simple, the researchers maintain that, based on the empirical evidence, the approach is useful, robust, and scalable. In the future, the researchers hope to see usability studies to test the success of the methods. In addition, they see room for significant accuracy improvements, using Gaussian noise and better composition theorems.

Many developers have appreciated that Google open-sourced its differential privacy library for others.

https://twitter.com/_rickkelly/status/1169605755898515457
https://twitter.com/mattcutts/status/1169753461468086273

In contrast, many people on Hacker News are not impressed with Google's initiative and feel that they are misleading users with this announcement. One of the comments reads, "Fundamentally, Google's initiative on differential privacy is motivated by a desire to not lose data-based ad targeting while trying to hinder the real solution: Blocking data collection entirely and letting their business fail. In a world where Google is now hurting content creators and site owners more than it is helping them, I see no reason to help Google via differential privacy when outright blocking tracking data is a viable solution."

You can check out the differential privacy GitHub page and the research paper for more information on Google's research.

Latest Google News
Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
Android 10 releases with gesture navigation, dark theme, smart reply, live captioning, privacy improvements and updates to security
Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters
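Returning to the library itself: Google's engine is far more sophisticated than this, but the basic shape of a differentially private aggregation, noise calibrated to a bounded per-user contribution, can be shown in a few lines of Python. The snippet below is a toy Laplace-mechanism count for illustration; it is not Google's C++ library.

```python
import numpy as np

def dp_count(num_rows, epsilon=0.5, max_rows_per_user=1):
    """Toy epsilon-differentially private count using the Laplace mechanism.

    Illustrative sketch, not Google's library. Bounding each user's contribution
    (here: at most `max_rows_per_user` rows) fixes the sensitivity of the count,
    mirroring the per-user contribution bounds described in the paper.
    """
    sensitivity = max_rows_per_user  # adding/removing one user changes the count by at most this
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return num_rows + noise

# Pretend each row is one user's visit record; report a noisy count of rows.
true_count = 10_000
print("true count: ", true_count)
print("noisy count:", round(dp_count(true_count, epsilon=0.5), 1))
```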

MongoDB Atlas will be available on Microsoft Azure Marketplace and will be a part of Microsoft’s Partner Reported program

Amrata Joshi
04 Sep 2019
2 min read
Yesterday, the team at MongoDB, the general-purpose data platform, announced the availability of MongoDB Atlas on the Microsoft Azure Marketplace. The team further announced that they are set to be a part of Microsoft's strategic Partner Reported ACR co-sell program.

https://twitter.com/MongoDB/status/1168946141200883713

MongoDB Atlas on Azure integrates with Azure services including Azure Databricks, PowerBI, and Sitecore on Azure. With the availability of MongoDB Atlas on the Azure Marketplace, it will now be easier for established Azure customers to purchase MongoDB Atlas. Also, the cost for Atlas will be integrated into a customer's Azure bill, resulting in a single payment. Atlas is now available across 26 Azure regions and serves thousands of customers who depend on MongoDB Atlas to drive their business.

Dev Ittycheria, President and CEO, MongoDB, said, "Microsoft has been a leader in making it easier for customers to consume and pay for cloud services, which are driving transformative innovations across many organizations." Ittycheria further added, "We are excited about the latest step in our strategic go-to-market partnership with Microsoft which will help bring MongoDB Atlas to the growing ecosystem of Azure Marketplace customers."

Scott Guthrie, Executive Vice President of Cloud and AI, Microsoft, said, "Since launching on Azure in 2017, MongoDB Atlas has been a popular service running on Azure. Today's announcement will make it even easier for customers to consume Atlas on Azure through the Azure Marketplace. We are committed to working alongside partners like MongoDB to give our joint customers best of breed choice in technology that meets their unique business demands."

What's new in data this week?
How to learn data science: from data mining to machine learning
LXD releases Dqlite 1.0, a C library to implement an embeddable, persistent SQL database engine with Raft consensus
After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

LXD releases Dqlite 1.0, a C library to implement an embeddable, persistent SQL database engine with Raft consensus

Bhagyashree R
04 Sep 2019
5 min read
Dqlite (distributed SQLite) is created by the LXD team at Canonical, the company behind Ubuntu. It is a library written in C that implements a "fast, embedded, persistent SQL database" engine, which offers high availability and automatic failover. Last week, the team released Dqlite 1.0. It is open-sourced under Apache 2.0 and runs on ARM, x86, POWER, and IBM Z architectures.

Dqlite is written in C to provide maximum cross-platform portability. Its first prototype was implemented in Go but was later rewritten in C because of performance problems caused by the way Go interoperates with C. The team explains, "Go considers a function call into C that lasts more than ~20 microseconds as a blocking system call, in that case it will put the goroutine running that C call in waiting queue and resuming it will effectively cause a context switch, degrading performance (since there were a lot of them happening). The added benefit of the rewrite in C is that it's now easy to embed dqlite into project written in effectively any language, since all major languages have provisions to create C bindings."

How Dqlite works

Dqlite extends SQLite with a network protocol that connects various instances of an application and has them act as a highly available cluster, without any dependency on external databases. To achieve this, it depends on C-Raft, an implementation of the Raft consensus algorithm in C. This not only provides high-performance transactional consensus and fault tolerance but also preserves SQLite's efficiency and tiny footprint.

To reach consensus, Raft uses the concept of an "elected leader." In a Raft cluster, a server can be either a leader or a follower. The cluster can have only one elected leader, which is fully responsible for log replication on the followers. In the case of Dqlite, this means that only the leader can write new Write-Ahead Logging (WAL) frames. So, any attempt to perform a write transaction on a follower node will fail with an ErrNotLeader error, in which case clients are required to retry against whoever is the new leader.

The team recommends Dqlite for cases where you don't want any dependency on an external database but want your application to be highly available, for instance, IoT and edge devices. Currently, it is being used by the LXD system containers manager, which uses Dqlite to implement high availability when running in cluster mode.

Read also: LXD 3.15 releases with a switch to dqlite 1.0 branch, new hardware VLAN and MAC filtering on SR-IOV and more!

What developers are saying about Dqlite

The release triggered a discussion on Hacker News. A developer recommended the usage of D or Rust for Dqlite's implementation: "They could also use D or Rust for this. If borrow-checker is too much, Rust can still do automatic, memory management with other benefits remaining. Both also support letting specific modules be unsafe where performance is critical."

Read also: "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett

Others compared it with rqlite, which is a lightweight, distributed relational database that uses SQLite as its storage engine. One main difference that many pointed out was that Dqlite is a library, whereas rqlite is a full application.
Giving a more in-depth comparison between the two, a developer commented, "rqlite's replication is command based whereas dqlite is/was WAL frame-based -- so basically one ships the command and the other ships WAL frames. This distinction means that non-deterministic commands (ex. `RANDOM()`) will work differently."

Apart from these, Dqlite's team also listed the differences between Dqlite and rqlite. Among the main differences: Dqlite is "embeddable in any language that can interoperate with C," it provides "full support for transactions," and there is "no need for statements to be deterministic."

A major point of discussion was its use cases. A user commented explaining where Dqlite can find its use: "So an easy use-case that springs to mind is any sort of distributed IoT device that needs to track state. So any industrial or consumer monitoring system with a centralized controller that would use this for data storage. Specifically, this enables the use of multiple nodes for high throughput imagine many, many, many sensors and a central controller streaming real-time data."

A developer who has used the Dqlite library shared their review: "I used Dqlite for a side project, which replicates some of the features of LXD. Was relatively easy to use, but Dqlite moves at some pace and trying to keep up is quite "interesting". Anyway once I do end up getting time, I'm sure it'll be advantageous to what I'm doing."

To read more about Dqlite, check out its official website.

Other news in database
GraphQL API is now generally available
Amazon Aurora makes PostgreSQL Serverless generally available
The road to Cassandra 4.0 – What does the future have in store?
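Going back to the leader/follower behavior described above: Dqlite itself is a C library, and none of the classes below belong to its real API. They are a hypothetical, runnable Python sketch of the pattern where a write sent to a follower is rejected and the client retries against the reported leader.

```python
# Conceptual sketch of the "retry against the leader" pattern described above.
# These classes are stand-ins, not dqlite's API; they only show how a client
# reacts to a not-leader error in a Raft-backed store.

class NotLeaderError(Exception):
    def __init__(self, leader):
        super().__init__(f"not the leader; current leader is {leader.name}")
        self.leader = leader

class ToyNode:
    def __init__(self, name, cluster):
        self.name, self.cluster = name, cluster
        self.rows = []

    def execute(self, sql, *params):
        if self.cluster.leader is not self:
            raise NotLeaderError(self.cluster.leader)  # followers reject writes
        self.rows.append((sql, params))                # leader appends to its log

class ToyCluster:
    def __init__(self, names):
        self.nodes = [ToyNode(n, self) for n in names]
        self.leader = self.nodes[0]

def write_with_retry(node, sql, *params, max_retries=3):
    """Send a write; if the node is a follower, retry against the reported leader."""
    for _ in range(max_retries):
        try:
            node.execute(sql, *params)
            return node
        except NotLeaderError as err:
            node = err.leader
    raise RuntimeError("could not find a leader")

cluster = ToyCluster(["node1", "node2", "node3"])
follower = cluster.nodes[2]
leader = write_with_retry(follower, "INSERT INTO kv VALUES (?, ?)", "k", "v")
print("write accepted by", leader.name)
```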

After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

Vincy Davis
03 Sep 2019
3 min read
In October last year, MongoDB announced that it was switching to the Server Side Public License (SSPL). Since then, Red Hat dropped support for MongoDB in January from Red Hat Enterprise Linux and Fedora. Now, Homebrew, a popular package manager for macOS, has removed MongoDB from the Homebrew core formulas since MongoDB was migrated to a non-open-source license. Yesterday, FX Coudert, a Homebrew member, announced this news on Twitter.

https://twitter.com/fxcoudert/status/1168493202762096643

In a post on GitHub, Coudert clearly mentions that MongoDB's migration to a 'non open-source license' is the reason behind this resolution. Since SSPL is not OSI-approved, it cannot be included in homebrew-core. Also, mongodb and [email protected] do not build from source on any of the 3 macOS versions, so they are also removed along with mongodb 3, 3.2, and 3.4. He adds that it would make little sense to keep older, unmaintained versions. Coudert also added that percona-server-mongodb, which also comes under the SSPL, has been removed from the Homebrew core formulas. Upstream continues to maintain the custom Homebrew "official" tap for the latest versions of MongoDB.

Earlier, Homebrew project leader Mike McQuaid had commented on GitHub that MongoDB was their 45th most popular formula and should not be removed, as it would break things for many people. Coudert countered this by replying that since MongoDB is not open source anymore, it does not belong in Homebrew core. He added that since upstream is providing a tap with their official version, users can have the latest (instead of our old unmaintained version). "We will have to remove it at some point, because it will bit rot and break. It's just a question of whether we do that now, or keep users with the old version for a bit longer," he specified.

MongoDB's past controversies due to SSPL

In January this year, MongoDB received its first major blow when Red Hat dropped MongoDB over concerns related to its SSPL. Tom Callaway, the university outreach team lead at Red Hat, had said that SSPL is "intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be "Free" or "Open Source" causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk." Subsequently, in February, Red Hat Satellite also decided to drop MongoDB and support a PostgreSQL backend only. The Red Hat development team stated that PostgreSQL is a better solution in terms of the types of data and usage that Satellite requires.

In March, following all these changes, MongoDB withdrew the SSPL from the Open Source Initiative's approval process. It was finally decided that SSPL will only require commercial users to open source their modified code, which means that any other user can still modify and use MongoDB code for free.

Check this space for new announcements and updates regarding Homebrew and MongoDB.

Other related news in Tech
How to ace a data science interview
Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
'I code in my dreams too', say developers in Jetbrains State of Developer Ecosystem 2019 Survey

Datadog releases DDSketch, a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees

Sugandha Lahoti
03 Sep 2019
4 min read
Datadog, the monitoring and analytics platform, released DDSketch (Distributed Distribution Sketch), a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees. It was presented at VLDB 2019 in August.

DDSketch is a fully-mergeable, relative-error quantile sketching algorithm

Per Wikipedia, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers. Calculating exact quantiles can be expensive in both storage and network bandwidth, so most monitoring systems compress the data into sketches and compute approximate quantiles. However, work on quantile sketches has primarily focused on bounding the rank error of the sketch while using little memory. Unfortunately, for data sets with heavy tails, rank-error guarantees can return values with large relative errors. Also, quantile sketches should be mergeable, which means that several combined sketches must be as accurate as a single sketch of the same data. These two problems are addressed in DDSketch, which comes with formal guarantees, is fully mergeable, and provides relative-error sketching. The sketch is extremely fast as well as accurate and is currently being used by Datadog.

How DDSketch works

As mentioned earlier, DDSketch has relative error guarantees. This means it computes quantiles with a controlled relative error. For example, for a DDSketch with a relative accuracy guarantee set to 1% and an expected quantile value of 100, the computed quantile value is guaranteed to be between 99 and 101. If the expected quantile value is 1000, the computed quantile value is guaranteed to be between 990 and 1010.

DDSketch works by mapping floating-point input values to bins and counting the number of values for each bin. The mapping to bins is handled by IndexMapping, while the underlying structure that keeps track of bin counts is Store. The memory size of the sketch depends on the range covered by the input values; the larger the range, the more bins are needed to keep track of the input values. As a rough estimate, when working on durations using standard parameters (mapping and store) with a relative accuracy of 2%, about 2.2 kB (297 bins) are needed to cover values between 1 millisecond and 1 minute, and about 6.8 kB (867 bins) to cover values between 1 nanosecond and 1 day.

DDSketch implementations and comparisons

Datadog has provided implementations of DDSketch in Java, Go, and Python. The Java implementation provides multiple versions of DDSketch. They have also compared DDSketch against the Java implementation of HDR Histogram, the Java implementation of the GKArray version of the GK sketch, as well as the Java implementation of the Moments sketch.

HDR Histogram

HDR Histogram is the only relative-error sketch in the literature. It has extremely fast insertion times (only requiring low-level binary operations), as the bucket sizes are optimized for insertion speed instead of size, and it is fully mergeable (though very slow). The main downside, the researchers say, is that it can only handle a bounded (though very large) range, which might not be suitable for certain data sets.
It also has no published guarantees, though the researchers agree that much of the analysis presented for DDSketch can be made to apply to a version of HDR Histogram that more closely resembles DDSketch, with a slightly worse guarantee.

Moments sketch

The Moments sketch takes an entirely different approach by estimating the moments of the underlying distribution. It has notably fast merging times and is fully mergeable. The guaranteed accuracy, however, is only for the average rank error, unlike other sketches, which have guarantees for the worst-case error (whether rank or relative).

GK sketch

Compared to GK, the relative accuracy of DDSketch is comparable for dense data sets, while for heavy-tailed data sets the improvement in accuracy can be measured in orders of magnitude. The rank error is also comparable to, if not better than, that of GK. Additionally, DDSketch is much faster in both insertion and merge.

Note: All images are taken from the research paper. For more technical coverage, please read the research paper.

In other related news, in late August Datadog announced that it has filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission relating to a proposed initial public offering of its Class A common stock. The firm listed a $100 million raise in its prospectus, a provisional number that will change when the company sets a price range for its equity.

Other news in Tech
Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Golang 1.13 module mirror, index, and Checksum database are now production-ready
Why Perl 6 is considering a name change?
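Returning to the mechanics described under "How DDSketch works": the snippet below is a minimal toy re-implementation of the core idea in Python (a logarithmic bucket index plus per-bucket counts). It is not Datadog's library: it handles only positive values and omits bucket limits, merging, and the IndexMapping/Store abstractions mentioned above.

```python
import math
from collections import Counter

class ToyDDSketch:
    """Minimal illustration of DDSketch's core idea: map each positive value to
    a logarithmically sized bucket so that any value reported back for a bucket
    is within a relative error `alpha` of the true value.

    Toy version only (positive values, no bucket limit, no merging or
    collapsing) -- not Datadog's implementation.
    """

    def __init__(self, alpha=0.01):
        self.alpha = alpha
        self.gamma = (1 + alpha) / (1 - alpha)
        self.log_gamma = math.log(self.gamma)
        self.bins = Counter()
        self.count = 0

    def add(self, value):
        index = math.ceil(math.log(value) / self.log_gamma)
        self.bins[index] += 1
        self.count += 1

    def quantile(self, q):
        rank = int(q * (self.count - 1))
        seen = 0
        for index in sorted(self.bins):
            seen += self.bins[index]
            if seen > rank:
                # representative value of bucket (gamma^(i-1), gamma^i]
                return 2 * self.gamma ** index / (self.gamma + 1)
        raise ValueError("empty sketch")

sketch = ToyDDSketch(alpha=0.01)
for v in range(1, 100_001):          # exact integers, just for the demo
    sketch.add(v)
p99 = sketch.quantile(0.99)
print(f"estimated p99 = {p99:.1f} (true p99 = 99000, relative error <= 1%)")
```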

Reddit experienced an outage due to an issue in its hosting provider, AWS

Vincy Davis
02 Sep 2019
2 min read
On Saturday at 06:09 PDT, the Reddit status account notified users on Twitter that the site was suffering an outage due to an "elevated level of errors." During this time, users across the U.S. East Coast and Northern Europe could not load the Reddit page. Bleeping Computer reported that users from Europe (Finland, Spain, Italy, Portugal), Canada, the Philippines, and Australia faced trouble while loading the website. Downdetector.com registered a peak of almost 15,000 Reddit down reports on that day.

Image Source: Bleeping Computer

Twitter was flooded with user messages questioning the status of Reddit.

https://twitter.com/Majesti20934027/status/1167788217216765952
https://twitter.com/whoisanthracite/status/1167786716121509888
https://twitter.com/Blade67470/status/1167788111998390272

Later, at 07:53 PDT, Reddit tweeted that it had identified the issue behind the outage.

https://twitter.com/redditstatus/status/1167812712262295552

Meanwhile, users trying to open any page on Reddit received messages that said, "Sorry, we couldn't load posts for this page." or "Sorry, for some reason Reddit can't be reached."

Read Also: Reddit's 2018 Transparency report includes copyright removals, restorations, and more!

Finally, the outage was resolved at 12:36 PDT on the same day. Reddit tweeted, "Resolved: This incident has been resolved." No further details have been posted by Reddit or AWS. The Reddit status page reported that the Amazon AWS issue affected seven Reddit.com components: Desktop Web, Mobile Web, Native Mobile Apps, Vote Processing, Comment Processing, Spam Processing, and Modmail.

A user commented on Bleeping Computer's post, "Looks like Elastic Block Store (EBS) and Relational Database Service (RDS) (and Workspaces, whatever that is) took a hit for US-EAST-1 at that time. From the status updates, maybe due to a big hardware failure. Perhaps Reddit has realized there is value in keeping a redundant stack running in a western region. They could have instantly mitigated the outage by flipping traffic with Route 53 to the healthy stack in this case."

Other recent outages
Google services, ProtonMail, and ProtonVPN suffered an outage yesterday
Stack Overflow suffered an outage yesterday
EU's satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime

Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters

Fatema Patrawala
30 Aug 2019
4 min read
On Tuesday, Adam Gaier, a student researcher, and David Ha, a staff research scientist on the Google research team, published a paper on Weight Agnostic Neural Networks (WANNs), which can perform tasks even without learning the weight parameters. In "Weight Agnostic Neural Networks," the researchers present a first step towards searching for neural network architectures that can already perform tasks, even when they use a random shared weight. The team writes, "Our motivation in this work is to question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. By exploring such neural network architectures, we present agents that can already perform well in their environment without the need to learn weight parameters."

The team looked at analogies of nature vs. nurture. They gave the example of certain precocial species in biology, which possess anti-predator behaviors from the moment of birth and can perform complex motor and sensory tasks without learning. Inspired by this, the researchers constructed network architectures that can perform well without training. The team has also open-sourced the code to reproduce WANN experiments for the broader research community.

Researchers explored a range of WANNs using a topology search algorithm

The team started with a population of minimal neural network architecture candidates, which have very few connections, and used a well-established topology search algorithm to evolve the architectures by adding single connections and single nodes one by one. Unlike traditional neural architecture search methods, where all of the weight parameters of new architectures need to be trained using a learning algorithm, the team took a simpler approach: at each iteration, all candidate architectures are first assigned a single shared weight value and then optimized to perform well over a wide range of shared weight values. In addition to exploring a range of weight agnostic neural networks, the researchers also looked for network architectures that were only as complex as they need to be. They accomplished this by optimizing for both the performance of the networks and their complexity simultaneously, using techniques drawn from multi-objective optimization.

Figure (source: Google AI blog): Overview of Weight Agnostic Neural Network Search and corresponding operators for searching the space of network topologies.

Training the WANN architectures

The researchers believe that, unlike traditional networks, WANNs can be easily trained by finding the best single shared weight parameter that maximizes performance. They demonstrated this with the example of a swing-up cartpole task using constant weights.

Figure (source: Google AI blog): A WANN performing a cartpole swing-up task at various weight parameters and with fine-tuned weights.

As the figure shows, WANNs can perform tasks using a range of shared weight parameters. However, the performance is not comparable to a network that learns weights for each individual connection, as is normally done in network training. To improve performance, the researchers used the WANN architecture and the best shared weight to fine-tune the weights of each individual connection using a reinforcement learning algorithm, much as a normal neural network is trained.
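To make the shared-weight evaluation loop concrete, here is a small, self-contained toy in Python. It is not the authors' code and uses none of their benchmarks: two made-up fixed topologies are scored on a trivial |x| regression task by averaging their error over a handful of shared weight values, which is the kind of selection signal the topology search relies on.

```python
import numpy as np

# Toy illustration of the WANN evaluation idea: score a *fixed* topology by the
# average performance it achieves when every connection shares one weight value.
# The topologies, task, and weight grid below are illustrative assumptions,
# not the architectures or environments from the paper.

rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=1000)
target = np.abs(x)                       # toy task: reproduce |x|

SHARED_WEIGHTS = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

def candidate_abs(x, w):
    """Topology A: input -> abs-activation node -> output (all connections weight w)."""
    return np.abs(w * x)

def candidate_tanh(x, w):
    """Topology B: input -> tanh node -> output (all connections weight w)."""
    return np.tanh(w * x)

def mean_error(topology):
    """Average mean-squared error across the shared weight values (lower is better)."""
    errors = [np.mean((topology(x, w) - target) ** 2) for w in SHARED_WEIGHTS]
    return float(np.mean(errors))

for name, topo in [("abs topology", candidate_abs), ("tanh topology", candidate_tanh)]:
    print(f"{name}: mean error over shared weights = {mean_error(topo):.3f}")
```

In the actual search, candidates that do well under this averaged-shared-weight criterion while staying simple are kept and further mutated by adding nodes and connections.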
Creating an ensemble of multiple distinct models from one WANN architecture

The researchers also note that by using copies of the same WANN architecture, where each copy is assigned a distinct weight value, they created an ensemble of multiple distinct models for the same task. According to them, this ensemble generally achieves better performance than a single model. They illustrated this with the example of an MNIST classifier.

Figure source: Google AI blog

The team notes that a conventional network with random initialization will achieve ~10% accuracy on MNIST, while this particular network architecture, applied to MNIST with random weights, achieves an accuracy of over 80%. When an ensemble of WANNs is used, the accuracy increases to over 90%.

The researchers hope that this work will serve as a stepping stone to discover novel fundamental neural network components such as convolutional networks in deep learning. To know more about this research, check out the official Google AI blog.

What's new in machine learning this week?
DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
NVIDIA's latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale
Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe

‘Hire by Google’, the next product killed by Google; services to end in 2020

Vincy Davis
29 Aug 2019
5 min read
Google has notified users in a support note that it is taking down the Hire by Google service on September 1, 2020. In the vague note, no particular reason is specified. It simply states, "While Hire has been successful, we're focusing our resources on other products in the Google Cloud portfolio."

Launched in 2017, the Hire by Google service is an applicant tracking system aimed at assisting small to medium businesses (SMBs) with candidate sourcing. Its integrated software (Google Search, Gmail, Google Calendar, Google Docs, Google Sheets and Google Hangouts) makes activities like applicant search, interview scheduling, and feedback simpler. A profile on Google Hire can make a candidate more trackable, as recruiters and hiring managers can get more information about the candidate from web sites such as LinkedIn, GitHub, and others. Even email communication with the candidate is tracked on the candidate profile available on Google Hire. Until now, Hire was only available to companies in the United States, United Kingdom, and Canada.

In the FAQs following the note, Google has said that no new functionality will be added to the Hire product. It also states that until September 1, 2020, customers under contract will be provided support in accordance with the Technical Support Services Guidelines (TSS) of Hire. "After your next bill, there will be no additional charges for your standard usage of Hire up until the end of your contract term or September 1, 2020, whichever comes first," adds the note. It also specifies that the closing down of Hire will have no impact on users' G Suite agreements.

Which other Google products have been shut down?

Google's decision to shut down its own projects is not new. Two months ago, Google announced that it was shutting down the Trips app, which was a substitute for Inbox's trip bundling functionality. This news came after the community favorite Google Inbox was discontinued in March 2019. In April this year, Google also ceased and deleted all user accounts on its Google+ social network platform. Per The Verge, the reason behind the closure of Google+ was the security liabilities the social network posed: it suffered two significant data leaks, putting millions of Google+ users' data at risk. Google, however, stated that the reason for the shutdown was that Google+ failed to meet the company's expectations for user growth and mainstream pickup.

In May, another popular Google product, Works with Nest, was given an end date of August 30, 2019. This was the result of Google's plan to bring all the Nest and Google Home products under one brand, 'Google Nest'. With an aim to make its smart home experience more secure and unified for users, all Nest account users were asked to migrate to Google Accounts, as it is the only serving front-end for using products across Nest and Google. This decision to phase out Works with Nest infuriated many Nest product users at the time.

Read Also: Turbo: Google's new color palette for data visualization addresses shortcomings of the common rainbow palette, 'Jet'

With this trend of killing its own products, Google is attracting a lot of negative press. Many people are of the opinion that Google's side projects cannot be trusted for long-term adoption. A user on Hacker News comments, "What is humorous to me is that Google is hurting users who typically have the most influence over SaaS integrations at their company (managers) by taking away a tool that helped them deal with the part of their job most of them hate the most (hiring/recruiting). If it hasn't been obvious yet to managers watching this, Google's software is not a safe investment for you to make for your company. It is only a matter of time until you will suddenly have to divert your time to figuring out how to migrate away from a Good Tool to a Less Good Tool because Google built it well then took it away. Swapping a tool like this is an abysmal resource sink for you and your company. This is not the first, second, third, fourth or even fifth time this has happened, but this one should hit close to home. Google's software is not a safe investment for you to make for your company."

Many are wondering, if Hire was really successful as stated by Google, what could be the reason behind its shutdown. Another comment on Hacker News reads, "Why do they cancel this product? Are they losing profit over this? Were they working on any new features? If no new features are required, would it be such a hassle to just keep the product working without assigning engineers to it? Only support?"

Interested users can read the FAQs on the Google support page for more information.

Google Chrome 76 now supports native lazy-loading
Google confirms and fixes 193 security vulnerabilities in Android Q
Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera

Largest ‘women in tech’ conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Sugandha Lahoti
29 Aug 2019
4 min read
The Grace Hopper Celebration, the world's largest conference for women in tech, has dropped Palantir as a sponsor due to concerns over its work with United States Immigration and Customs Enforcement (ICE). This news came after concerned civilians published a petition on Change.org demanding that AnitaB.org, the organization for women in computing that produces the Grace Hopper Celebration conference, renounce Palantir as a sponsor. At the time of writing, 326 people had signed the petition, with a goal of 500 signatures.

The petition reads, "Funding well-respected and impactful events such as GHC is one of the ways in which Palantir can try to buy positive public sentiment. By accepting Palantir's money, proudly displaying them as a sponsor, and giving them a platform to recruit, AnitaB.org is legitimizing Palantir's work with ICE to GHC's attendees, enabling ICE's mission, and helping Palantir minimize its role in human rights abuses."

The petition called on AnitaB.org to:
- Drop Palantir as a sponsor for GHC 2019 and future conferences
- Release a statement denouncing the prior sponsorship and Palantir's involvement with ICE
- Institute and publicly release an ethics vetting policy for future corporate sponsors and recruiters

https://twitter.com/techworkersco/status/1166740206461964288

Several activists and women in tech had urged the Grace Hopper Celebration to renounce Palantir as its sponsor.

https://twitter.com/jrivanob/status/1166734671624822784
https://twitter.com/sarahmaranara/status/1163231777772703744
https://twitter.com/RileyMancuso/status/1157088427977904131

Following this open opprobrium, AnitaB.org Vice President of Business Development and Partnership Success Robert Read released a statement yesterday: "At AnitaB.org we do our best to promote the basic rights and dignity of every person in all that we do, including our corporate sponsorship and events program. Palantir has been independently verified as providing direct technical assistance that enables the human rights abuses of asylum seekers and their children at US southern border detention centers. Therefore, at this time, Palantir will no longer be a sponsor of Grace Hopper Celebration 2019."

Prior to the Grace Hopper Celebration, UC Berkeley's Privacy Law Scholars Conference dropped Palantir as a sponsor because of the discomfort of many in the community with the company's practices, including among the program committee that selects papers and awards. Lesbians Who Tech, a leading LGBTQ organization, followed suit, confirming their boycott of Palantir with The Verge after members of their community asked them to drop Palantir as a sponsor in light of its recent contract work with the US government. "Members of our community (the LGBTQ community) contacted us with concern around Palantir's participation with the job fair," a representative of Lesbians Who Tech said, "because of the recent news that the company's software has been used to aid ICE in effort to gather, store, and search for data on undocumented immigrants, and reportedly playing a role in workplace raids."

Palantir is involved in conducting raids on immigrant communities as well as in enabling workplace raids: Mijente

According to reports, Palantir's mobile app FALCON is being used by ICE to carry out raids on immigrant communities as well as to enable workplace raids. In May this year, new documents released by Mijente, an advocacy organization, revealed that Palantir was responsible for the 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in stark contrast to what Palantir said its software was doing. As part of the operation, ICE arrested 443 people solely for being undocumented. Palantir's case management tool (Investigative Case Management) was shown to be used at the border to arrest undocumented people discovered in investigations of children who crossed the border alone, including the sponsors and family members of these children.

Several open source communities, activists, and developers have been strongly demonstrating against Palantir for its involvement with ICE. This includes Entropic, which is debating the idea of banning Palantir employees from participating in the project. Back in August 2018, the Lerna team had taken a strong stand against ICE by modifying its MIT license to ban companies who have collaborated with ICE from using Lerna. Last month, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir.

Fairphone 3 launches as a sustainable smartphone alternative that supports easy module repairs
#Reactgate forces React leaders to confront the community's toxic culture head on
Stack Overflow faces backlash for removing the "Hot Meta Posts" section; community feels left out of decisions

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Savia Lobo
28 Aug 2019
3 min read
A few days ago, researchers at DeepMind introduced OpenSpiel, a framework for writing games and algorithms for research in general reinforcement learning and search/planning in games. The core API and games are implemented in C++ and exposed to Python. Algorithms and tools are written in both C++ and Python. It also includes a pure Swift branch in the swift subdirectory. In their paper, the researchers write, "We hope that OpenSpiel could have a similar effect on general RL in games as the Atari Learning Environment has had on single-agent RL."

Read Also: Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

OpenSpiel allows written games and algorithms to be evaluated on a variety of benchmark games, as it includes implementations of over 20 different game types, including simultaneous-move games, perfect- and imperfect-information games, gridworld games, an auction game, and several normal-form/matrix games. It includes tools to analyze learning dynamics and other common evaluation metrics. It also supports n-player (single- and multi-agent) zero-sum, cooperative and general-sum, and one-shot and sequential games.

OpenSpiel has been tested on Linux (Debian 10 and Ubuntu 19.04). However, the researchers have not tested the framework on macOS or Windows. "Since the code uses freely available tools, we do not anticipate any (major) problems compiling and running under other major platforms," the researchers added.

The purpose of OpenSpiel is to promote "general multiagent reinforcement learning across many different game types, in a similar way as general game-playing but with a heavy emphasis on learning and not in competition form," the research paper mentions. The framework is "designed to be easy to install and use, easy to understand, easy to extend ("hackable"), and general/broad."

Read Also: DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Design constraints for OpenSpiel

The two main design criteria that OpenSpiel is based on are:

- Simplicity: OpenSpiel provides easy-to-read, easy-to-use code that can be used to learn from and to build a prototype, rather than a fully optimized code base that would require additional assumptions.
- Dependency-free: The researchers say, "dependencies can be problematic for long-term compatibility, maintenance, and ease-of-use." Hence, the OpenSpiel framework does not introduce dependencies, keeping it portable and easy to install.

Swift OpenSpiel: A port to use Swift for TensorFlow

The swift/ folder contains a port of OpenSpiel to use Swift for TensorFlow. This Swift port explores using a single programming language for the entire OpenSpiel environment, from game implementations to the algorithms and deep learning models, and is intended for serious research use. As the Swift for TensorFlow platform matures and gains additional capabilities (e.g. distributed training), the kinds of algorithms that are expressible and tractable to train are expected to grow significantly.

Among OpenSpiel's tools for visualization and evaluation is the α-Rank algorithm, which leverages evolutionary game theory to rank AI agents interacting in multiplayer games. OpenSpiel currently supports using α-Rank for both single-population (symmetric) and multi-population games.
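For a quick sense of the Python API described above, the sketch below plays one game of tic-tac-toe with random moves through the pyspiel bindings. It assumes OpenSpiel has been built and installed per the project's instructions, and method names should be checked against the current documentation.

```python
import random
import pyspiel  # OpenSpiel's Python bindings (built from the C++ core)

# Play one game of tic-tac-toe with uniformly random actions, just to exercise
# the core API: load a game, walk states, list legal actions, apply one.
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()

while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)

print(state)                           # final board
print("returns:", state.returns())     # one return per player (zero-sum here)
```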
Developers are excited about this release and want to try out the framework.

https://twitter.com/SMBrocklehurst/status/1166435811581202443
https://twitter.com/sharky6000/status/1166349178412261376

To know more about this news in detail, head over to the research paper. You can also check out the GitHub page.

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube
DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset

Amrata Joshi
27 Aug 2019
3 min read
Two years ago, the team at the Facebook AI Research (FAIR) lab open-sourced fastText, a library used for building scalable solutions for text representation and classification. To make models work efficiently on datasets with a large number of categories, finding the best hyperparameters is crucial. However, searching for the best hyperparameters manually is difficult, as the effect of each parameter varies from one dataset to another. For this, Facebook has developed an autotune feature in fastText that automatically finds the best hyperparameters for your dataset. Yesterday, they announced that they are open-sourcing the hyperparameter autotuning feature for the fastText library.

What are hyperparameters?

Hyperparameters are parameters whose values are fixed before the training process begins. They are critical components of an application, and they can be tuned to control how a machine learning algorithm behaves. Hence, it is important to search for the best hyperparameters, as the performance of an algorithm can be largely dependent on their selection.

The need for hyperparameter autotuning

It is difficult and time-consuming to search for the best hyperparameters manually, even for expert users. This new feature makes the task easier by automatically determining the best hyperparameters for building an efficient text classifier. A researcher can input the training data, a validation set, and a time constraint to use autotuning. The researcher can also constrain the size of the final model with the help of compression techniques in fastText. Building a size-constrained text classifier can be useful for deploying models on devices or in the cloud while maintaining a small memory footprint.

With hyperparameter autotuning, researchers can now easily build a memory-efficient classifier that can be used for various tasks, including language identification, sentiment analysis, tag prediction, spam detection, and topic classification. The team's strategy for exploring various hyperparameters is inspired by existing tools, such as Nevergrad, but has been tailored to fastText to exploit the specific structure of its models. The autotune feature explores hyperparameters by initially sampling in a large domain that shrinks around the best combinations over time.

It seems that this new feature could possibly be a competitor to Amazon SageMaker Automatic Model Tuning. In Amazon's model, however, the user needs to select the hyperparameters to be tuned, a range for each parameter to explore, and the total number of training jobs, whereas Facebook's hyperparameter autotuning selects the hyperparameters automatically.

To know more about this news, check out Facebook's official blog post.

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules
Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
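Returning to how the feature is invoked: the sketch below shows roughly what training an autotuned, size-constrained classifier looks like from the Python bindings. The parameter names follow fastText's documentation at the time of writing, and the file paths and budgets are placeholders; verify the exact options against the current docs.

```python
import fasttext

# Train a supervised text classifier and let fastText search hyperparameters
# automatically against a held-out validation file. The file paths and the
# time/size budgets below are illustrative placeholders.
model = fasttext.train_supervised(
    input="train.txt",                  # training data in fastText's __label__ format
    autotuneValidationFile="valid.txt", # validation set the tuner optimizes against
    autotuneDuration=600,               # time budget for the search, in seconds
    autotuneModelSize="2M",             # constrain the final (quantized) model size
)

print(model.test("valid.txt"))          # (n_examples, precision@1, recall@1)
model.save_model("autotuned.ftz")
```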
Turbo: Google’s new color palette for data visualization addresses shortcomings of the common rainbow palette, 'Jet'

Sugandha Lahoti
23 Aug 2019
4 min read
Google has released a new color palette, named Turbo, to address some of the shortcomings of the popular rainbow palette Jet, including false detail, banding, and color blindness ambiguity. According to the blog post, Turbo provides better depth perception in data visualizations. Google's aim with Turbo is to provide a color map that is uniform and color blind-accessible, but also optimal for day-to-day tasks where the requirements are not as stringent. The blog post specifies that Turbo is meant to be used in cases where perceptual uniformity is not critical, but one still wants a high-contrast, smooth visualization of the underlying data.

Google researchers created a simple interface to interactively adjust the sRGB curves using a 7-knot cubic spline while comparing the result on a selection of sample images as well as other well-known color maps. "This approach," the blog post reads, "provides control while keeping the curve C2 continuous. The resulting color map is not 'perceptually linear' in the quantitative sense, but it is more smooth than Jet, without introducing false detail."

Comparison of Turbo with other color maps

Viridis and Inferno are two linear color maps that fix most issues of Jet and are generally recommended when false color is needed. However, some feel that they can be harsh on the eyes, which hampers visibility when used for extended periods. Turbo, on the other hand, mimics the lightness profile of Jet, going from low to high and back down to low, without banding. Turbo's lightness slope is generally double that of Viridis, allowing subtle changes to be seen more easily. "This is a valuable feature," the researchers note, "since it greatly enhances detail when color can be used to disambiguate the low and high ends."

Figure: Lightness plots generated by converting the sRGB values to CIECAM02-UCS and displaying the lightness value (J) in greyscale. The black line traces the lightness value from the low end of the color map (left) to the high end (right). Source: Google blog

The lightness plots show the Viridis and Inferno curves to be linear and Jet's to be erratic and peaky, while Turbo has an asymmetric profile similar to Jet's, with the lows darker than the highs. Although the low-high-low curve increases detail, it comes at the cost of lightness ambiguity, which makes Turbo inappropriate for grayscale printing and for people with achromatopsia (total color blindness). For semantic layers, Turbo is much smoother than Jet and has no "false layers" due to banding. Because the visual attention system prioritizes hue, Google argues, differences are easier to judge in color than in lightness. Turbo can also be used as a diverging colormap. The researchers tested Turbo with a color blindness simulator and found that, for all conditions except achromatopsia, the map remains distinguishable and smooth.

NASA data viz lead argues Turbo comes with flaws

Joshua Stevens, data visualization and cartography lead at NASA, has posted a detailed Twitter thread pointing out certain flaws in Google's Turbo color map. He points out that "Color palettes should change linearly in lightness. However, Turbo admittedly does not do this. While it avoids the 'peaks' and banding of Jet, Turbo's luminance curve is still humped. Moreover, the slopes on either side are not equal, the curve is still irregular, and it starts out darker than it finishes." He also contradicts Google's statement that "our attention system prioritizes hue," noting that the paper Google links to "clearly specifies that experimental results showed that brightness and saturation levels are more important than hue component in attracting attention." He clarifies further, "This is not to say that Turbo is not an improvement over Jet. It is! But there is too much known about visual perception to reimagine another rainbow. The effort is stellar, but IMO Turbo is a crutch that further slows adoption of more sensible palettes."

Google has made the color map data and usage instructions for Python and C/C++ available, along with a polynomial approximation for cases where a look-up table may not be desirable.
DeOldify: Colorising and restoring B&W images and videos using a NoGAN approach
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
Matplotlib 3.0 is here with new cyclic colormaps, and convenience methods
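Since Google distributes Turbo as a 256-entry look-up table, applying it to scalar data is mostly a matter of normalizing the values and indexing the table. The sketch below illustrates that pattern under the assumption that the real table has been copied from Google's published Python snippet; the placeholder array here is not the actual Turbo data.

```python
# A minimal sketch of applying a Turbo-style 256-entry look-up table (LUT).
# "turbo_colormap_data" is a stand-in; substitute the 256x3 array of RGB
# values in [0, 1] published by Google for the real Turbo palette.
import numpy as np

turbo_colormap_data = np.linspace(0.0, 1.0, 256 * 3).reshape(256, 3)  # placeholder LUT

def apply_turbo(values, lut=turbo_colormap_data):
    """Map a scalar array to RGB by normalizing it to [0, 1] and indexing the LUT."""
    v = np.asarray(values, dtype=np.float64)
    span = v.max() - v.min()
    # Guard against a constant array to avoid division by zero.
    norm = (v - v.min()) / span if span > 0 else np.zeros_like(v)
    # Scale to LUT indices and look up RGB triples.
    idx = np.clip((norm * (len(lut) - 1)).round().astype(int), 0, len(lut) - 1)
    return lut[idx]

# Example: false-color a synthetic "depth map".
depth = np.random.rand(64, 64)
rgb = apply_turbo(depth)  # shape (64, 64, 3), values in [0, 1]
print(rgb.shape)
```

Newer matplotlib releases have since shipped a built-in 'turbo' colormap, so in more recent environments passing cmap='turbo' to plotting functions achieves the same effect without a hand-rolled LUT.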

FaunaDB now offers a “Managed Serverless” service combining Fauna’s serverless database with a managed solution

Vincy Davis
22 Aug 2019
2 min read
Today, Fauna announced the general availability of the FaunaDB managed serverless database service. The new service will provide Fauna's small and medium-sized enterprise (SME) customers and partners with flexibility and a customer-dedicated deployment of FaunaDB.

In a statement, Evan Weaver, CEO of Fauna, said, "We are breaking new ground in the industry by offering the first fully managed serverless service, and we now deliver the best of both worlds." He further adds, "Developers wanting a powerful data management component for cutting-edge app development can use FaunaDB, while companies wanting to avoid hands-on cloud configuration and maintenance can choose our managed serverless offering."

FaunaDB managed serverless is a mature data management solution that will include all the features of FaunaDB. It currently supports Amazon Web Services (AWS) and Google Cloud Platform (GCP), with support for Azure coming soon. Its capacity is termed and priced on a monthly or annual basis. The serverless database is backed by Fauna's customer success enterprise support, which gives users access to technical support and customer service.

Operational controls delivered by FaunaDB Managed Serverless:
  • Enterprise-grade support and SLAs
  • Change data feed or stream
  • Query log auditing
  • Operational monitoring integration
  • Customer-defined local endpoints
  • Customer-defined data locality
  • Backup and restore tailored to meet compliance needs
  • Isolated environments as needed for development, testing and staging

Nextdoor, a private social network, is already using the FaunaDB managed serverless database service. Prakash Janakiraman, co-founder and chief architect of Nextdoor, says, "We selected FaunaDB for its API flexibility and scalability, security and availability to support global use of our mobile app. We are now using the managed service for its flexible configuration options and capabilities such as multiple development environments, change data feed and query log auditing."

Fauna announces Jepsen results for FaunaDB 2.5.4 and 2.6.0
GraphQL API is now generally available
After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases' offering