Tech News - Data

1208 Articles

Allen Institute for Artificial Intelligence releases Iconary, an AI Pictionary game which allows humans and AI to play together

Sugandha Lahoti
06 Feb 2019
3 min read
Artificial Intelligence has been riding on the success of playing difficult classic board games like chess and Go, and more recently complex multiplayer online games like DOTA 2 and StarCraft. Last month, Google DeepMind's AI AlphaStar defeated StarCraft II pros, and Unity launched an 'Obstacle Tower Challenge' to test AI game players. In a similar move, yesterday the Allen Institute for Artificial Intelligence released Iconary, an AI Pictionary game that lets you collaborate with an artificial intelligence system. It is not man vs. machine but rather a man-and-machine collaborative game. Per the researchers behind the game, "Iconary is a breakthrough AI game in that it is the first Common Sense AI game involving language, vision, interpretation and reasoning."

Gameplay

Iconary offers players a limited set of icons along with a phrase describing a situation. Players use the icon set to compose a scene that represents the phrase, and AllenAI tries to guess it correctly. The AI can also update its compositions based on its human partner's guesses to guide them toward the correct phrase. The AI plays on both the drawing side and the guessing side: when it draws, it arranges icons and the human player has to guess the phrase. There are over 75,000 phrases supported in Iconary, with more being added regularly, and there are countless ways of representing each of them.

This is challenging for an AI system, according to researcher Ani Kembhavi, "because it tests a wide range of common sense skills. The algorithms must first identify the visual elements in the picture, figure out how they relate to one another, and then translate that scene into simple language that humans can understand. This is why Pictionary could teach computers information that other AI benchmarks like Go and StarCraft can't."

The main goal of Iconary is to help AI systems understand what humans are asking of them. Having humans and AI make sense of complex phrases together could help overcome multiple roadblocks in everyday tasks. The researchers write, "AllenAI has never before encountered the unique phrases in Iconary, yet our preliminary games have shown that our AI system is able to both successfully depict and understand phrases with a human partner with an often surprising deftness and nuance." You can give Iconary a try at iconary.allenai.org.

Introducing SCRIPT-8, an 8-bit JavaScript-based fantasy computer to make retro-looking games
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based AI powered game development platform

TensorFlow.js: Architecture and applications

Bhagyashree R
05 Feb 2019
4 min read
In a paper published last month, Google developers explained the design, API, and implementation of TensorFlow.js, the JavaScript implementation of TensorFlow. TensorFlow.js was first introduced at the TensorFlow Dev Summit 2018. It is essentially the successor of deeplearn.js, which was released in August 2017 and is now known as TensorFlow.js Core. Google's motivation behind creating TensorFlow.js was to put machine learning in the hands of web developers, who generally do not have much experience with it. It also aims to let experienced ML users and teaching enthusiasts easily migrate their work to JS.

The TensorFlow.js architecture

TensorFlow.js, as the name suggests, is based on TensorFlow, with a few exceptions specific to the JS environment. The library comes with two sets of APIs (a short usage sketch appears at the end of this article):

The Ops API facilitates lower-level linear algebra operations such as matrix multiplication, tensor addition, and so on.
The Layers API, similar to the Keras API, provides developers with high-level model-building blocks and best practices, with an emphasis on neural networks.

TensorFlow.js backends

To support device-specific kernel implementations, TensorFlow.js has a concept of backends. It currently supports three: WebGL in the browser, Node.js on the server, and a plain-JavaScript CPU fallback. The two rising web standards, WebAssembly and WebGPU, will also be supported as backends in the future.

To utilize the GPU for fast parallelized computations, TensorFlow.js relies on WebGL, a cross-platform web standard that provides low-level 3D graphics APIs. Among the three TensorFlow.js backends, the WebGL backend has the highest complexity.

With the introduction of Node.js and event-driven programming, the use of JS in server-side applications has grown over time. Server-side JS has full access to the filesystem, the native operating system kernel, and existing C and C++ libraries. To support server-side use cases of machine learning in JavaScript, TensorFlow.js comes with a Node.js backend that binds to the official TensorFlow C API using the N-API.

As a fallback, TensorFlow.js provides a slower CPU implementation in plain JS. This fallback can run in any execution environment and is automatically used when the environment has no access to WebGL or the TensorFlow binary.

Current applications of TensorFlow.js

Since its launch, TensorFlow.js has seen applications in various domains. Here are some of the interesting examples the paper lists:

Gestural interfaces: TensorFlow.js is being used in applications that take gestural input with the help of a webcam. Developers are using the library to build applications that translate sign language to speech, enable individuals with limited motor ability to control a web browser with their face, and perform real-time facial recognition and pose detection.

Research dissemination: The library has made it easier for ML researchers to make their algorithms accessible to others. For instance, the Magenta.js library, developed by the Magenta team, provides in-browser access to generative music models. Porting to the web with TensorFlow.js has increased the visibility of their work with their target audience, namely musicians.

Desktop and production applications: In addition to web development, JavaScript has been used to develop desktop and production applications. Node Clinic, an open source performance profiling tool, recently integrated a TensorFlow.js model to separate CPU usage spikes caused by the user from those caused by Node.js internals. Another example is Mood.gg Desktop, a desktop application powered by Electron, a popular JavaScript framework for writing cross-platform desktop apps. With the help of TensorFlow.js, Mood.gg detects which character the user is playing in the game Overwatch by looking at the user's screen, and then plays a custom soundtrack from a music streaming site that matches the playing style of that in-game character.

Read the paper, TensorFlow.js: Machine Learning for the Web and Beyond, for more details.

TensorFlow.js 0.11.1 releases!
Emoji Scavenger Hunt showcases TensorFlow.js
16 JavaScript frameworks developers should learn in 2019
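To make the two API levels concrete, here is a minimal, hedged TypeScript sketch (not taken from the paper) of the Layers API, fitting a one-variable linear model with @tensorflow/tfjs; the backend selection described above happens automatically, and the toy data is purely illustrative.

```typescript
// Minimal sketch: Keras-style Layers API on top of the lower-level Ops API.
// Assumes @tensorflow/tfjs is installed; in Node you could also use @tensorflow/tfjs-node.
import * as tf from "@tensorflow/tfjs";

async function run(): Promise<void> {
  // Layers API: compose a model from high-level building blocks.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

  // Ops API underneath: tensors created here run on the WebGL backend when
  // available and fall back to the plain-JS CPU implementation otherwise.
  const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
  const ys = tf.tensor2d([2, 4, 6, 8], [4, 1]);

  await model.fit(xs, ys, { epochs: 200 });
  (model.predict(tf.tensor2d([5], [1, 1])) as tf.Tensor).print(); // roughly 10
}

run();
```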

Transformer-XL: A Google architecture with 80% longer dependency than RNNs

Natasha Mathur
05 Feb 2019
3 min read
A group of researchers from Google AI and Carnegie Mellon University announced the details of their newly proposed architecture, called Transformer-XL (extra long), yesterday. It is aimed at improving natural language understanding beyond a fixed-length context with the help of self-attention. A fixed-length context is a long text sequence truncated into fixed-length segments of a few hundred characters. Transformer-XL relies on two key techniques, namely a segment-level recurrence mechanism and a relative positional encoding scheme. Let's have a look at these key techniques in detail.

Segment-level recurrence

The recurrence mechanism helps address the limitations of using a fixed-length context. During training, the hidden state sequences computed for the previous segment are fixed and cached. These are then reused as an extended context once the model starts processing the next segment. This connection increases the largest possible dependency length by N times (N being the depth of the network), as contextual information can now flow across segment boundaries. The recurrence mechanism also resolves the context fragmentation issue. Moreover, with the recurrence mechanism applied to every two consecutive segments of a corpus, a segment-level recurrence is created in the hidden states, which in turn lets the model utilize effective context beyond the two segments. Apart from enabling extra long context and resolving the fragmentation issue, the recurrence mechanism also allows significantly faster evaluation.

Relative positional encodings

Although the segment-level recurrence technique is effective, there is a technical challenge involved in reusing the hidden states: keeping the positional information coherent while the states are reused. Naively combining standard absolute positional encodings with segment-level recurrence does not work, because the encodings are not coherent when previous segments are reused. This is where the relative positional encoding scheme comes into the picture to make the recurrence mechanism possible. The relative positional encodings use fixed embeddings with learnable transformations instead of learnable embeddings, which makes the model more generalizable to longer sequences at test time. The core idea behind the technique is to encode only the relative positional information in the hidden states. "Our formulation uses fixed embeddings with learnable transformations instead of learnable embeddings and thus is more generalizable to longer sequences at test time", state the researchers. With both approaches combined, Transformer-XL has a much longer effective context and is able to process the elements in a new segment without any recomputation. A sketch of the attention-score decomposition appears at the end of this article.

Results

Transformer-XL obtains new state-of-the-art results on a variety of major language modeling (LM) benchmarks. It is the first self-attention model to achieve better results than RNNs on both character-level and word-level language modeling, and it models longer-term dependency than both RNNs and the vanilla Transformer. Transformer-XL has the following three benefits:

1. Transformer-XL's dependency is about 80% longer than RNNs' and 450% longer than vanilla Transformers'.
2. Transformer-XL is up to 1,800+ times faster than a vanilla Transformer during evaluation of language modeling tasks, as no re-computation is needed.
3. Transformer-XL has better performance in perplexity on long sequences due to long-term dependency modeling, and on short sequences by resolving the context fragmentation problem.

For more information, check out the official Transformer-XL research paper.

Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy
Researchers introduce a machine learning model where the learning cannot be proved
Researchers design 'AnonPrint' for safer QR-code mobile payment: ACSC 2018 Conference
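For readers who want the concrete form, here is a sketch of the relative attention score between query position i and key position j as we recall it from the Transformer-XL paper (treat the exact notation as an approximation: E are token embeddings, R the fixed sinusoidal relative encodings, the W matrices learnable projections, and u, v learnable global bias vectors):

```latex
A^{\mathrm{rel}}_{i,j}
  = \underbrace{E_{x_i}^{\top} W_q^{\top} W_{k,E}\, E_{x_j}}_{\text{content-content}}
  + \underbrace{E_{x_i}^{\top} W_q^{\top} W_{k,R}\, R_{i-j}}_{\text{content-position}}
  + \underbrace{u^{\top} W_{k,E}\, E_{x_j}}_{\text{global content bias}}
  + \underbrace{v^{\top} W_{k,R}\, R_{i-j}}_{\text{global position bias}}
```

Because positions enter only through the relative offset i-j (via the fixed R matrix and learnable transformations), the same scores remain valid when cached hidden states from a previous segment are reused, which is what makes the recurrence mechanism workable.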

Slack confidentially files to go public

Amrata Joshi
05 Feb 2019
3 min read
Yesterday, Slack Technologies confidentially filed with the U.S. Securities and Exchange Commission to list its shares publicly. The company would go with a direct listing on the stock market and might race Lyft, Uber, and Airbnb to become the next major company to use the non-traditional method for an Initial Public Offering (IPO) after Spotify. Last year, Spotify decided to sell its shares directly to the public rather than to a pre-chosen group of its bankers' friends, a move known as a direct listing. A company that opts for a direct listing doesn't create or sell any new stock, and therefore doesn't raise any new money; instead, current shareholders sell their preexisting shares.

Slack had about $900 million in cash on its balance sheet as of October 2018, according to The Information. Last year, in December, Slack hired Goldman Sachs to lead its IPO as an underwriter and was seeking a valuation of more than $10 billion, as reported by Reuters. According to a report by Crunchbase, Slack has raised about $1 billion so far.

Global growth concerns and U.S.-China trade issues have had an impact on the equity markets. Many companies have pulled IPOs from the markets, citing "unfavorable economic conditions", with the number rising since the U.S. government shutdown. It will be interesting to see what step the company takes next.

According to some users, this move will be beneficial for Slack. One of the comments on Hacker News reads, "Nothing is wrong with the market: Slack may have decided that this is the best way for them to create liquidity. There is also a cap (2000) on the number of shareholders a company can have before they have to abide by what amounts to the same reporting requirements as a publicly traded company. Slack also get the advantage of the usual market pop of acquiring companies share prices that usually amounts to a significant % of the cash value of the transaction." According to others, the company will gain leverage through stock compensation and will be able to acquire other companies thanks to its access to funding.

To know more about this news, check out the official press release.

Slack has terminated the accounts of some Iranian users, citing U.S. sanctions as the reason
Airtable, a Slack-like coding platform for non-techies, raises $100 million in funding
Atlassian sells Hipchat IP to Slack

Amazon faces increasing public pressure as HQ2 plans go under the scanner in New York

Natasha Mathur
05 Feb 2019
3 min read
Andrea Stewart-Cousins, majority leader of the New York State Senate, and the Senate Democrats nominated New York State Senator Michael Gianaris of Queens to serve on the five-member Public Authorities Control Board (PACB) yesterday. The news, first reported by the NY Times, has stirred up worry among those who support Amazon's HQ2 proposal to build a 25,000-person office in New York City (announced last November). This is because Gianaris has been a vocal opponent of Amazon HQ2 and, if confirmed, could veto state actions on the project. "My position on the Amazon deal is clear and unambiguous and is not changing. It's hard for me to say what I would do when I don't know what it is I would be asked to opine on", said Gianaris.

The Amazon HQ2 deal for Long Island City was negotiated by Gov. Andrew Cuomo back in November 2018. "With Amazon committing to expand its headquarters in Long Island City, New York can proudly say that we have attracted one of the largest, most competitive economic development investments in U.S. history," said Cuomo. He now has the final say over whether to refuse or approve the Senate's selection.

The day after Amazon announced its plans to build its 1.5 million square foot corporate headquarters in Long Island City, Queens, Gianaris started a protest against Amazon. He was joined by other New Yorkers who protested against the company's plan and asked for it to be abandoned.

https://twitter.com/SenGianaris/status/1062787029761753088
https://twitter.com/SenGianaris/status/1062693588457394176

Amazon's new campus is supposed to be located along Long Island City's waterfront, across the East River from Manhattan's Midtown East neighborhood. Amazon has promised 50,000 jobs overall and will take in 25,000 employees in New York with an average wage of $150,000 a year. In return, the company will receive at least $2.8 billion in incentives from the state and city, and if it passes the goal of 25,000 workers in Long Island City, it could also receive state tax breaks. Gianaris does not approve of this, as he believes that handing $2.8 billion in state and city incentives to Amazon is a "bad deal".

https://twitter.com/SenGianaris/status/1063066018694737920

He even went ahead and called it a '#Scamazon deal'.

https://twitter.com/SenGianaris/status/1090632342719381504

Many people are in favor of Gianaris. According to Stuart Applebaum, President of the Retail, Wholesale, and Department Store Union, Gianaris has "proven himself to be a champion of workers' rights":

https://twitter.com/RWDSU/status/1092536178073653248

Dani Lever, a spokeswoman for Cuomo, said that the recommendation of Gianaris "puts the self-interest of a flip-flopping opponent of the Amazon project above the state's economic growth. Every Democratic Senator will now be called on to defend their opposition to the greatest economic growth potential this state has seen in over 50 years".

Amazon launches TLS Termination support for Network Load Balancer
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus
Rights groups pressure Google, Amazon, and Microsoft to stop selling facial surveillance tech to government

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes

Sugandha Lahoti
05 Feb 2019
4 min read
Yesterday, a Massachusetts senator filed a consumer privacy bill that would enable consumers to sue for privacy invasions. The bill is touted to be similar to the California Consumer Privacy Act (CCPA). It allows a private right of action and statutory damages for any violation of the law (not just breaches) and does not require a demonstration of a loss of money or property. Here's what the bill proposes:

Notice: Businesses must provide consumers with a notice at or before collection.
Right to Delete: A consumer shall have the right to request that a business delete any personal information about the consumer which the business has collected from the consumer.
Right to Opt Out of Third-Party Disclosure: A consumer shall have the right, at any time, to demand that a business not disclose the consumer's personal information to third parties.
No Penalty for Exercise of Rights: A business shall not discriminate against a consumer because the consumer exercised any of the consumer's rights under the bill.
Private Right of Action: A consumer who has suffered a violation of this bill may bring a lawsuit against the business or service provider that violated this bill.

The bill says that "Consumers need not to suffer a loss of money or property as a result of the violation in order to bring an action for a violation." People on Twitter generally had positive sentiments.

https://twitter.com/natashanyt/status/1090328524865576961
https://twitter.com/ashk4n/status/1092452492175175680
https://twitter.com/gabrielazanfir/status/1092524077854670851

Last month, Sen. Ben Sasse introduced a bill, the "Malicious Deep Fake Prohibition Act", to criminalize the malicious creation and distribution of deepfakes, which are increasingly being used for harassment and illegal activities. Under the bill, it would be illegal for individuals to:

(1) create, with the intent to distribute, a deep fake with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law; or
(2) distribute an audiovisual record with (A) actual knowledge that the audiovisual record is a deep fake, and (B) the intent that the distribution of the audiovisual record would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law.

However, this bill was widely criticized for its loopholes. A statement in the bill reads, "Deep fake means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual." A Hacker News user said that this limits the scope of the act to prohibiting deep fakes that are not explicitly labeled as such. Another user said, "If that's the case, any sort of creative editing, even just quick cuts, could fall under this (see: any primetime or cable news, any TV campaign ad, the quick cuts of Obama where it looks like he's singing Never Gonna Give You Up, etc). A law like this could also be weaponized against political foes—basically, label everything you don't like as "fake news" and prosecute it under this law."

Orin Kerr comments in a blog post, "The Sasse bill also has a potential problem of not distinguishing between devices and files. Reading the bill, it prohibits the distribution of an audiovisual record with the intent that the distribution would facilitate tortious conduct."

It is promising to see lawmakers sincerely taking measures to build strict privacy standards. Only time will tell whether this new legislation ends up protecting consumer data rather than the businesses that profit from it.

Machine generated videos like Deepfakes – Trick or Treat?
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"
Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois

As anti-trust for big tech gains traction in EU and US, India tightens the noose on e-commerce rules: Amazon can either be a marketplace or a seller, not both

Melisha Dsouza
04 Feb 2019
4 min read
On the 1st of February, the Government of India modified its Foreign Direct Investment (FDI) policy. The rules mandate that foreign investors cannot use their own e-commerce platforms to control and market their own inventory; their marketplaces can only be used by others to sell goods to retail consumers. E-commerce entities will have to "maintain a level playing field" and refrain from directly or indirectly influencing the sale price of goods and services.

Also read: Last year, the EU opened an investigation against Amazon to understand how the company uses the data it gathers through transactions. The purpose of the probe was to check whether the data, collected legitimately, is also used to give Amazon a competitive advantage over smaller merchants by allowing it to understand the kinds of things people want to buy.

The new amendments to the policy set the following ground rules for online marketplace owners:

Marketplace owners are prohibited from selling products on their own marketplace through vendors in which they hold an equity interest.
Marketplace owners are prohibited from making deals to sell products exclusively on their platforms.
All vendors on the e-commerce platform must be provided services in a "fair and non-discriminatory manner", including fulfillment, logistics, warehousing, advertisement, and other services.

This has sent two of India's largest online retailers, Amazon.in and Walmart-owned Flipkart.com, into a frenzy, scrambling to adjust their e-commerce models to accommodate the Indian rules. Amazon and Flipkart both rely on foreign investment to operate in India. With the new rules in place, products have started to be pulled down from Amazon. Smartphones that were launched as "exclusive deals", Amazon's range of Echo speakers, and other Amazon-exclusive goods have disappeared from the site. According to the BBC, clothing from an Indian department store chain, Shopper's Stop, is also unavailable now, because Amazon owns 5% of the company. Users will now have to rely on resellers or offline stores for these products, which may have a far-reaching impact on India's e-commerce sector in the long run. Flipkart's chief of corporate affairs Rajneesh Kumar said in a statement: "We believe that policy should be created in a consultative, market-driven manner and we will continue to work with the government to promote fair, pro-growth policies."

What does this mean for small retailers and consumers?

Small traders have long alleged that e-commerce giants create an unfair marketplace for them to work in.

https://twitter.com/linamkhan/status/1091459166785490944

The Confederation of All India Traders has advised the government to go further by forming a new regulatory authority and a "special investigation team" to look into the business models of major e-commerce players.

https://twitter.com/praveendel/status/1090915891284533249

While traders may be completely in favor of these new rules, the public is abuzz with mixed sentiments. Some citizens have expressed their views on why the rule doesn't make sense:

https://twitter.com/ashwinmushran/status/1091943979162038272

Many citizens have accused Amazon of not being fair with its pricing structure and product listing policies. This particular user started an interesting thread, sharing his views on how Amazon could in fact help smaller retailers grow:

https://twitter.com/AKG1593/status/1092012275119022081

Cloudtail India Pvt Ltd and WS Retail will be among the many suffering a huge setback because of this law, while small retailers can breathe a sigh of relief as the big guns step out of the picture. The US-India Strategic Partnership Forum (USISPF) President and Chief Executive Officer Mukesh Aghi stated that "it is not the government's business to micromanage businesses" and called the rules "regressive", saying they would harm consumers, create unpredictability, and have a negative impact on the growth of online retail in India. It will be interesting to see how India's e-commerce marketplace and its consumers cope with the repercussions of this law. You can head over to the BBC for more insights on this news.

Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois
Amazon open-sources SageMaker Neo to help developers optimize the training and deployment of machine learning models
The future of net neutrality is being decided in court right now, as Mozilla takes on the FCC

Grafana 6.0 beta is here with new panel editor UX, Google Stackdriver datasource, and Grafana Loki among others

Natasha Mathur
04 Feb 2019
4 min read
Grafana, the data visualization and analytics platform, released the beta version of Grafana 6.0 last week. Grafana 6.0 beta introduces new features such as Explore, Grafana Loki, a Gauge panel, a new panel editor UX, and a Google Stackdriver datasource, among others. Grafana is an open source data visualization and monitoring tool that can be used on top of a variety of different data stores but is commonly used together with Graphite, InfluxDB, Elasticsearch, and Logz.io. Let's discuss the key highlights of Grafana 6.0 beta.

Explore

Explore is a new feature in Grafana 6.0 beta that provides an interactive debugging workflow and helps integrate metrics and logs. The Prometheus query editor in Explore has improved autocomplete, a metric tree selector, and integrations with the Explore table view. This allows easy label filtering and offers useful query hints that can automatically apply functions to your query. There is also no need to switch to other tools for debugging, since Explore lets you dig deeper into your metrics and logs to find the cause of a bug. Grafana's new logging datasource, called Loki, is also tightly integrated into Explore, enabling you to correlate metrics and logs by viewing them side-by-side. Explore supports splitting the view, allowing you to easily compare different queries, datasources, metrics, and logs.

Grafana Loki

The log exploration and visualization features in Explore are available for any datasource, but have currently been implemented only by Grafana Labs' new open source log aggregation system, Grafana Loki. Grafana Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is very cost effective as it does not index the contents of the logs, but only a set of labels for each log stream. Logs in Loki are queried in a similar way to querying with label selectors in Prometheus, and Loki uses labels to group log streams, which can be made to match up with your Prometheus labels.

New panel editor

Grafana 6.0 beta has a new, redesigned UX around editing panels. The new panel editor lets you resize the visualization area in case you want more space for queries and options. It also allows you to change the visualization (panel type) from within the new panel edit mode, eliminating the need to add a new panel just to try out different visualizations.

Azure Monitor datasource

The Grafana team worked on an external plugin for Azure Monitor last year, and it is now being moved into Grafana to become one of the built-in datasources. As a core datasource, the Azure Monitor datasource will get alerting support for the official Grafana 6.0 release. It integrates four different Azure services with Grafana, namely Azure Monitor, Azure Log Analytics, Azure Application Insights, and Azure Application Insights Analytics.

Other changes

Grafana 6.0 beta comes with a new, separate Gauge panel, which contains a new threshold editor that the team plans to refine and use in other panels.
Built-in support for Google Stackdriver has been officially released in Grafana 6.0 beta.
Grafana 6.0 beta adds support for provisioning alert notifiers from configuration files. This feature allows operators to provision notifiers without using the UI or the API. A new field called uid (a string identifier) has been added that administrators can set themselves.
The Elasticsearch datasource in Grafana 6.0 beta now supports bucket script pipeline aggregations, which allow per-bucket computations such as the difference or ratio between two metrics.
The color picker has been updated to show named colors and primary colors, which improves accessibility and makes colors more consistent across dashboards.

For more information, check out the official Grafana 6.0 beta release notes.

Grafana 5.3 is now stable, comes with Google Stackdriver built-in support, a new Postgres query builder
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Tumblr open sources its Kubernetes tools for better workflow integration

Facebook faces multiple data-protection investigations in Ireland

Sugandha Lahoti
04 Feb 2019
3 min read
Facebook is facing seven separate data protection investigations in Ireland, as reported by Bloomberg. The Facebook investigations are part of 16 cases involving major tech companies such as Twitter, Apple, and LinkedIn, as well as Facebook's WhatsApp and Instagram. These probes could substantially scale up the level of fines that regulators issue under GDPR; currently, GDPR allows penalties as large as 4 percent of a company's annual revenue. According to Ireland's data protection commissioner, Helen Dixon, "These data protection probes are centered on the activities of very big internet companies with tens and hundreds of millions of users."

The first EU probe against Facebook was opened by Ireland following a security breach that compromised 50M accounts in October last year. This security breach not only affected users' Facebook accounts but also impacted other accounts linked to Facebook, meaning a hacker could have accessed any account you log into using Facebook. A second probe was initiated by Dixon's office in December, when a photo API bug affected people who used Facebook Login and granted permission to third-party apps to access their photos. This bug gave outside developers broader access to users' photos, affecting up to 6.8 million users and up to 1,500 apps built by 876 developers.

Per Dixon, "Other breach notifications received in my office since May 25 are related to coding errors, which leads to posts being made public that should have been private, or in a major breach. No company seems to be immune from this." Dixon notes that deciding these cases is not trivial: "We're at various concrete stages in all of them, but they're all substantially advanced," she said. "The soonest I am going to see an investigation report on my desk, which is when my role kicks in." The final decisions on these sanctions are likely to be made in June or July.

Last week, U.S. District Judge Vince Chhabria overruled Facebook's argument that it cannot be sued for letting third parties access users' private data because no "real world" harm has resulted from the conduct. Last month, Russia's communications watchdog, Roskomnadzor, said that it opened a civil case against Twitter and Facebook for failing to explain how they plan to comply with local data laws. At the same time, Federal Trade Commission (FTC) officials reportedly planned to impose a fine of over $22.5 billion on Facebook after a year of data breaches and revelations of illegal data sharing. A U.S. Senator also introduced a bill titled the 'American Data Dissemination (ADD) Act' to create federal standards of privacy protection for big companies like Facebook.

"Companies are lawyering up and we're typically dealing with more litigators and lawyers on the side of any inquiry that we conduct," Dixon said. "It does show the power that they have in terms of the size. But we have all the cards in terms of the powers to investigate, to compel and ultimately to conclude and make findings."

Apple revoked Facebook developer certificates due to misuse of Apple's Enterprise Developer Program
Stanford experiment results on how deactivating Facebook affects social welfare measures
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they're being targeted by advertisers

Researchers at Columbia University use deep learning to translate brain activity into words for epileptic cases

Savia Lobo
01 Feb 2019
2 min read
Researchers at Columbia University have carried out a successful experiment in which they translated brain activity into words using deep learning and a speech synthesizer. They made use of auditory stimulus reconstruction, a technique that combines recent advances in deep learning with the latest innovations in speech synthesis technology to reconstruct closed-set intelligible speech from the human auditory cortex.

They temporarily placed electrodes in the brains of five people who were about to undergo brain surgery for epilepsy. The five were asked to listen to recordings of sentences, and their brain activity was used to train deep-learning-based speech recognition software. After this, they were made to listen to 40 spoken numbers. The AI then tried to decode what they heard based on the brain activity and spoke out the results in a robotic voice. According to those who heard the robotic voice, the synthesized output was understandable as the right word 75% of the time.

According to the Technology Review, "At the moment the technology can only reproduce words that these five patients have heard—and it wouldn't work on anyone else." However, the researchers believe that such a technology could help people who have been paralyzed communicate with their family and friends despite losing the ability to speak. Dr. Nima Mesgarani, an associate professor at Columbia University, said, "One of the motivations of this work…is for alternative human-computer interaction methods, such as a possible interface between a user and a smartphone." According to the report, "Our approach takes a step toward the next generation of human-computer interaction systems and more natural communication channels for patients suffering from paralysis and locked-in syndromes." To know more about this experiment, head over to the complete report.

Using deep learning methods to detect malware in Android Applications
Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research

Amrata Joshi
01 Feb 2019
2 min read
Speech synthesis technology has advanced greatly in recent years, with neural networks from DeepMind creating realistic, human-like voices, and Google is now working to advance state-of-the-art research on fake audio detection. Google Maps and Google Home both use Google's speech synthesis, or text-to-speech (TTS), technology. The Google News Initiative (GNI) announced last year that it wanted to tackle "deep fakes" and other attempts to bypass voice authentication systems.

Yesterday, Google AI and the Google News Initiative (GNI) partnered to create a body of synthetic speech containing thousands of phrases spoken by Google's deep learning text-to-speech (TTS) models. It contains 68 synthetic voices covering a large variety of regional accents, with phrases drawn from English newspaper articles.

Malicious actors can synthesize speech in order to fool voice authentication systems, or even create forged audio recordings to defame public figures. Deep fakes, audio or video clips generated by deep learning models, can be exploited to manipulate trust in media; it becomes difficult to distinguish real content from tampered content, and bad actors can also claim that authentic data is fake. This is why a synthetic speech database is needed. The effort also aligns with Google's AI Principles, which call for "strong safety practices to avoid unintended results that create risks of harm."

Currently, the dataset is available to participants of the 2019 ASVspoof challenge for creating countermeasures against fake speech, with the aim of making automatic speaker verification (ASV) systems more secure. ASVspoof participants can develop systems that learn to distinguish between real and computer-generated speech by training models on both. The results of the challenge will be announced in September at the 2019 Interspeech conference in Graz, Austria.

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
You can now publish PWAs in the Google Play Store as Chrome 72 for Android ships with Trusted Web Activity feature
Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club

The Collections #2-5 leak of 2.2 billion email addresses might have your information, German news site Heise reports

Amrata Joshi
01 Feb 2019
3 min read
In recent years, hackers have breached companies like Dropbox and LinkedIn, stealing 71 million and 117 million passwords, respectively. This month, security researcher Troy Hunt identified the first portion of a massive data dump, named Collection #1, a set of breached databases representing 773 million unique usernames and passwords. Other researchers have now obtained and analyzed an additional vast database called Collections #2-5, which holds 845 gigabytes of stolen data and 25 billion records in all.

https://twitter.com/SeanWrightSec/status/1091262248914505730

German news site Heise reported that a collection of 2.2 billion unique usernames and associated passwords is being distributed on hacker forums and torrents. According to researchers at the Hasso Plattner Institute, 611 million credentials in Collections #2-5 weren't included in the Collection #1 database. Chris Rouland, a cybersecurity researcher and founder of the IoT security firm Phosphorus.io, who also pulled Collections #1-5 from torrented files, said, "This is the biggest collection of breaches we've ever seen." According to Rouland, the collection has already circulated widely among hackers: the tracker file he downloaded was being seeded by more than 130 people who possessed the data dump, and it has been downloaded more than 1,000 times. In a statement to WIRED, Rouland said, "It's an unprecedented amount of information and credentials that will eventually get out into the public domain."

According to WIRED, most of the stolen data appears to come from previous thefts, like the breaches of LinkedIn, Yahoo, and Dropbox. WIRED examined a sample of the data and confirmed that the credentials are valid, but mostly represent passwords from previous years' data leaks. The collection could be a powerful tool for unskilled hackers using a technique called credential stuffing: trying previously leaked usernames and passwords on other websites in the hope that people have reused them. Rouland said, "For the internet as a whole, this is still very impactful."

What should one do?

Users can check whether their usernames appear in the breach using the Hasso Plattner Institute's tool. This identity leak checker asks for a user's email address and then uses it to generate a list of exposed information, including the user's name, IP address, and password, if applicable. It tells users whether a password has been matched to their email address, and how recent that password is. You should change passwords for any breached sites it flags. It is advisable not to reuse passwords and to use a password manager, which can automatically generate unique, secure passwords for the services you use. Turn on two-factor authentication wherever possible; though it isn't foolproof, it provides an extra layer of security. Troy Hunt's service HaveIBeenPwned also helps in checking whether your passwords have been compromised, though it doesn't yet include Collections #2-5 (a short sketch of its password-checking API appears at the end of this article).

Internal memo reveals NASA suffered a data breach compromising employees social security numbers
Former Senior VP's take on the Mariott data breach; NYT reports suspects Chinese hacking ties
Equifax data breach could have been "entirely preventable", says House oversight and government reform committee staff report
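As a practical illustration, here is a minimal, hedged TypeScript sketch (not part of the original article) of checking a password against HaveIBeenPwned's Pwned Passwords range API, which uses a k-anonymity model so the full password or its hash never leaves your machine; it assumes a Node.js runtime with a global fetch (Node 18+), so adapt the HTTP call for older runtimes.

```typescript
// Sketch: query the Pwned Passwords range API with only the first 5 chars of the SHA-1 hash.
import { createHash } from "crypto";

async function pwnedCount(password: string): Promise<number> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);

  // The API returns every known hash suffix for this prefix, one "SUFFIX:COUNT" per line.
  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await res.text();

  for (const line of body.split("\n")) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return parseInt(count, 10); // times seen in known breaches
  }
  return 0; // not found
}

pwnedCount("password123").then((n) =>
  console.log(n > 0 ? `Found in breaches ${n} times, change it` : "Not found in known breaches")
);
```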

Rigetti Computing launches public beta of its first Quantum Cloud Services platform

Natasha Mathur
01 Feb 2019
3 min read
Rigetti Computing, a popular startup in the quantum computing space, has launched the public beta of its Quantum Cloud Services (QCS) platform. Rigetti first announced the details of the platform in September last year. "Quantum Cloud Services platform is the fastest quantum computing platform available today. We've eliminated much of the overhead associated with the exchange between quantum and classical compute, resulting in up to a 30x improvement in program runtime over web API models", says Peter Karalekas, Quantum Software Engineer at Rigetti Computing.

The new QCS platform comes with an all-new access model for quantum programming centered around an integrated cloud architecture. QCS offers developers access to Rigetti's quantum processors and the classical computing resources necessary for building and testing quantum algorithms on the platform. Once users have registered, they can access their own dedicated Quantum Machine Image, which comes preloaded with the tools necessary to build quantum programs (such as pyQuil and a quantum simulator). The Rigetti team has also deployed two Aspen QPUs to the QCS platform, which can be booked via an online reservation system available in the new QCS web dashboard. Moreover, all beta users will receive $5,000 in credits for running programs on the QPU during their first month.

According to Betsy Masiello, VP Product at Rigetti, the company is not only making QCS available but is also opening up access to QCS Developer Partner applications, the first set of applications built by Rigetti's Developer Partners. These applications include QCompress, QClassify, QuantumFreeze, and Quantum Feature Detector. Apart from the QCS developer partners, more than 30 leading scientists from around the world have signed on as QCS Research Partners. These scientists work across domains such as characterizing and benchmarking quantum hardware, along with computational research in biology, chemistry, and machine learning. These research partners get to publish their results, share their data and code, and open-source the tools and libraries they create on the QCS platform. For more details, check out the Rigetti Computing official website.

Quantum computing is poised to take a quantum leap with industries and governments on its side
Did quantum computing just take a quantum leap? A two-qubit chip by UK researchers makes controlled quantum entanglements possible
The US to invest over $1B in quantum computing, President Trump signs a law

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available

Sugandha Lahoti
01 Feb 2019
2 min read
Google’s Cloud Firestore is Google’s serverless NoSQL document database used for storing, syncing, and querying data for web, mobile, and IoT applications. It is integrated with both Google Cloud Platform (GCP) and Firebase, Google’s mobile development platform. It is now generally available. Apart from this, Cloud Firestore is now available in 10 new locations, making the total region count as 13 with a significant price reduction for regional instances. Firestore had a single location when it was launched and added two more during the beta. Cloud Firestore is now available in 13 regions When in beta, Cloud Firestore allowed developers to only use multi-region instances, which were sometimes more expensive and not required by every app. With this launch, Google is giving developers the option to run their databases in a single region. There is a significant price reduction with as low as 50% of multi-region instance prices. New Cloud Firestore pricing takes effect from March 3, 2019, for most regional instances. Cloud Firestore’s SLA (Service Level Agreement) is also available. 99.999% is available for multi-region instances and 99.99% is available for regional instances. With Stackdriver integration (in beta), Cloud Firestore users can monitor read, write and delete operations in near-real time with a new "Usage" tab in the Firebase console. For the next release, Google is working on adding new features including querying for documents across collections and incrementing database values without needing a transaction. Existing Cloud Datastore users will be live-upgraded to Cloud Firestore automatically later in 2019. Netizens are generally happy about this release. https://twitter.com/puf/status/1091030237117206529 A comment on hacker news reads, “Been loving Firestore! It has been my first real experience w/ NoSQL in an MVP to production-ready quickly. It's been SO easy to experiment with and learn. Community has been great.” Google Cloud releases a beta version of SparkR job types in Cloud Dataproc 4 key benefits of using Firebase for mobile app development Build powerful progressive web apps with Firebase

Apple revoked Facebook developer certificates due to misuse of Apple’s Enterprise Developer Program; Google also disabled its iOS research app

Savia Lobo
31 Jan 2019
3 min read
Facebook employees are experiencing turbulent times as Apple has decided to revoke the social media giant's developer certificates. This follows a TechCrunch report that said Facebook paid users, including teens, $20/month to install a "Facebook Research" app on their devices, which allowed the company to track their mobile and web browsing activity. Following the revocation, Facebook employees cannot access early versions of Facebook apps such as Instagram and Messenger, or internal apps used for activities such as ordering food and locating places on a map.

Yesterday, Apple announced that it has shut down the Facebook Research app for iOS. According to Apple, "We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple". The company further said, "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data."

Per a Mashable report, "Facebook employees argued that Apple's move was merely an attempt to distract from an embarrassing FaceTime bug that went public earlier in the week." An employee commented, "Anything to take the heat off the FaceTime security breach." Facebook also said that it is "working closely with Apple to reinstate our most critical internal apps immediately." Mark Zuckerberg has also received a stern letter from Senator Mark Warner, including a list of questions about the company's data gathering practices, after the TechCrunch report went viral. In a statement, Warner said, "It is inherently manipulative to offer teens money in exchange for their personal information when younger users don't have a clear understanding of how much data they're handing over and how sensitive it is."

Google disabled its iOS app too

Similar to Facebook, Google distributed a private app, Screenwise Meter, to monitor how people use their iPhones, rewarding users with Google's Opinion Rewards program gift cards in exchange for collecting information on their internet usage. Yesterday, Google announced that it has disabled the iOS app. Google's Screenwise Meter app has been part of a program that has been around since 2012; it first started tracking household web access through a Chrome extension and a special Google-provided tracking router. The app is open to anyone above 18, but allows users aged 13 and above to join the program if they're in the same household. Facebook's tracking app, on the other hand, targeted people between the ages of 13 and 25.

A Google spokesperson told The Verge, "The Screenwise Meter iOS app should not have operated under Apple's developer enterprise program — this was a mistake, and we apologize. We have disabled this app on iOS devices. This app is completely voluntary and always has been. We've been upfront with users about the way we use their data in this app, we have no access to encrypted data in apps and on devices, and users can opt out of the program at any time." To know more about this news, head over to The Verge.

Facebook researchers show random methods without any training can outperform modern sentence embeddings models for sentence classification
Stanford experiment results on how deactivating Facebook affects social welfare measures
Facebook pays users $20/month to install a 'Facebook Research' VPN that spies on their phone and web activities, TechCrunch reports