
Tech News - Data

Facebook under criminal investigations for data sharing deals: NYT report

Fatema Patrawala | 14 Mar 2019 | 3 min read
Investigations into Facebook's data handling keep piling up. Yesterday, The New York Times reported that federal prosecutors are in the midst of a criminal investigation into the data deals Facebook arranged with tech companies. It is not known when the investigation began or what its focus is, but a New York grand jury reportedly used subpoenas to obtain records from two or more "prominent makers of smartphones." The deals included heavyweights like Apple, Microsoft, and Sony. The grand jury inquiry was conducted on behalf of the US attorney's office for the Eastern District of New York.

Facebook had data-sharing arrangements with more than 150 companies, according to a December report in The New York Times. The deals helped Facebook gain more users, and its partners were able to access user data without obtaining consent. Many of the partnerships ended years ago, but the deals with Amazon and Apple were ongoing at the time of the story. These deals typically revolved around making it easier to fill out contacts, share content, and otherwise integrate Facebook with devices and websites. There is a concern that these deals weren't always transparent to everyday users: Microsoft's Bing deal, for example, mapped the friends of Facebook users without explicit permission.

Privacy advocates said the partnerships seemed to violate a 2011 consent agreement between Facebook and the FTC, stemming from allegations that the company had shared data in ways that deceived consumers. FTC officials, who spent the past year investigating whether Facebook violated the 2011 agreement, are now weighing the sharing deals as they negotiate a possible multibillion-dollar fine, which would be the largest penalty the regulator has ever imposed.

To add to the worry, 2020 presidential candidate Elizabeth Warren proposed last week that Facebook and other big tech giants should be broken up, the latest in a rising tide of calls for regulation and antitrust action against Silicon Valley. Over the past couple of years, Facebook has repeatedly faced widening inquiries from the US federal government as well as from the governments of other nations. Federal agencies and the Department of Justice have been looking into how the political-consulting firm Cambridge Analytica obtained the personal data of up to 87 million Americans. The company has been juggling a number of scandals and investigations since then. It is currently also facing the longest outage of its services and applications in its history, caused by an unknown technical issue, since yesterday.

Facebook confirmed the investigations in a statement. "We are cooperating with investigators and take those probes seriously," a Facebook spokesman told the Times on Wednesday. "We've provided public testimony, answered questions and pledged that we will continue to do so." Read the detailed coverage on the NY Times blog.

Also read:
Facebook family of apps hits 14 hours outage, longest in its history
Facebook deletes and then restores Warren's campaign ads after she announced plans to break up Facebook
Facebook open-sources homomorphic hashing for secure update propagation


Perfecto introduces Perfecto Codeless, a codeless testing solution based on AI

Amrata Joshi | 14 Mar 2019 | 3 min read
Just two days ago, Perfecto, a Perforce Software company, introduced Perfecto Codeless, an AI-driven codeless testing solution that eliminates the need for coding skills in the dev testing process. Perfecto Codeless will help development teams automate the process of writing test scripts, as it comes with machine learning (ML) capabilities: scripts can run continuously and fix themselves without disrupting operations.

Eran Yaniv, Founder and CEO at Perfecto, wrote to us in an email, "Across our customer base, the number one cause of automation failure is scripting issues. This [is] a huge barrier for achieving good test automation and teams making their way towards continuous testing. With the introduction of Perfecto Codeless, we are harnessing the power of machine learning to offer the next generation of codeless automation testing capabilities. By eliminating the need to write and maintain test scripts, teams save time and can focus on more complex tasks."

Development teams may have the coding skills to write Selenium or Appium scripts, but their time is better spent on product development and innovation. Perfecto Codeless comes with tools that help teams quickly generate quality test scripts and maintain them.

Features of Perfecto Codeless

Smart capabilities to maintain scripts: Perfecto's ML capabilities address object maintenance issues within the code. If code needs to be deleted, moved, or changed, Perfecto Codeless makes it happen agnostically without delaying the process.

Interconnected components: All the components of the testing process are connected, from creation to execution and analysis.

Codeless automation in the cloud: Perfecto Codeless provides codeless test automation in the cloud, allowing teams to manage the pace and demands that come with test automation. It also provides the flexibility, performance, and scalability needed to ensure quality throughout the SDLC.

Eran Kinsbruner, Chief Evangelist at Perfecto, wrote to us in an email, "In recent years, codeless test automation has become a top priority for testers, as well as the developers that aim to expedite their test creation and maximize testing reliability. These professionals are looking at codeless as a preferred solution to embed into their testing responsibilities. Perfecto Codeless will take DevOps to the next level, relieving testers and developers of the time-intensive responsibility of coding and giving them time back to focus on product development and innovation to help accelerate the software delivery lifecycle (SDLC) for their business."

Also read:
Rachel Batish's 3 tips to build your own interactive conversational app
DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'
Waymo to sell its 3D perimeter LIDAR sensors to companies outside of self-driving
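For a sense of the hand-written scripting burden that codeless tools aim to remove, here is a minimal Selenium sketch in Python. This is our illustration, not anything Perfecto ships; the URL and element IDs are hypothetical.

```python
# Illustrative only: the URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    # Every hard-coded locator is a maintenance liability: if the UI
    # changes, the script breaks and must be fixed by hand -- exactly
    # the upkeep that ML-driven codeless tools claim to automate.
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```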


Facebook family of apps hits 14 hours outage, longest in its history

Fatema Patrawala | 14 Mar 2019 | 3 min read
The biggest interruption ever suffered by Facebook has now stretched beyond 14 hours. Twitter was flooded with tweets about Facebook, Instagram, and WhatsApp being down intermittently in some parts of the world throughout Wednesday. Facebook itself had to turn to its rival Twitter to explain that its group of hugely popular apps was having difficulties.

Some users of Facebook and other platforms owned by the tech giant, including Instagram, Messenger, and WhatsApp, reported problems accessing the services and posting content. According to DownDetector, the outages are mainly in New England; Texas; Seattle, Washington; parts of Latin America, including Peru; the UK; India; Japan; Malaysia; and the Philippines. Users have written in from Canada, Las Vegas, and Turkey to confirm outages there as well.

The outage also hit the revenue of advertisers that spend large amounts of money to reach potential customers on Facebook platforms. A Facebook spokesperson said the company is investigating the possibility of refunds to advertisers.

The cause of the interruption has not yet been made public. "We're aware that some people are currently having trouble accessing the Facebook family of apps," the company tweeted from its official account. "We're working to resolve the issue as soon as possible." In response to rumours posted on other social networks, the company said the outages were not the result of a Distributed Denial of Service attack, known as DDoS, a type of cyber-attack that involves flooding a target service with extremely high volumes of traffic. The last time Facebook had a disruption of this magnitude was in 2008, when the site had 150m users, compared to around 2.3bn monthly users today.

Users turn to Twitter in the absence of Facebook and Instagram

While Facebook and Instagram have been down, many have turned to Twitter to make jokes about the outage. The hashtags #FacebookDown and #InstagramDown have been used more than 150,000 times so far. Some Twitter users who work in "Facebook-centric" jobs expressed their panic and distress at being unable to use the platform. Many shared jokes about the social media outage leading to the collapse of society, as "nobody remembers how to reach loved ones or eat food without posting updates". Many tweeted about Facebook users tweeting for the first time.
https://twitter.com/slaylegend_13/status/1106049260288499712
Others have shared a version of the "distracted boyfriend" meme, referencing people turning to Twitter in their hour of need.
https://twitter.com/Ahmadridhopp_/status/1106018690217107456
Some joked that the lack of access to Facebook would deprive them of validation.
https://twitter.com/Jayanliyanage2/status/1106040536866148353
Beyond Twitter, there were interesting reactions from users on Hacker News: a sudden increase in worker productivity, a brief reprieve from your neighbor's vacation story, or suggestions that Facebook launch tools to manage time spent on social media.

Facebook tweeted about resolving the issue a good fourteen hours ago and has provided no further updates since.

Also read:
Facebook open-sources homomorphic hashing for secure update propagation
Facebook announces 'Habitat', a platform for embodied Artificial Intelligence research
UK lawmakers publish a report after 18 month long investigation condemning Facebook's disinformation and fake news practices


Cloudflare raises $150M with Franklin Templeton leading the latest round of funding

Amrata Joshi | 13 Mar 2019 | 4 min read
After a long break from fundraising, Cloudflare, a U.S.-based company that provides content delivery network and Internet security services, announced yesterday that it has raised $150 million in funding. The company also announced that Stan Meresman, board member and chair of the Audit Committee of Guardant Health (GH), and Maria Eitel, founder and co-chair of the Nike Foundation, are joining its board of directors.

In 2014, Cloudflare raised around $110 million, and the company has raised more than $330 million to date from investors including New Enterprise Associates, Union Square Ventures, Microsoft, Baidu, and many more. In the latest round, Franklin Templeton, an investment management company, joined these investors, further extending its support for Cloudflare's growth.

Matthew Prince, co-founder and CEO of Cloudflare, said, "I'm honored to welcome Maria and Stan to our board of directors. Both of them bring a wealth of knowledge and experience to our board and know what it takes to propel companies forward. Our entire board looks forward to working with them as we continue to help build a better Internet."

Eitel previously ran European corporate affairs for Microsoft, worked in media affairs at the White House, and was an assistant to President George H.W. Bush. She said, "My career has been focused on creating global change, and the Internet is a huge part of that. The Internet has the ability to unleash human potential, and I believe that Cloudflare is one of the major players able to drive the change that's necessary for the world and Internet community."

Stan Meresman was previously CFO of Silicon Graphics (SGI) and Cypress Semiconductor (CY). He said, "Cloudflare's technologies, customer base, and global network have helped propel the company to a position of leadership in the Internet ecosystem. I look forward to lending my skills and expertise to Cloudflare's board in order to continue this growth and make even more of an impact."

According to a report by Reuters, Cloudflare was last year considering an IPO in the first half of 2019 that could have valued the company at more than $3.5 billion. Judging by this latest funding round, the company doesn't appear to be heading for the public markets just yet, but Cloudflare is growing, and a public offering could well be the next big step.

Some users expect the company to go public this year and are happy that it is moving in a good direction. One user commented on Hacker News, "I do wonder how people feel about this internally though. There's a lot of expectation that the company would go public this year (and some even expected it would go public last year). Hopefully, no one needs the money they put in to early exercise any time soon!" Another comment reads, "Cloudflare is undergoing a lot of big projects to break away from the image that they are 'just a CDN'. Raising a round now instead of going public allows them to invest more on those projects instead of focusing on quarter to quarter results. Also, avoiding brain-drains post-IPO while they need those talents the most." Others think the company might eventually start monetizing the data flowing through its network. One user commented, "Doesn't raising this kind of money scream that you're eventually going to start to monetize the data flowing through your network (e.g. telecoms selling location data to bounty hunters)?"

To know more about this news, check out the official announcement.

Also read:
Cloudflare takes a step towards transparency by expanding its government warrant canaries
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice
Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android


Russian government blocks ProtonMail services for its citizens

Savia Lobo | 13 Mar 2019 | 4 min read
Yesterday, ProtonMail reported that the Russian government has blocked Russian citizens from sending any messages to the encrypted email provider. The block was issued by the Federal Security Service via a secret letter dated 25 February. According to the letter, the Russian intelligence agency ordered two of the largest Internet service providers in Russia, MTS and Rostelecom, to block traffic from Russia going to ProtonMail's mail servers, thus preventing Russian mail servers from communicating with ProtonMail.

On Monday, March 11, a firm called TechMedia obtained a copy of the letter and published it on the Russian tech blogging platform Habr. The blog also accused ProtonMail and several other email companies of facilitating fake bomb threats. In late January, the Russian police received several anonymous bomb threats via email, which led to forced evacuations of government buildings, schools, rail stations, shopping centers, and offices. A total of 26 IP addresses were blocked after the order was issued, including servers used to scramble the final connection for Tor users. "Internet providers were told to implement the block 'immediately,' using a technique known as BGP blackholing, a way that tells internet routers to simply throw away internet traffic rather than routing it to its destination," TechCrunch reports.

ProtonMail chief executive Andy Yen said, "ProtonMail is not blocked in the normal way, it's actually a bit more subtle. They are blocking access to ProtonMail mail servers. So Mail.ru — and most other Russian mail servers — for example, is no longer able to deliver email to ProtonMail, but a Russian user has no problem getting to their inbox." The two ProtonMail servers listed by the order are its back-end mail delivery servers, rather than the front-end website, which runs on a different system.

Yen said, "The wholesale blocking of ProtonMail in a way that hurts all Russian citizens who want greater online security seems like a poor approach." He further added, "We have also implemented technical measures to ensure continued service for our users in Russia and we have been making good progress in this regard. If there is indeed a legitimate legal complaint, we encourage the Russian government to reconsider their position and solve problems by following established international law and legal procedures."

According to the ProtonMail blog, "Due to the timing of the block, some ProtonMail users in Russia suspect that the block may be related to the mass protests this past weekend in Russia where 15,000 people took to the streets to protest for more online freedom."

Meanwhile, ProtonMail has listed a few recommendations for its Russian users:
1) Use a VPN service, as this allows most blocks to be circumvented. All ProtonMail users also have access to ProtonVPN, a free VPN service that the email provider operates.
2) Encourage other contacts to use ProtonMail. The blocks attempted by the Russian government do not and cannot impact communications between ProtonMail accounts in Russia.
3) Complain to MTS and Rostelecom. According to ProtonMail, if enough people complain, these ISPs and the Russian government may reconsider their approach.

One user wrote on Hacker News, "This situation really pissed me off. FSB (Russian FBI) had problems with receiving bomb threats coming from Protonmail addresses. So, they secretly ordered (with an almost classified order) major ISPs to block Protonmail bypassing Russian's existing website/IP addresses blocking scheme."

To know more about this news in detail, visit ProtonMail's official blog post.

Also read:
ProtonMail shares guidelines to help organizations achieve EU GDPR compliance
Hackers claim to have compromised ProtonMail, but ProtonMail calls it 'a hoax and failed extortion attempt'
A security researcher reveals his discovery on 800+ Million leaked Emails available online


Facebook deletes and then restores Warren’s campaign ads after she announced plans to break up Facebook

Sugandha Lahoti | 13 Mar 2019 | 4 min read
Facebook removed several ads placed by 2020 presidential hopeful Senator Elizabeth Warren that called for the breakup of Facebook and other tech giants. Last week, Warren announced that if elected president in 2020, her administration will make big, structural changes to the tech sector to promote more competition by breaking up competition-killing big mergers. The deletion of the ads was first revealed by Politico.

The advertisements read, "Three companies have vast power over our economy and our democracy. Facebook, Amazon, and Google. We all use them. But in their rise to power, they've bulldozed competition, used our private information for profit, and tilted the playing field in their favor." (Source: Politico)

The ads were taken down and a message was displayed stating, "This ad was taken down because it goes against Facebook's advertising policies." However, a Facebook spokesperson later confirmed to Politico that the company is in the process of restoring them. "We removed the ads because they violated our policies against the use of our corporate logo," the spokesperson said. "In the interest of allowing robust debate, we are restoring the ads."

Elizabeth Warren also tweeted about this development, stating that she wants a social media marketplace that isn't dominated by a single censor.
https://twitter.com/ewarren/status/1105256905058979841

This news sparked a massive discussion on Hacker News, where people called it a win-win situation for Warren. One comment reads, "This is smart politics. Rather than simply telling people FB is a monopoly, she runs a limited experiment that had it been left alone, would have limited effect since the budget was so small ($100). Now, this puts FB in a bind. If they really are a middleman for content, then these ads don't violate any laws and shouldn't be blocked. However, FB as a company with a product should block it just like a coffee shop wouldn't allow a banner on the wall saying 'better coffee down the street'."

Another user appreciated Warren's smart move: "It's very smart, her ad campaign people must have known misusing the Facebook logo would get them denied, but now she gets press for being the victim. You or I couldn't run those ads, this is special treatment for her."

A user also condemned Facebook for its dumb move: "I would say this is ordinary politics, an obviously calculated provocation we see every day. What is surprising is a such dumb FB reaction. Why such intelligent people make such dumb moves? Are they really that arrogant? Maybe that arrogance is caused by revenue increases after all these scandals. Maybe they treated politicians, in the same way, many times, but in other countries."

It looks like Warren has won a golden ticket in the 2020 US presidential race, as comments like 'Slay Queen!' and 'My Cherokee Princess' dominated Twitter. Warren's plan is by far one of the biggest tech regulation plans proposed so far in the 2020 presidential cycle. Other Democrats running for the 2020 presidential bid include Senator Kamala Harris, Senator Amy Klobuchar, Congresswoman Tulsi Gabbard, entrepreneur Andrew Yang, Governor Jay Inslee, and Senator Bernie Sanders. Most of them are also keen on tech regulation. Andrew Yang described Warren's anti-monopoly position as "unimaginative" and "retrograde," yet he does believe in taxing tech: because artificial intelligence is destroying jobs, Yang says, the tech industry should pay for a universal basic income. However, neither Klobuchar's nor Yang's message excited people as much as Warren's bold move did.

Also read:
Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws
UK lawmakers publish a report after 18-month long investigation condemning Facebook's disinformation and fake news practices
Facebook and Google pressurized to work against 'Anti-Vaccine' trends after Pinterest blocks anti-vaccination content from its pinboards

OpenAI LP, a new “capped-profit” company to accelerate AGI research and attract top AI talent

Fatema Patrawala | 12 Mar 2019 | 3 min read
In a move that has surprised many, OpenAI yesterday announced the creation of a new for-profit company to balance its huge expenditures on compute and AI talent. Sam Altman, the former president of Y Combinator who stepped down last week, has been named CEO of the new "capped-profit" company, OpenAI LP. But some worry that this move may make the innovative company no different from the other AI startups out there.

With OpenAI LP, the mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world. OpenAI mentions on its blog that "returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress." Any returns beyond the cap revert to OpenAI: an investor putting in $10 million, for example, could receive at most $1 billion back. OpenAI LP's primary obligation is to advance the aims of the OpenAI Charter, and all investors and employees sign agreements that this obligation always comes first, even at the expense of some or all of their financial stake.

But the major reason behind the new for-profit subsidiary can be put plainly: OpenAI needs more money. The company anticipates spending billions of dollars on building large-scale cloud compute, attracting and retaining talented people, and developing AI supercomputers in the coming years. The cash burn rate of a top AI research company is staggering. Consider OpenAI's recent OpenAI Five project, a set of coordinated AI bots trained to compete against human professionals in the video game Dota 2: OpenAI rented 128,000 CPU cores and 256 GPUs at approximately US$2,500 per hour for the time-consuming process of training and fine-tuning its OpenAI Five models. Additionally, consider the skyrocketing cost of retaining top AI talent. A New York Times story revealed that OpenAI paid its Chief Scientist Ilya Sutskever more than US$1.9 million in 2016, and the company currently employs some 100 expensive specialists to develop its AI capabilities, safety measures, and policies.

OpenAI LP will be governed by the original OpenAI board. Only a few directors are allowed to hold financial stakes, and only those without stakes may vote on decisions where financial interests could conflict with OpenAI's mission.

People have linked the new for-profit company with OpenAI's recent controversial decision to withhold the code and training dataset for its language model GPT-2, ostensibly due to concerns they might be used for malicious purposes such as generating fake news. A tweet from a software engineer suggested an ulterior motive: "I now see why you didn't release the fully trained model of #gpt2". OpenAI Chairman and CTO Greg Brockman shot back: "Nope. We aren't going to commercialize GPT-2."

OpenAI aims to forge a sustainable path towards long-term AI development while striking a balance between benefiting humanity and turning a profit. A big part of OpenAI's appeal to top AI talent has been its not-for-profit character; will OpenAI LP mar that? And can OpenAI really strike that balance? Whether the for-profit shift will accelerate OpenAI's mission or prove a detrimental detour remains to be seen, but the journey ahead is bound to be challenging.

Also read:
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words


AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo | 12 Mar 2019 | 4 min read
Amazon Web Services announced a new open source distribution of Elasticsearch named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch is focused on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It gives developers the freedom to contribute open source value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch's Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community has witnessed significant intermixing of proprietary code into the code base. While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. "Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid)." Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. Also, the innovation focus has shifted from furthering the open source distribution to making the proprietary distribution popular, which means the majority of new Elasticsearch users are now, in fact, running proprietary software.

"We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path," the AWS community states in their blog. These changes have also created uncertainty about the longevity of the open source project, as it is becoming less innovation focused. Customers also want the freedom to run the software anywhere and self-support at any point in time if they need to. All of this led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check: Open Distro for Elasticsearch protects a user's cluster with advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in-flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications: Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables a user to monitor data and send notifications automatically to their stakeholders. It also includes an intuitive Kibana interface and powerful API, which further eases setting up and managing alerts.

Increased SQL query interactions: It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. The SQL support offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer: Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, "With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support." Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, "We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology." Christian Kaiser, VP Platform Engineering at Netflix, said, "Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution, that we can be confident will remain open source and community-driven."

To know more about Open Distro for Elasticsearch in detail, visit the AWS official blog post.

Also read:
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]
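As a rough illustration of the SQL feature, the sketch below posts a query to the SQL plugin's REST endpoint from Python. The host, index name, and demo credentials are placeholder assumptions on our part, not values from the announcement.

```python
# A minimal sketch of querying Elasticsearch through the Open Distro SQL
# plugin. Host, index name, and credentials are hypothetical placeholders.
import requests

resp = requests.post(
    "https://localhost:9200/_opendistro/_sql",
    json={"query": "SELECT status, COUNT(*) FROM logs GROUP BY status"},
    auth=("admin", "admin"),  # demo credentials; replace in any real setup
    verify=False,             # demo clusters ship a self-signed certificate
)
resp.raise_for_status()
print(resp.json())
```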


Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

Bhagyashree R | 11 Mar 2019 | 3 min read
Last week, when Janelle Shane, a research scientist in optics, fed the famous rabbit-duck illusion to the Google Cloud Vision API, it returned "rabbit" as the result. However, when the image was rotated to a different angle, the API predicted "duck".
https://twitter.com/JanelleCShane/status/1103420287519866880
Inspired by this, Max Woolf, a data scientist at BuzzFeed, tested further and concluded that the result really does vary based on the orientation of the image:
https://twitter.com/minimaxir/status/1103676561809539072

Google Cloud Vision provides pretrained API models that allow you to derive insights from input images. The API classifies images into thousands of categories, detects individual objects and faces within images, and reads printed words within images. You can also train custom vision models with AutoML Vision Beta.

Woolf used Python to rotate the image and get predictions from the API for each rotation. He built the animations with R, ggplot2, and gganimate, and rendered them with ffmpeg.

In deep learning, a model is often trained using a strategy in which the input images are rotated to help the model generalize better. Seeing the results of the experiment, Woolf concluded, "I suppose the dataset for the Vision API didn't do that as much / there may be an orientation bias of ducks/rabbits in the training datasets."

Reactions to this experiment were divided. While many Reddit users felt that there might be an orientation bias in the model, others felt that as the image is ambiguous there is no "right answer" and hence no problem with the model. One Redditor said, "I think this shows how poorly many neural networks are at handling ambiguity." Another commented, "This has nothing to do with a shortcoming of deep learning, failure to generalize, or something not being in the training set. It's an optical illusion drawing meant to be visually ambiguous. Big surprise, it's visually ambiguous to computer vision as well. There's not 'correct' answer, it's both a duck and a rabbit, that's how it was drawn. The fact that the Cloud vision API can see both is actually a strength, not a shortcoming."

Woolf has open-sourced the code used to generate this visualization on his GitHub page, which also includes a CSV of the prediction results at every rotation. In case you are more curious, you can test the Cloud Vision API with the drag-and-drop UI provided by Google.

Also read:
Google Cloud security launches three new services for better threat detection and protection in enterprises
Generating automated image captions using NLP and computer vision [Tutorial]
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
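For the curious, a minimal sketch of the rotation experiment is below. This is our reconstruction, not Woolf's code; it assumes the google-cloud-vision client library is installed with credentials configured, and the image file name is hypothetical.

```python
# A sketch of the rotation test: rotate the illusion image and ask the
# Cloud Vision API for labels at each angle. Not Woolf's original code.
import io
from PIL import Image
from google.cloud import vision

client = vision.ImageAnnotatorClient()

for angle in range(0, 360, 15):
    rotated = Image.open("rabbit_duck.png").rotate(angle, expand=True)
    buf = io.BytesIO()
    rotated.save(buf, format="PNG")
    response = client.label_detection(image=vision.Image(content=buf.getvalue()))
    top = response.label_annotations[0]  # highest-confidence label
    print(f"{angle:3d} deg -> {top.description} ({top.score:.2f})")
```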


Elizabeth Warren wants to break up tech giants like Amazon, Google Facebook, and Apple and build strong antitrust laws

Sugandha Lahoti | 11 Mar 2019 | 4 min read
Update: Facebook has removed and then restored several ads placed by Elizabeth Warren that called for the breakup of Facebook and other tech giants. More details here.

Last Friday, 2020 presidential hopeful Senator Elizabeth Warren published a Medium post stating that if elected president in 2020, her administration will make big, structural changes to the tech sector to promote more competition, "including breaking up Amazon, Facebook, and Google." She repeated the statement at a campaign rally held in Long Island City, Queens, on Friday. Her stated goal: a government that makes sure all tech companies abide by the rules, so that the next generation of American tech companies can flourish. She wants to stop bigger tech firms from abusing their reach and presence to shape laws in their favor or buy every potential competitor.

Warren highlights two strategies that Amazon, Facebook, Google, and Apple use to achieve dominance. First, they use mergers to limit competition, which government regulators allow instead of blocking for their negative long-term effects on competition and innovation. Second, these companies use proprietary marketplaces to limit competition, which can lead to a conflict of interest that undermines competition. For instance, Warren says, "Amazon crushes small companies by copying the goods they sell on the Amazon Marketplace and then selling its own branded version."

Designate companies as 'Platform Utilities'

Warren's proposal includes a plan to pass legislation designating platforms with more than $25 billion in revenue as "platform utilities": companies that offer the public an online marketplace, an exchange, or a platform for connecting third parties. Warren says that "these companies would be prohibited from owning both the platform utility and any participants on that platform. They would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users and will not be able to transfer or share data with third parties." Amazon Marketplace, Google's ad exchange, and Google Search would be platform utilities under this law. These new requirements would be enforced and monitored by federal regulators, State Attorneys General, or injured private parties. A company found to violate these requirements would have to pay a fine of 5 percent of annual revenue.

Appoint new regulators to break up mergers

Warren also says that her administration would appoint new federal regulators responsible for breaking up mergers that reduce competition, allowing next-generation tech companies to flourish in the markets. These would include unwinding Amazon's mergers with Whole Foods and Zappos, Facebook's with WhatsApp and Instagram, and Google's with Waze, Nest, and DoubleClick. She adds, "unwinding these mergers will promote healthy competition in the market — which will put pressure on big tech companies to be more responsive to user concerns, including about privacy."

What does this mean?

Her main aim with this initiative is to give small companies a fair chance to compete in the market without being overrun by big tech firms. Here's what the Twitterati had to say (mostly supportive):
https://twitter.com/ZephyrTeachout/status/1104560119868723206
https://twitter.com/BatDaddyOfThree/status/1104138757110820866
https://twitter.com/dhh/status/1104076219534979072
https://twitter.com/maxwellstrachan/status/1104051512601382913

This antitrust proposal is indeed a good way to attract the attention of voters; however, it remains to be seen how effective it will be, considering that Facebook, Amazon, and Google have weathered several controversies in recent years with little lasting impact on their user bases. Nevertheless, Warren's plan is by far one of the biggest tech regulation plans proposed so far in the 2020 presidential cycle. If nothing else, it is at least going to spark a major debate about antitrust policy among both Democrats and Republicans.

Also read:
UK lawmakers publish a report after 18-month long investigation condemning Facebook's disinformation and fake news practices
Facebook and Google pressurized to work against 'Anti-Vaccine' trends after Pinterest blocks anti-vaccination content from its pinboards
Experts respond to Trump's move on signing an executive order to establish the American AI initiative

Google updates the AI handwriting recognition feature in Gboard

Natasha Mathur | 11 Mar 2019 | 3 min read
Google announced last week that it has improved the handwriting recognition feature in Gboard, its popular keyboard for mobile devices: recognition is now quite fast and makes 20%-40% fewer mistakes than before. Google added support for handwriting recognition in Gboard for Android last year, covering more than 100 languages. Since then, advances in machine learning have enabled new model architectures and training methodologies. Google replaced its initial approach, which relied on hand-designed heuristics, with a single machine learning model that operates on the whole input and reduces error rates significantly compared to the old version. Google also published a paper titled "Fast Multi-language LSTM-based Online Handwriting Recognition" explaining its research on online handwriting recognition.

The Google team states that since Gboard is used on a range of devices and screen resolutions, the first preprocessing step is to normalize the touch-point coordinates. The team then converts the sequence of points into a sequence of cubic Bézier curves, which are used as inputs to a recurrent neural network (RNN) trained to accurately identify the character being written. Bézier curves provide a consistent representation of the input across devices with different sampling rates and accuracies. Another benefit is that the sequence of Bézier curves is far more compact than the underlying sequence of input points, which makes it easier for the model to pick up temporal dependencies along the input.

Although the sequence of curves represents the input, the researchers still need to translate it into the actual written characters. Hence, a multi-layer RNN is used to process the sequence of curves and produce an output decoding matrix, where each column corresponds to one input curve and each row corresponds to a letter in the alphabet. The researchers settled on a bidirectional version of quasi-recurrent neural networks (QRNNs), which alternate between convolutional and recurrent layers and offer good predictive performance. The QRNN-based recognizer converts the sequence of curves into character sequence probabilities of the same length.

To offer the best user experience, accurate recognition models alone are not enough; they also need to run fast on-device, which is why the researchers converted their recognition models (trained in TensorFlow) to TensorFlow Lite models. "We will continue to push the envelope beyond improving the Latin-script language recognizers. The Handwriting Team is already hard at work launching new models for all our supported handwriting languages in Gboard," states the Google team. For more information, check out the official Google AI blog.

Also read:
Google Cloud security launches three new services for better threat detection and protection in enterprises
Google releases a fix for the zero day vulnerability in its Chrome browser while it was under active attack
Google open-sources GPipe, a pipeline parallelism Library to scale up Deep Neural Network training
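To make the curve representation concrete, here is a minimal sketch of sampling a cubic Bézier curve from four control points. It is a generic illustration of the math, not Google's preprocessing code, and the control-point values are made up.

```python
# Sample points along a cubic Bezier curve defined by four control points.
# Generic illustration of the representation, not Google's implementation.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=16):
    """Return n points along the cubic Bezier with the given control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]  # parameter values, shape (n, 1)
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Hypothetical (x, y) control points fitted to one stroke segment.
pts = cubic_bezier(np.array([0.0, 0.0]), np.array([0.2, 0.9]),
                   np.array([0.7, 0.8]), np.array([1.0, 0.1]))
print(pts.round(2))  # a compact, resolution-independent stroke description
```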


Blue Oak Council publishes model license version 1.0.0 to simplify software licensing for everyone

Natasha Mathur | 11 Mar 2019 | 2 min read
Blue Oak Council Inc, a Delaware nonprofit corporation, published version 1.0.0 of its model license last week. The new license demonstrates the techniques permissive licenses use to make software free and simple for everyone to use and build on. The licensing materials published by Blue Oak are written in everyday language, making it easy for developers, lawyers, and others to understand software licensing without relying on legal help.

The Blue Oak model license 1.0.0 covers purpose, acceptance, copyright, notices, excuse, patent, reliability, and liability. The license states that it gives everyone as much permission to work with the software as possible, while protecting contributors from liability. Users must agree to the rules of the license to receive it and should refrain from doing anything that would defy those rules. Additionally, everyone who gets a copy of any part of the software (with or without changes) must also receive the text of the license or a link to Blue Oak Council license 1.0.0. If anyone notifies a user in writing that they have not complied with the notices requirement, the user can keep their license by taking all practical and necessary steps to comply within 30 days of the notice; failing that, the license ends immediately.

Apart from this, Blue Oak Council has also published example provisions for contracts and grants, along with a corporate open source policy that works with permissive licenses. There is also a list of permissive public software licenses on the OSI and SPDX lists, rated from gold to lead based on criteria such as clarity of drafting, simplicity, and practicality of conditions. For more information, check out the official Blue Oak Council blog post.

Also read:
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
Neo4j Enterprise Edition is now available under a commercial license
Free Software Foundation updates its licensing materials, adds Commons Clause and Fraunhofer FDK AAC license


Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models

Natasha Mathur | 08 Mar 2019 | 3 min read
Microsoft researcher Lester Mackey and his teammates, along with graduate students Jessica Hwang and Paulo Orenstein, have come out with a new machine-learning-based forecasting model and a comprehensive dataset, called SubseasonalRodeo, for training subseasonal forecasting models. Subseasonal forecasting models are systems capable of predicting temperature or precipitation two to six weeks in advance in the western contiguous United States. The SubseasonalRodeo dataset can be found at the Harvard Dataverse, and the researchers present details of their work in the paper "Improving Subseasonal Forecasting in the Western U.S. with Machine Learning".

"What has perhaps prevented computer scientists and statisticians from aggressively pursuing this problem is that there hasn't been a nice, neat, tidy dataset for someone to just download ..and use, so we hope that by releasing this dataset, other machine learning researchers.. will just run with it," says Hwang.

The Microsoft team states that the large amount of high-quality historical weather data, together with today's computational power, makes statistical forecast modeling worthwhile, and that combining the physics-based and statistics-based approaches leads to better predictions. The team's machine-learning-based forecasting system combines two regression models trained on its SubseasonalRodeo dataset. The dataset consists of weather measurements dating as far back as 1948, including temperature, precipitation, sea surface temperature, sea ice concentration, and relative humidity and pressure, consolidated from sources such as the National Center for Atmospheric Research, the National Oceanic and Atmospheric Administration's Climate Prediction Center, and the National Centers for Environmental Prediction.

The first of the two models is a local linear regression with multitask model selection, or MultiLLR. The data used was limited to an eight-week span in any year around the day for which the prediction was being made, and a customized backward stepwise selection procedure consolidated two to 13 of the most relevant predictors to make a forecast. The second model is a multitask k-nearest neighbor autoregression, or AutoKNN, which incorporates historical data of only the measurement being predicted, either temperature or precipitation.

The researchers state that although each model on its own performed better than the competition's baseline models, namely a debiased version of the operational U.S. Climate Forecasting System (CFSv2) and a damped persistence model, the two models address different parts of the subseasonal forecasting challenge. For instance, the first model uses only recent history to make its predictions, while the second doesn't account for other factors. So the team's final forecasting model was a combination of the two.

The team will be further expanding its work in the western United States and will continue its collaboration with the Bureau of Reclamation and other agencies. "I think that subseasonal forecasting is fertile ground for machine learning development, and we've just scratched the surface," says Mackey. For more information, check out the official Microsoft blog.

Also read:
Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
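To illustrate the second model's core idea, here is a toy k-nearest-neighbor autoregression on synthetic data using scikit-learn. It is our simplification of the concept, not the paper's AutoKNN implementation.

```python
# Toy k-nearest-neighbor autoregression: predict the next value of a series
# from the k historical windows most similar to the recent past.
# Synthetic data; a simplification of the idea, not the paper's AutoKNN.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)

lag = 30  # featurize each step by the 30 values preceding it
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

model = KNeighborsRegressor(n_neighbors=5).fit(X[:-1], y[:-1])
print("predicted:", model.predict(X[-1:])[0], "actual:", y[-1])
```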

DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

Natasha Mathur | 07 Mar 2019 | 3 min read
DeepMind researchers published a paper last week titled "Degenerate Feedback Loops in Recommender Systems". In the paper, the researchers provide a new theoretical analysis examining the role of user dynamics and the behavior of recommender systems, which can help disentangle the echo chamber from the filter bubble effect.
https://twitter.com/DeepMindAI/status/1101514121563041792

Recommender systems aim to provide users with personalized product and information offerings. These systems take into consideration the user's personal characteristics and past behaviors to generate a list of items personalized to the user's tastes. Although very successful, they raise the concern that they might lead to a self-reinforcing pattern of narrowing exposure and a shift in user interest, problems often called the "echo chamber" and the "filter bubble". In the paper, the researchers define an echo chamber as a user's interest being positively or negatively reinforced by repeated exposure to a certain category of items. For "filter bubble", they use the definition introduced by Pariser (2011), which states that recommender systems select limited content to serve users online.

The researchers consider a recommender system that interacts with a user over time. At every time step, the system serves a number of items (or categories of items, such as news articles, videos, or consumer products) to the user from a finite or countably infinite set, with the goal of serving items the user is interested in.

[Figure: The interaction between the recommender system and the user]

The paper also accounts for the fact that the user's interaction with the recommender system can change her interest in different items for the next interaction. To further analyze the echo chamber and filter bubble effects, the researchers track when the user's interest changes extremely. Furthermore, they use a dynamical-system framework to model the user's interest, treating interest extremes as the degeneracy points of the system. On the system-design side, the researchers discuss how three independent factors influence the speed of degeneracy: model accuracy, the amount of exploration, and the growth rate of the candidate pool. According to the researchers, continuous random exploration combined with linearly growing the candidate pool is the best defense against system degeneracy.

Although this analysis is quite effective, it has two main limitations. First, user interests are hidden variables that are not observed directly, so a good measure of user interest is needed in practice to reliably study the degeneration process. Second, the researchers assumed that items and users are independent of each other; extending the theoretical analysis to possibly mutually dependent items and users is left for future work. For more information, check out the official research paper.

Also read:
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Blizzard set to demo Google's DeepMind AI in StarCraft 2
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
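The toy simulation below illustrates the kind of degeneracy the paper analyzes; it is our illustration, not the paper's model. A greedy recommender keeps showing the category the user currently likes most, and each exposure reinforces that interest.

```python
# Toy degenerate feedback loop (our illustration, not the paper's model):
# a greedy recommender shows the top category, and exposure reinforces it.
import numpy as np

rng = np.random.default_rng(1)
interest = rng.normal(0.0, 0.1, size=5)  # user interest in 5 item categories

for _ in range(200):
    shown = int(np.argmax(interest))        # always recommend the favorite
    interest[shown] += 0.05 * rng.random()  # exposure nudges interest upward

# One interest runs away while the rest stay flat -- the echo-chamber
# pattern; random exploration would slow this degeneracy.
print(interest.round(2))
```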


OpenAI and Google introduce ‘Activation atlases’, a technique for visualizing neuron interaction in AI systems

Melisha Dsouza | 07 Mar 2019 | 3 min read
OpenAI and Google have introduced a new technique called "activation atlases" for visualizing the interactions between neurons. The technique aims to provide a better understanding of the internal decision-making processes of AI systems and to identify their weaknesses and failures. Activation atlases build on 'feature visualization', a technique for understanding what the hidden layers of neural networks can represent, in turn making machine learning more accessible and interpretable. "Because essential details of these systems are learned during the automated training process, understanding how a network goes about its given task can sometimes remain a bit of a mystery," says Google. Activation atlases answer the question of what an image classification neural network actually "sees" when provided with an image, giving users an insight into the hidden layers of a network.

OpenAI states, "With activation atlases, humans can discover unanticipated issues in neural networks--for example, places where the network is relying on spurious correlations to classify images, or where re-using a feature between two classes lead to strange bugs. Humans can even use this understanding to 'attack' the model, modifying images to fool it."

How activation atlases work

The activation atlases in the paper are built from a convolutional image classification network, Inception v1, trained on the ImageNet dataset. This network progressively evaluates image data through about ten layers, each made of hundreds of neurons, and every neuron activates to varying degrees on different types of image patches. An activation atlas is built by collecting the internal activations of each of these layers across many input images. These activations, represented as high-dimensional vectors, are projected into useful 2D layouts via UMAP. The activation vectors then need to be aggregated into a more manageable number: a grid is drawn over the 2D layout, and for every cell in the grid, all activations that lie within the cell's boundaries are averaged. Feature visualization is then used to create the final representation of each cell.

An example of an activation atlas

Here is an activation atlas for just one layer in a neural network:
[Figure: An overview of an activation atlas for one of the many layers within Inception v1. Source: Google AI blog]
[Figure: Detectors for different types of leaves and plants. Source: Google AI blog]
[Figure: Detectors for water, lakes and sandbars. Source: Google AI blog]

The researchers hope that this paper will give users a new way to peer into convolutional vision networks, enabling them to see the inner workings of complicated AI systems in a simplified way.

Also read:
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
AI Village shares its perspective on OpenAI's decision to release a limited version of GPT-2
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
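The aggregation step can be sketched in a few lines. The code below is our illustration of the described pipeline, not the authors' implementation; it uses random placeholder activations and assumes the umap-learn package is installed.

```python
# Sketch of the atlas aggregation step: project activations to 2D with UMAP,
# then average the activation vectors landing in each grid cell. Placeholder
# random activations; our illustration, not the authors' code.
import numpy as np
import umap

acts = np.random.rand(2000, 128)                 # (examples, hidden units)
xy = umap.UMAP(n_components=2).fit_transform(acts)

grid = 20
span = np.ptp(xy, axis=0) + 1e-9                 # avoid division by zero
cell = np.clip(((xy - xy.min(axis=0)) / span * grid).astype(int), 0, grid - 1)

sums, counts = {}, {}
for c, a in zip(map(tuple, cell), acts):
    sums[c] = sums.get(c, 0.0) + a
    counts[c] = counts.get(c, 0) + 1

# Each averaged vector would then be rendered with feature visualization.
averages = {c: sums[c] / counts[c] for c in sums}
print(len(averages), "non-empty grid cells")
```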