
Tech News - Data

1208 Articles

Digital Ocean announces ‘Managed Databases for PostgreSQL’

Savia Lobo
15 Feb 2019
3 min read
Yesterday, the team at Digital Ocean announced ‘Managed Databases for PostgreSQL’, a fully managed and feature-rich database service, as a Valentine's Day gift for its users. The new Managed Databases offering, together with PostgreSQL support, lets developers quickly build a scalable, high-performance database cluster with less hassle. Notably, users need not know anything about the Linux operating system or specific DevOps maintenance tasks.

Managed databases take care of several challenges, including:
- identifying the optimal database infrastructure footprint
- scaling infrastructure as business and data requirements grow
- designing and managing highly available infrastructure and failover processes
- implementing a complete and reliable backup and recovery strategy
- forecasting and maintaining operational infrastructure costs

The team at Digital Ocean writes, “You’ll enjoy simple, predictable pricing that allows you to control your costs. Spin up a database node starting from $15 per month or high availability cluster from $50 per month. Backups are included for free with your service to keep things simple. Ingress bandwidth is always free, and egress fees ($0.01/GB per month) will be waived for 2019.”

Benefits of Managed Databases

Hassle-free database maintenance
Managed databases save a lot of time. Users simply deploy a database, and the service handles the rest. They do not have to worry about security patches to the OS or database engine; once a new version or patch is available, a simple click enables it.

Highly secure and optimized for performance
All data in the new managed databases is encrypted at rest and in transit. Users can use the Cloud Firewall to restrict connections to their database. The database runs on enterprise-class VM hardware with local SSD storage, giving lightning-fast performance.

Easy scalability
With Managed Databases, users can scale up at any time with virtually no impact on their application. They can spin up read-only nodes to scale read operations or remove compute overhead from reporting requirements.

Automatic failovers
If any issue occurs with the primary node, traffic is automatically routed to the standby nodes. The team at Digital Ocean recommends selecting a high-availability option to minimize the impact of a failure.

Simple and reliable backup and recovery
Backups are handled automatically and free of cost. Full backups are taken every day, and write-ahead logs are maintained to allow users to restore to any point in time during the retention period.

To know more about these new Managed Databases, visit the Digital Ocean website.

Related reads:
- Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
- Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
- 2018 is the year of graph databases. Here’s why.
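Since the service encrypts connections in transit, clients connect to a managed cluster over TLS like any other PostgreSQL server. The snippet below is a minimal, hypothetical connection sketch using psycopg2; the hostname, port, and credentials are placeholders, not values from the announcement.

```python
# Hypothetical connection sketch for a managed PostgreSQL cluster; replace the
# placeholder host, port, and credentials with the values from your own cluster.
import psycopg2

conn = psycopg2.connect(
    host="db-postgresql-example.db.ondigitalocean.com",  # placeholder hostname
    port=25060,                    # placeholder; use the port shown in your control panel
    dbname="defaultdb",
    user="doadmin",
    password="your-password-here",
    sslmode="require",             # managed clusters encrypt traffic in transit
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```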


OpenAI’s new versatile AI model, GPT-2, can efficiently write convincing fake news from just a few words

Natasha Mathur
15 Feb 2019
3 min read
OpenAI researchers demonstrated a new AI model yesterday, called GPT-2, that is capable of generating coherent paragraphs of text without needing any task-specific training. In other words, give it the first line of a story, and it will write the rest. Apart from generating articles, it can also perform rudimentary reading comprehension, summarization, machine translation, and question answering.

GPT-2 is an unsupervised language model comprising 1.5 billion parameters, trained on a dataset of 8 million web pages. “GPT-2 is simply trained to predict the next word in 40GB of internet text”, says the OpenAI team. The OpenAI team states that it is superior to other language models trained on specific domains (like Wikipedia, news, or books), as it doesn’t need those domain-specific training datasets. For language-related tasks such as question answering, reading comprehension, and summarization, GPT-2 can learn directly from raw text and doesn’t require task-specific training data.

The OpenAI team describes the GPT-2 model as ‘chameleon-like’: it easily adapts to the style and content of the input text. However, the team has observed certain failures in the model, such as repetitive text, world-modeling failures, and unnatural topic switching. Finding a good sample depends on how familiar the model is with that sample’s context. For instance, when the model is prompted with topics that are highly represented in the data, like Miley Cyrus or Lord of the Rings, it is able to generate reasonable samples about 50% of the time. On the other hand, the model performs poorly on highly technical or complex content.

The OpenAI team envisions the use of GPT-2 in the development of AI writing assistants, advanced dialogue agents, unsupervised translation between languages, and enhanced speech recognition systems. It has also pointed out the potential misuses of GPT-2, as it can be used to generate misleading news articles and to automate the large-scale production of fake and phishing content on social media.

Due to concerns about this misuse of language-generating models, OpenAI has decided to release only a ‘small’ version of GPT-2, along with its sampling code and a research paper, for researchers to experiment with. The dataset, training code, and GPT-2 model weights have been excluded from the release. The OpenAI team states that this release strategy will give it and the overall AI community time to discuss the implications of such systems more deeply. It also wants governments to take initiatives to monitor the societal impact of AI technologies and to track the progress of capabilities in these systems. “If pursued, these efforts could yield a better evidence base for decisions by AI labs and governments regarding publication decisions and AI policy more broadly”, states the OpenAI team.

Public reaction to the news is largely positive; however, not everyone is okay with OpenAI’s release strategy, and some feel that the move signals ‘closed AI’ and propagates a ‘fear of AI’:
https://twitter.com/chipro/status/1096196359403712512
https://twitter.com/ericjang11/status/1096236147720708096
https://twitter.com/SimonRMerton/status/1096104677001842688
https://twitter.com/AnimaAnandkumar/status/1096209990916833280
https://twitter.com/mark_riedl/status/1096129834927964160

For more information, check out the official OpenAI GPT-2 blog post.

Related reads:
- OpenAI charter puts safety, standards, and transparency first
- OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
- OpenAI builds reinforcement learning based system giving robots human like dexterity
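For readers who want to try the model that was released, the sketch below generates a continuation from a short prompt. It uses the Hugging Face transformers library rather than OpenAI's original TensorFlow sampling code, so the package and model name here are assumptions about your local setup, not part of OpenAI's release.

```python
# Illustrative only: sampling a continuation from the publicly released small
# GPT-2 model via the `transformers` package (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the released "small" model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample 60 tokens; top-k sampling keeps the output coherent but varied.
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```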


Reddit’s 2018 Transparency report includes copyright removals, restorations, and more!

Savia Lobo
14 Feb 2019
4 min read
Yesterday, the Reddit community released its Transparency Report for 2018. The report includes additional information on copyright removals, restorations, and retractions, as well as removals for violations of Reddit’s Content Policy and subreddit rules.

In 2018, Reddit received 752 requests from governmental entities for the preservation or production of user account information. Reddit carefully reviewed each request for compliance with legal standards and followed the procedure described in its Privacy Policy. Of the 752 requests submitted by governmental entities:
- 171 were requests to preserve user account information; and
- 581 were requests to produce user account information.

According to the report, “In 2018, Reddit received 171 preservation requests, a 116% increase over the 79 preservation requests received in 2017. Reddit complied with 91% of the preservation requests received.” (Source: Reddit report)

Reddit also sometimes receives requests from governmental entities to produce information. On receiving such a request, Reddit reviews it to ensure it is consistent with ECPA and is otherwise legally valid. In 2018, Reddit received a total of 581 requests to produce user account information from both United States and foreign governmental entities, a 151% increase over the number received in 2017. (Source: Reddit report)

Reddit received 319 non-emergency pieces of legal process from United States governmental entities seeking the production of user account information, such as subpoenas, court orders, and search warrants. It also received 28 requests for the production of user account information from foreign governmental authorities (excluding emergency requests). In addition, it received a total of 234 Emergency Disclosure Requests globally and disclosed user account information in response to 162 (69%) of these requests.

Along with governmental requests, Reddit received 15 requests for private user information from non-governmental entities, an increase from the 5 non-governmental requests received in 2017.

Reddit also received content removal requests from:
- governmental entities and other civil legal demands, for reasons such as alleged violations of local laws;
- copyright owners regarding alleged copyright infringement; and
- users or Reddit administrators regarding violations of Reddit’s Content Policy.

One request to remove content came from a governmental entity in the US and had nothing to do with copyright. “The request was for the removal of an image and a large volume of comments made underneath it for potential breach of federal law," the report says. "As the governmental entity did not provide sufficient context regarding how the image violated the law, did not provide Reddit with valid legal process compelling removal, and the request to remove the entire post as well as the comment thread appeared to be overbroad, Reddit did not comply with the request."

According to the report, prior to 2018 each piece of content that was requested to be removed was counted as a distinct DMCA notice, which is how the notice numbers in previous Transparency Reports were reported (3,294 “notifications” in 2016 and 7,825 in 2017). The report states, “The number of notices Reddit received in 2018 more than tripled from the 3,130 DMCA notices (and the 7,825 removal requests) received in 2017, and increased by over 8 times from the 1,155 notices (and 3,294 removal requests) received in 2016.” (Source: Reddit report)

When asked about Reddit’s 2018 transparency report, Reddit’s CEO Steve Huffman said, “This year, we expanded the report to include details on two additional types of content removals: those taken by us at Reddit, Inc., and those taken by subreddit moderators (including Automod actions). We remove content that is in violation of our site-wide policies, but subreddits often have additional rules specific to the purpose, tone, and norms of their community. You can now see the breakdown of these two types of takedowns for a more holistic view of company and community actions.”

To know more about the report in detail, read Reddit’s Transparency Report 2018.

Related reads:
- Reddit has raised $300 million in a new funding round led by China’s Tencent
- Reddit takes stands against the EU copyright directives; greets EU redditors with ‘warning box’
- Reddit posts an update to the FireEye’s report on suspected Iranian influence operation
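As a quick sanity check on the percentages quoted above, here is a small worked example; it is an editorial illustration, not part of the report.

```python
# Quick arithmetic check of the year-over-year changes quoted in the report.
def pct_increase(new: int, old: int) -> float:
    return (new - old) / old * 100

print(f"preservation requests: {pct_increase(171, 79):.0f}% increase")  # ~116%
print(f"emergency disclosures answered: {162 / 234:.0%}")               # ~69%

# The reported 151% rise in production requests implies roughly 581 / 2.51 ≈ 231
# such requests in 2017 (a derived estimate; the figure is not in the article).
print(f"implied 2017 production requests: {581 / 2.51:.0f}")
```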


Liz Fong-Jones, prominent ex-Googler, shares her experience at Google and ‘grave concerns’ for the company

Natasha Mathur
14 Feb 2019
5 min read
“I can no longer bail out a raft with a teaspoon while those steering punch holes in it” -- Liz Fong-Jones

Liz Fong-Jones, a former Google engineer and an activist known for being outspoken about employee rights in Silicon Valley, published a post on Medium yesterday. In the post, she talks about her ‘grave concerns’ over strategic decisions made at Google and the way it has ‘misused its power’ by putting profits above the well-being of people.

Jones, who emerged as a prominent figure in the field of Site Reliability Engineering, had joined Google 11 years ago. However, she left the company last month, citing Google’s lack of leadership in response to the Google walkout demands in November 2018. “I can’t continue burning myself out pushing for change. Instead, I am putting my own health first by joining a workplace that has a more diverse and fair working environment”, writes Jones.

The Google Walkout was a response to a report on workplace sexual harassment by the New York Times in October 2018. The report revealed that Google protected senior execs accused of sexual abuse within the workplace and paid them large exit packages (a $90 million payout to Andy Rubin). Jones mentions that it was this event that “utterly shattered employees’ trust and goodwill in the management”.

She writes that Google management failed to effectively address the Google Walkout demands. Apart from not meeting the structural demand for an employee representative on the board, Google also didn’t entirely put an end to forced arbitration within the workplace. A group of Google employees called ‘Googlers for ending forced arbitration’ launched a public awareness social media campaign last month to educate people across industries about the forced arbitration policy via Instagram and Twitter. They argued that Google’s announcement on ending forced arbitration only made for strong headlines and did not actually do enough for employees: although Google made arbitration optional in cases of sexual harassment for employees (excluding contractors and temps), it didn’t do so for other forms of discrimination. Google TVCs also wrote an open letter to Google’s CEO last December, demanding equal benefits and treatment and reiterating the demands of the walkout.

Additionally, two shareholders, James Martin and the pension funds, sued Alphabet’s board members last month for protecting the top execs accused of sexual assault. The lawsuit alleged that Google directors agreed to pay Rubin to ‘ensure his silence’ and suppress information about the misconduct of other executives. Jones further states that Google also filed a motion before the National Labor Relations Board to overturn a ruling that permitted employees to organize on company email and document systems.

Beyond the issues around the Google walkout, Jones also wrote about the Google+ launch, when Google employees, including herself, opposed the Google+ ‘real name’ policy, under which people had to use their legal names on the platform. “In doing so, Google+ would create yet another space inaccessible to some teachers, therapists, LGBT+ people, and others who need to use a different identity for privacy and safety”, mentions Jones. Despite the employees’ opposition, Google+ launched in mid-2011 with a real-name policy.

Moreover, there was also an increase in harassment, doxxing (a harassment method that reveals a person’s personal information on the internet), and hate speech targeted at marginalized employees within Google’s internal communications; management, however, silently tolerated it. In fact, when employees attempted to raise concerns about harassment internally (via the official channels), they were either ignored or punished for doing so. Another common issue was an increasing willingness to compromise on ethics for profit. “Google will need to fundamentally change how it is run in order to win back the trust of workers and prevent a catastrophic loss of long-tenured employees, especially those from vulnerable groups”, writes Jones.

Jones is also contributing a $100,000 payout from Google to support Google workers (especially contractors and H-1B workers) who may face retaliation for future organizing, and other workers have pledged a further $150,000. Groups such as Coworker.org and the Tech Workers Coalition are also helping Jones and other employees learn more about their rights. She mentioned that although she is no longer part of Google, she will remain ‘fiercely loyal’ to its employees who have committed themselves to developing ethical products and who continue to advocate for equal and fair treatment of their colleagues. “The labor movement at Google is larger and stronger than ever, and it will continue to advance human rights..regardless of whether management supports them”, writes Jones.

For complete information, check out the official Liz Fong-Jones Medium post.

Related reads:
- Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
- Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’
- Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers


Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report

Bhagyashree R
14 Feb 2019
4 min read
On Tuesday, Frances Cairncross, a British economist, journalist, and academic, shared her work in a report on the present and future of the news media market. This independent report, The Cairncross Review: A Sustainable Future for Journalism, suggests that online platforms including Google, Facebook, Apple, and Twitter should be regulated in the way they distribute news. Along with giving an insight into the current state of the news media market, it suggests measures online platforms could take to ensure the financial sustainability of publishers, and it shows how search engines, social media platforms, and digital advertising are affecting the news market.

The report highlights that most digital advertising revenue goes to Google and Facebook because of their reach and the personal data they collect about users. This makes it difficult for publishers to compete, and as a result their revenue has dipped. To address this revenue gap between online platforms and publishers, and to prevent the spread of misinformation, the review suggests that it is time for the government to step in.

How can the government bridge the revenue gap and prevent the spread of fake news?

The review proposes that online platforms define ‘codes of conduct’ to put a check on the commercial arrangements between publishers and online platforms. To ensure compliance, the process should be overseen by a government regulator with an understanding of both economics and digital technology. The regulator would have the power to demand information and could also set out a compulsory set of minimum requirements for these codes.

In recent years, online platforms have come under constant public scrutiny because of the spread of fake news and misinformation. That is why these platforms started putting in place measures to help users identify the reliability and trustworthiness of sources. Though the review recommends expanding these efforts, it adds that “This task is too important to leave entirely to the judgment of commercial entities.” This essentially means these measures would also be regulated. Initially, this would be limited to gathering information on the steps online platforms are taking to improve people’s awareness of the origins and quality of the news they read.

In a discussion on Hacker News, some users were for this proposal while others were against it. One user commented, ”So on to the idea that Google and Facebook should be regulated, I think it's an absolutely horrible idea. We are talking of censorship conducted by the government, the worst kind there is. And thinking of our own government, I can't think of people that are more corrupt or incompetent. Just fucking educate people on fact-checking and elementary logic. Push for some lessons in high-school or whatever.” Another user added, “Facebook or Google need to be regulated because they are radicalizing people in search of more engagement to sell more ads. That is not a good situation, and there is no incentive for them to stop doing it. As it is the best way to get more money.”

In a separate report, The Wall Street Journal reported that Apple and major publishers are negotiating over a subscription news service. Apple has suggested a revenue split under which 50% of a proposed $10/month membership fee would go to Apple, with the remaining 50% shared among the participating publishers. The WSJ reports that publishers will most likely not agree to this split.

For more details, read the report published by Frances Cairncross.

Related reads:
- Google announces the general availability of a new API for Google Docs
- Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
- Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’


FAIR releases a new ELF OpenGo bot with a unique archive that can analyze 87k professional Go games

Natasha Mathur
14 Feb 2019
3 min read
It was last May that Facebook AI Research (FAIR) released ELF OpenGo, an open source AI bot that has defeated world champion professional Go players, built on its existing ELF platform for reinforcement learning research. Yesterday, FAIR announced new features and research results related to ELF OpenGo, including an updated model, a Windows executable version of the bot, and a unique archive analyzing 87,000 professional Go games.

ELF OpenGo, an open-source reimplementation of the AlphaZero algorithm, is the first open-source Go AI to have convincingly demonstrated superhuman performance, achieving a 20:0 record against top global professionals. The FAIR team has updated the ELF OpenGo model by retraining it from scratch, and has also provided a data set of 20 million self-play games and the 1,500 intermediate models used to generate them, which further reduces the need for computing resources.

Putting the new model to the test

The FAIR team used the new model to analyze games played by professional human players and observed that its ability to predict the players’ moves dropped very early in its learning process (after less than 10 percent of the total training time). But as the model continued training, its skill level kept improving, ultimately leading it to beat the team’s earlier prototype ELF OpenGo model 60 percent of the time. “That prototype system was already outperforming human experts, having achieved a 20-0 record against four Go professionals ranked among the top 30 players in the world”, states the team.

However, the exploration of ELF OpenGo's learning process also revealed some important limitations specific to deep RL. For instance, like AlphaZero, the new ELF OpenGo model never fully masters the concept of “ladders", a common technique in which one player traps the other's stones in a long formation. The FAIR team therefore curated a data set of 100 ladder scenarios and evaluated ELF OpenGo's performance on them.

Analyzing 87k professional Go games

The team has also released a new interactive tool based on ELF OpenGo's analysis of 87,000 games played by humans. The data set spans 1700 to 2018, and the system evaluates the quality of individual moves based on the agreement between the moves predicted by the bot and those played by the human players. “Though the tool encourages deep dives into specific matches, it also highlights important trends in Go. In analyzing games played over that period of more than 300 years, the bot found average strength of play has improved fairly steadily”, states the team. You can also analyze individual players, such as Honinbo Shusaku (the most famous Go player in history), which shows different trends in comparison with ELF OpenGo depending on the stage of the gameplay.

“Though ELF OpenGo is already being used by research teams and players around the world, we're excited to expand last year's release into a broader suite of open source resources. By making our tools and analysis fully available, we hope to accelerate the AI community's pursuit of answers to these questions”, states the FAIR team.

For more information, check out the official ELF OpenGo announcement.

Related reads:
- Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
- Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
- Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
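The move-agreement idea behind the archive is easy to illustrate. The toy sketch below is not FAIR's code; it simply shows how an agreement rate between an engine's preferred move and the move a professional actually played could be computed, with made-up positions and a dummy predictor standing in for ELF OpenGo.

```python
# Hypothetical example: measuring how often an engine's preferred move matches
# the move a professional actually played, in the spirit of FAIR's analysis tool.

def agreement_rate(games, predict_move):
    """games: iterable of (position, human_move) pairs.
    predict_move: function position -> engine's top move (stand-in for the bot)."""
    matches = total = 0
    for position, human_move in games:
        total += 1
        if predict_move(position) == human_move:
            matches += 1
    return matches / total if total else 0.0

# Toy usage with invented positions and a dummy predictor:
toy_games = [("pos1", "D4"), ("pos2", "Q16"), ("pos3", "C3")]
dummy_engine = {"pos1": "D4", "pos2": "Q16", "pos3": "D17"}.get
print(f"agreement: {agreement_rate(toy_games, dummy_engine):.0%}")  # agreement: 67%
```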

Batch: a Special Case of Streaming

Amrata Joshi
14 Feb 2019
4 min read
Last week, the team at Apache Flink announced that Alibaba has decided to contribute its Flink fork, called Blink, back to the Apache Flink project.

A unified approach to batch and streaming

Apache Flink follows the philosophy of taking a unified approach to batch and streaming data processing. The core building block is "continuous processing of unbounded data streams"; with this continuous processing, users can also do offline processing of bounded data sets. Batch is treated as a special case of streaming and is supported by various projects such as Flink and Beam. It is a powerful way of building data applications that generalize across real-time and offline processing, and it reduces the complexity of data infrastructures.

However, "batch is just a special case of streaming" does not mean that any stream processor is now the right tool for batch processing use cases. Pure stream processing systems are slow at batch workloads; a stream processor that shuffles through message queues to analyze large amounts of available data is not useful. Unified APIs such as Apache Beam delegate to different runtimes based on whether the data is continuous/unbounded or fixed/bounded. For example, the batch and streaming runtimes of Google Cloud Dataflow are implemented differently to get the desired performance and resilience in each case. Apache Flink has a streaming API that can handle bounded and unbounded use cases, and it also offers a separate DataSet API and runtime stack that is faster for batch use cases.

What can be improved?

To make Flink’s experience on bounded data (batch) state-of-the-art, a few enhancements are required:
- A truly unified runtime operator stack: currently, the bounded and unbounded operators have different network and threading models and don't mix and match. Continuous streaming operators are the foundation of a unified stack; when operating on bounded data without latency constraints, the API or the query optimizer can select from a larger set of operators.
- Exploiting bounded streams to reduce the scope of fault tolerance: when input data is bounded, it is possible to completely buffer data during shuffles and replay that data after a failure, which makes recovery fine-grained and much more efficient.
- Exploiting bounded-stream operator properties for scheduling: a continuous, unbounded streaming application needs all of its operators running at the same time, whereas an application on bounded data can schedule operations depending on how the operators consume data, which increases resource efficiency.
- Enabling these special-case optimizations for the DataStream API: currently, only the Table API activates these optimizations when working on bounded data.
- Performance and coverage for SQL: to be competitive with the best batch engines, Flink needs more coverage and performance for SQL query execution. Since the core data plane in Flink is high performance, the speed of SQL execution depends on optimizer rules, a rich set of operators, and features like code generation.

Merging Blink and Flink

Because Blink’s code is currently available as a branch in the Apache Flink repository, it is difficult to merge such a large volume of changes while keeping the merge process as non-disruptive as possible. The merge plan focuses on the bounded/batch processing features and follows this approach to ensure a smooth integration:
- For merging Blink’s SQL/Table API query processor enhancements, the work is easier because both Flink and Blink share the same APIs: SQL and the Table API. Following some restructuring of the Table/SQL module, the team plans to merge the Blink query planner (optimizer) and runtime (operators) as an additional query processor next to the current SQL runtime. Initially, users will be able to select which query processor to use; after a transition period, the current processor will be deprecated and eventually dropped.
- The Flink community is working on refactoring its current scheduler and adding support for pluggable scheduling and failover strategies. Once this is done, the team can add Blink’s scheduling and recovery strategies as a new scheduling strategy used by the new query processor. The new scheduling strategy will also be used for bounded DataStream programs.

To know more, check out Apache’s official post.

Related reads:
- LLVM officially migrating to GitHub from Apache SVN
- Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
- Confluent, an Apache Kafka service provider adopts a new license to fight against cloud service providers
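To make the "batch is a special case of streaming" idea concrete, here is a small, generic sketch using Apache Beam's Python SDK. It is not code from the Flink/Blink announcement: the same pipeline definition processes a bounded in-memory source here, and swapping in an unbounded source such as a Kafka or Pub/Sub connector would make it a streaming job without changing the transforms.

```python
# Illustrative only: a Beam pipeline whose transforms are identical for bounded
# and unbounded input; only the source (and the runner's execution mode) differs.
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | "ReadBounded" >> beam.Create(["error: disk", "info: ok", "error: net"])
     | "KeepErrors"  >> beam.Filter(lambda line: line.startswith("error"))
     | "Count"       >> beam.combiners.Count.Globally()
     | "Print"       >> beam.Map(print))
```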


Experts respond to Trump’s move on signing an executive order to establish the American AI Initiative

Amrata Joshi
14 Feb 2019
4 min read
On Monday, U.S. President Donald Trump signed an executive order laying out a national plan to boost leadership in artificial intelligence (AI) technology by establishing the American AI Initiative. According to most experts, the move appears to be aimed at China's swift rise in AI: nearly two years ago, the Chinese government released its own sweeping AI plan and has committed tens of billions of dollars toward developing it, and Trump's new order looks like the American response. The official announcement frames it as an effort to win an AI arms race: “Americans have profited tremendously from being the early developers and international leaders in AI. However, as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed.”

The announcement mentions five major areas of action:
- Investing in AI research and development (R&D) by having federal agencies increase funding for AI R&D
- Making federal data and computing power more available for AI purposes and further unleashing AI resources
- Setting government standards for safe and trustworthy AI
- Building and training an AI workforce
- Engaging with international allies while protecting the technology from foreign adversaries

Trump said in a statement accompanying the order, "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States." The executive order did not allocate any additional federal funding toward executing this AI vision; instead, it calls on federal agencies to prioritize existing funds for AI projects.

Responses from the experts

In responses to IEEE Spectrum, most experts said the order is likely a reaction to China's AI policy, which calls for major investment to make China the world leader in AI by 2030. In a recent interview, the former head of Google China even explained to IEEE Spectrum why China has the edge in AI.

According to Darrell West, director of the Brookings Institution's center for technological innovation and author of the recent book The Future of Work: Robots, AI, and Automation, Trump is trying to put the American AI Initiative at the top and compete in the AI race, but the implementation remains unclear. He said, “Trump is signing an executive order on AI because it is the transformative technology of our time and he needs a national strategy on how to retain U.S. preeminence in this area. Critics complain there is no national strategy, so he is using the executive order to explain how the government can help through R&D support, workforce development, and infrastructure enhancement. The order is a step in the right direction, but it is not clear whether there is new funding to support the initiative or how it will be implemented.”

Daniel Castro, director of the Center for Data Innovation, was more positive: “Ensuring American leadership in artificial intelligence is critical for U.S. competitiveness. Accelerating the development and adoption of AI holds the potential to increase productivity, grow the economy, and harness the many societal benefits the technology can bring. The administration’s initiative will prioritize AI research and training programs and boost auxiliary infrastructure such as data and other inputs.”

Amy Webb, a “quantitative futurist” and author of a forthcoming book about AI called The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity, doesn't envy the legislators. In her view, the American AI Initiative is vague and lacks details: “The American AI Initiative at the moment is a collection of bullet points. It is vague at best and makes zero mention of detailed policy, a concrete funding plan, or a longer-term vision for America’s future.”

Lawmakers and major tech companies welcomed the move, with many tech companies now seeing it as an opportunity to cash in on AI. Intel said in a statement that it makes "perfect sense" for federal agencies to play a "key role in AI implementation."

To know more about this news, check out the official announcement.

Related reads:
- The US to invest over $1B in quantum computing, President Trump signs a law
- Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology
- The U.S. just launched the American AI Initiative to prioritize AI research and development


Tyler Tringas, co-founder of Earnest Capital goes live on Hacker News to answer comments as EC launches

Sugandha Lahoti
14 Feb 2019
9 min read
Yesterday, Tyler Tringas, co-founder of Earnest Capital, went live on Hacker News to answer questions. For those not aware, Earnest Capital provides funding and mentorship for bootstrappers, indie startups, and hackers, mostly in SaaS, e-commerce, and scalable online education. Earnest Capital has been receiving a lot of attention because its investing structure differs from traditional VCs and accelerators: it uses a novel Shared Earnings Agreement (SEA) investing model.

Key attributes of a Shared Earnings Agreement by Earnest Capital (a small worked sketch of these mechanics appears at the end of this piece):
- We invest upfront capital at the early stage of businesses, typically (but not always) after a product has launched but before the founders go full-time.
- We agree on a Return Cap which is a multiple of the initial investment (typically 3-5x).
- We don’t have any equity or control over the business. No board seats either. You run your business as you see fit.
- As your business grows we calculate what we call “Founder Earnings” and Earnest is paid a percentage. Essentially we get paid when you and your co-founder get paid.
- Founder Earnings = Net Income + any amount of founders’ salaries over a certain threshold. If you want to eat ramen, pay yourselves a small salary, and reinvest every dollar into growth, we don’t get a penny and that’s okay. We get earnings when you do.
- Unlike traditional equity, our share of earnings is not perpetual. Once we hit the Return Cap, payments to Earnest end.
- In most cases, we’ll agree on a long-term residual stake for Earnest if you ever sell the company or raise more financing. We want to be on your team for the long-term, but don’t want to provide any pressure to “exit.”
- If you decide you want to raise VC or other forms of financing, or you get an amazing offer to sell the company, that’s totally fine. The SEA includes provisions for our investment to convert to equity alongside the new investors or acquirers.

The Hacker News conversation was a big hit, with Tyler mostly answering questions and offering advice to upcoming entrepreneurs while also explaining the strategy. Per Tyler, Earnest Capital works like a "profit-share + a SAFE". The primary function is to share in the profit (or more specifically, "founder earnings") of the business alongside the founder(s). If the business owner later decides to sell the business or raise a big equity round, Earnest converts into a SAFE.

On how it differs from a venture capital investment

A user asked, “If I read your agreement correctly, your terms are so that you invest $150k in what is or nearly is a bootstrapped business, on what essentially seems like a profit share basis, and expect to get paid for your doing so until you've made $3M?” Tyler’s response: “$3m?! No. We have a Return Cap which is negotiated on a per deal basis but we guide toward 3-5x the initial investment. This post walks through each of the terms in detail.”

The traditional VC model usually works in cycles: founders raise some money, then build and sell, raise some more, build and sell, and the cycle continues until they have raised enough to be at least close to a profit. “How does that work with bootstrappers? Ideally, they'd only have to raise money once (from you), but what happens after the $100k (example) run out and the business is only generating, let's say, $2k/month? Back to 9-to-5?” asked another user. Tyler argued that this is true only in some cases. Earnest Capital’s goal is for founders to get to “personal break-even, where they can pay themselves enough to work on the business full-time, by the time our investment runs out. Some percentage of these will fail (startups are hard) and we're expecting that”, he added.

On comparison with TinySeed

People also appreciated the novel non-VC funding space and asked Tyler for a quick compare and contrast with TinySeed, a startup accelerator designed for bootstrappers of similar ilk and recency. Tyler commented, “Specific to the funding model, we both do a kind of profit-share, with the main difference being that Earnest's repayment will usually happen earlier (assuming the business is successful) but is capped, Tinyseed payments would be smaller in the earlier years and keep growing over time perpetually. Neither one is "better" and I probably wouldn't advise founders to choose between an offer from both on the basis of just the funding model.”

On the 3-5x return cap

People also had concerns about the 3-5x return cap, with some calling Earnest Capital “more of a charity than a profit-making enterprise”, to the point of calling it altruistic. A Hacker News user observed, “For the math to work out, with a 3x cap on earnings, you need 33% of the businesses to be successful just to get your money back. At 5x you still need 20%. And that is over however long it takes for those companies to reach payback, which could be measured in decades for some of the companies.” To this, Tyler said, “We also have a residual, uncapped, % option if the founder ever sells the business. This keeps us aligned with the founders to keep helping them grow the value of the business for the long-term even after the Return Cap is paid back.” He added that they are preparing to fine-tune the return cap model: building, measuring, learning, and iterating as they go. Basically, he added, “By default, we don't take equity (shares, a board seat, none of that). If you decide to raise a round of equity financing (ie VC) we could convert into equity alongside them and if you sell the company we get a % of that.”

A user countered, “This return cap is stated as 9.5% in the spreadsheet. But, where did that come from? Is this is a stock option, convertible note, or equity position?” Another user added, “They don't explain it, but it sounds like the whole deal is effectively seed financing where they eventually get a 9.5% stake, but also with a 3-5x loan interest payment once you make money (which they're framing as "Shared Earnings"). And they're trying to hide the 9.5% part.” Tyler offered no comments on this thread. On being asked why the return cap is better than just taking out a 20-30% APR business loan, Tyler said, “At the risk of not answering the question, I'd say no form of capital is "better" than any other. Capital is a tool and the job of the founder is to find the option (both on payments term and other aspects like mentorship or personal exposure) that best aligns with their goals.”

On his thoughts on Jerry Neumann's theory

Jerry Neumann, a venture capitalist at Neu Venture Capital, has a theory about why "there's a reason why for decades, there were only bank loans and VC and not much in-between." Per his theory, there are three categories of companies (determined by the alpha value of the power-law distribution they're in):
1. Companies where the risk and the upside potential are small. This is where bank loans are focused.
2. Companies where the risk is enormous but the upside potential is "meh".
3. Companies where the risk is enormous but the upside potential is also enormous. This is where VC is focused, and it's why VCs are all about finding those few big hits, because those cover the losses (or mediocre performance) of the rest.

Neumann appears pretty confident about this hypothesis, not because he can explain the underlying phenomenon, but simply because until now he has not seen much successful funding for companies that is neither VC nor bank loans. A Hacker News user pointed out that if his hypothesis is right, then Earnest Capital is targeting companies of type 2: investments with enormous risk (comparable to that of a high-growth startup) while hard-capping the upside at 5x, which "seems madness."

Tyler commented, “I like Jerry's work a lot but come to a different conclusion. My basic thesis is that we're in the deployment age of the internet/web/mobile era and there is a whole new wave of a lot lower risk and a bit fewer reward opportunities for companies to bring the "peace dividend" of the software areas into markets that are not winner-take-all. The upside is these businesses are much more capital efficient, can scale and potentially produce much higher returns than SMBs from previous eras. The downside is they have no collateral and are thus completely unbankable for traditional small business lending. We need a new default form of capital for entrepreneurs and we are trying to build it.”

Overall, people were generally appreciative of Earnest Capital and wished the company success. Here are some of the positive responses:
- “Hi Tyler, this is amazing! Going through the FAQ it seems, you're not investing in India right now! Would love to know if and when you do!”
- “Very cool, Tyler. Just applied. :)”
- “Congratulations on launching! Sounds like a nice compliment to IndieVC and TinySeed. I'm glad to see innovation here, interesting times!”
- “Awesome to see innovation from capital providers on the instrument in the wild. As a niche market founder wish option like Earnest existed when we were raising early financing.”

We recommend going through the entire thread on Hacker News; it makes for a very insightful conversation.

Related reads:
- A Quick look at ML in algorithmic trading strategies
- Why Google kills its own products
- Mary Meeker, one of the premier Silicon Valley investors, quits Kleiner Perkins to start her own firm.
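To make the payment mechanics above more tangible, here is a small hypothetical model of a Shared Earnings Agreement. The percentages, thresholds, and yearly figures are invented for illustration; they are not Earnest Capital's actual terms.

```python
# Hypothetical illustration of the Shared Earnings Agreement mechanics described
# above; the numbers and the simple annual model are made up, not Earnest's terms.

def shared_earnings(net_income, founder_salaries, salary_threshold,
                    share_pct, return_cap, paid_so_far):
    """Return the investor's payment for one period under a toy SEA model."""
    founder_earnings = net_income + max(0.0, founder_salaries - salary_threshold)
    payment = max(0.0, founder_earnings) * share_pct
    # Payments stop once the cumulative amount reaches the Return Cap.
    return min(payment, max(0.0, return_cap - paid_so_far))

investment = 150_000
return_cap = 4 * investment          # within the 3-5x range mentioned above
paid = 0.0
for year, (income, salaries) in enumerate([(40_000, 90_000), (120_000, 150_000),
                                           (300_000, 200_000)], start=1):
    p = shared_earnings(income, salaries, salary_threshold=100_000,
                        share_pct=0.30, return_cap=return_cap, paid_so_far=paid)
    paid += p
    print(f"year {year}: payment ${p:,.0f}, cumulative ${paid:,.0f} of ${return_cap:,}")
```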


Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Melisha Dsouza
14 Feb 2019
2 min read
On 12th February, Red Hat announced its plans to drop MongoDB from its Satellite system management solution. Satellite will now support only a single database: PostgreSQL.

The move was made after the development team decided that a relational database with rollback and transactions was necessary for the features needed in Pulp and Satellite. The team says that PostgreSQL is a better fit for the types of data and usage that Satellite requires, and that a single database backend will also help simplify Satellite's overall architecture along with supportability, backup, and disaster recovery. Users will not see any significant performance impact from the removal of MongoDB, nor will any Satellite features be affected. The embedded version of MongoDB will continue to be supported in the Satellite versions in which it has already been released, and the Satellite team will create a patch for any issue a user faces.

Newer versions of MongoDB that are licensed under the SSPL will not be used by Satellite. According to Dev Class, the concept of the SSPL has not been received well by the open source community. The Server Side Public License was MongoDB's response to cloud service providers taking the community edition of the database and offering it as a service to paying customers: anyone doing so must share the source code underlying the service. Following this, Red Hat had also dropped MongoDB from Red Hat Enterprise Linux (RHEL) 8, because, according to Tom Callaway, University Outreach team lead at Red Hat, the SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be ‘Free’ or ‘Open Source’ causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk”.

The specific timeline of the change has not been released; the announcement was made to make users aware of the change that is coming. Users can check the Satellite Blog to know more about this news.

Related reads:
- 4 reasons IBM bought Red Hat for $34 billion
- Red Hat announces full support for Clang/LLVM, Go, and Rust
- Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant

Prasad Ramesh
13 Feb 2019
3 min read
Yesterday, Mozilla announced a partnership with game developer Ubisoft to adopt Clever-Commit, an artificial-intelligence-based code assistant developed by Ubisoft La Forge. Ubisoft uses the assistant internally, and with this partnership Firefox will use it to try to find errors in its code. About 8,000 edits are made in every Firefox release by numerous developers, so using the assistant to catch bugs in those edits can have a large-scale effect on Firefox development.

The assistant combines data from the bug tracking system and the codebase. Clever-Commit analyzes changes as developers commit code to the Firefox codebase, then looks at previously committed code to draw comparisons and identify code that is likely to be buggy. The developer is notified if Clever-Commit thinks a commit is risky, which means a bug could be fixed before the commit lands. It can even suggest solutions to the bugs it finds. Firefox uses C++, JavaScript, and Rust, and Mozilla plans to use Clever-Commit with all of them to speed up development.

Clever-Commit is not open source, and there seem to be no immediate plans to make it freely available. But this ability to draw inferences from large code bases is not exclusive to Clever-Commit: Microsoft's IntelliCode in Visual Studio has examined many GitHub repositories for best coding practices, and IntelliSense can also be used to find problems in code in a way that is similar in spirit to Clever-Commit.

Sylvestre Ledru, head of Mozilla's French division, said in a blog post: “With a new release every 6 to 8 weeks, making sure the code we ship is as clean as possible is crucial to the performance people experience with Firefox. The Firefox engineering team will start using Clever-Commit in its code-writing, testing and release process. We will initially use the tool during the code review phase, and if conclusive, at other stages of the code-writing process, in particular during automation. We expect to save hundreds of hours of bug riskiness analysis and detection. Ultimately, the integration of Clever-Commit into the full Firefox developer workflow could help catch up to 3 to 4 out of 5 bugs before they are introduced into the code.”

Clever-Commit was originally shown by Ubisoft as Commit Assistant last year.

Related reads:
- Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
- Open letter from Mozilla Foundation and other companies to Facebook urging transparency in political ads
- The State of Mozilla 2017 report focuses on internet health and user privacy
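Clever-Commit itself is proprietary and Mozilla has not published its internals, but the general idea of scoring a commit's riskiness from repository history can be sketched with a toy heuristic. Everything below (the scoring formula, the field names, the thresholds) is invented for illustration and is not Clever-Commit's actual method.

```python
# Toy commit-risk heuristic, purely illustrative (not Clever-Commit's algorithm).
# Idea: files touched by many past bug-fix commits, plus large diffs, make a new
# commit more likely to deserve a second look before it lands.
from collections import Counter

def risk_score(commit_files, lines_changed, bugfix_history):
    """commit_files: files touched by the new commit.
    lines_changed: total added + removed lines.
    bugfix_history: iterable of files touched by past bug-fix commits."""
    fix_counts = Counter(bugfix_history)
    history_risk = sum(fix_counts[f] for f in commit_files)
    size_risk = lines_changed / 100          # arbitrary scaling
    return history_risk + size_risk

past_fixes = ["dom/events.cpp", "dom/events.cpp", "js/parser.cpp"]
score = risk_score(["dom/events.cpp", "docs/README.md"], lines_changed=240,
                   bugfix_history=past_fixes)
print(f"risk score: {score:.1f}")
print("needs extra review" if score > 3 else "looks routine")
```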


Lloyds Bank’s online services which were down due to DNSSEC issues have been restored!

Savia Lobo
12 Feb 2019
2 min read
Yesterday, many customers of the British commercial bank Lloyds faced problems that left them unable to log in to their accounts on the bank's website. According to Lloyds, the issue only affected Lloyds customers, not the group's Halifax and Bank of Scotland brands, and it did not affect app logins.

Unhappy with the glitch, customers started complaining on social media on Sunday evening: “Is your banking app/website down for maintenance? My internet is fine but I cannot access my bank on any device.”
https://twitter.com/StuartJAMJ/status/1094642514429206528

Addressing one of the user complaints on Twitter, a Lloyds Banking Group spokesperson said, “We are currently aware that some customers may be experiencing intermittent issues when trying to access their online banking service this morning.”
https://twitter.com/rachel_bassy/status/1094887320476663808
https://twitter.com/AskLloydsBank/status/1094905886672384000

According to the Lloyds website, its internet banking platform was undergoing maintenance between midnight and 6 am on Sunday, but a spokeswoman was unable to confirm whether there was any link to the later outage. Kevin Beaumont, a cybersecurity writer, tweeted that “the Lloyd's Bank have invalid DNSSEC setup and invalid serials. Also, Google's DNS servers (8.8.8.8 etc), widely used by people, reject the lookups as a result.”
https://twitter.com/GossiTheDog/status/1094916766680137733

The bank replied to Beaumont, neither confirming nor denying his view: “We're aware that some of our customers are experiencing problems accessing our online services. We're working to resolve the issue as quickly as possible, and apologise for any inconvenience caused.”
https://twitter.com/AskLloydsBank/status/1094918360909918209
https://twitter.com/GossiTheDog/status/1094940073773142016

The bank has also been replying to other customers on Twitter, apologizing for the inconvenience and assuring them that the issue would be resolved soon.
https://twitter.com/SarahLJx/status/1094886710952030208
https://twitter.com/AskLloydsBank/status/1094906298775285760

In the latest update, Lloyds Bank tweeted that “the intermittent issues some customers experienced yesterday with our Internet Banking service has been resolved.”
https://twitter.com/AskLloydsBank/status/1095243108734906368

To know more about this news, visit Lloyds Bank's Twitter thread.

Related reads:
- Wells Fargo’s online and mobile banking operations suffer a major outage
- Mandrill email API outage unresolved; leaving users frustrated
- Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
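A broken DNSSEC chain shows up as validation failures at validating resolvers such as Google's 8.8.8.8, which return SERVFAIL instead of an answer. The sketch below uses the dnspython library to illustrate the kind of check involved; the domain is just an example and the exception behaviour can vary between dnspython versions, so treat it as an approximation rather than a diagnostic tool.

```python
# Rough sketch of checking whether a validating resolver accepts a domain's
# records (requires dnspython 2.x: pip install dnspython).
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]          # a validating public resolver
resolver.use_edns(0, dns.flags.DO, 1232)    # ask for DNSSEC-related records

try:
    answer = resolver.resolve("lloydsbank.com", "A")  # domain to check
    validated = bool(answer.response.flags & dns.flags.AD)
    print("resolved; AD (authenticated data) flag set:", validated)
except dns.resolver.NoNameservers as exc:
    # A validating resolver returns SERVFAIL for records that fail DNSSEC
    # validation; dnspython typically surfaces that as NoNameservers here.
    print("lookup rejected (possible DNSSEC validation failure):", exc)
```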


Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts

Natasha Mathur
12 Feb 2019
3 min read
Uber released a new open source deep learning toolbox called Ludwig yesterday, to make training and testing deep learning models easier for non-experts. “By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures rather than data wrangling”, states the Uber team.

Uber has been working on Ludwig for the past two years to simplify the use of deep learning models in projects, and has used the toolkit for several of its own projects, such as its Customer Obsession Ticket Assistant (COTA), information extraction from driver licenses, and food delivery time prediction. Ludwig comes with a set of model architectures that can be combined to develop an end-to-end model for a given use case.

Main highlights of Ludwig
- No need to write code: with Ludwig, you don't need any coding skills to train a model and use it for obtaining predictions.
- Generality: Ludwig uses a new data-type-based approach to deep learning model design, making the tool usable across a variety of use cases.
- Flexibility: Ludwig offers extensive control over model building and training, while remaining user-friendly, especially for beginners.
- Extensibility: it is easy to add new model architectures and new feature data types.
- Understandability: Ludwig offers standard visualizations to help users understand the performance of their deep learning models and compare their predictions.

Apart from being flexible and accessible, Ludwig comes with additional benefits for non-programmers, including a set of command-line utilities for training, testing models, and obtaining predictions. It also offers a programmatic API, allowing users to train and use a model with only a few lines of code. Moreover, Ludwig comprises other tools that help with evaluating models, comparing their performance and predictions via visualizations, and extracting model weights and activations from them.

To train a deep learning model, users give Ludwig a tabular file (such as a CSV) containing the data and a YAML (YAML Ain't Markup Language) configuration file that specifies which columns of the tabular file are input features and which are output target variables. The simplicity of this configuration file enables faster prototyping and brings hours of coding down to just a few minutes. Users can also visualize their training results: Ludwig creates a results directory containing the trained model with its hyperparameters, as well as summary statistics of the training process, which can be explored with several options from the visualization tool.

“We decided to open source Ludwig because we believe that it can be a useful tool for non-expert machine learning practitioners and experienced deep learning developers and researchers alike”, states the Uber team.

For more information, check out the official Ludwig blog post.

Related reads:
- Uber releases AresDB, a new GPU-powered real-time Analytics Engine
- Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
- Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information
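Here is a hedged sketch of what the programmatic API usage looks like. The exact argument names have changed across Ludwig releases (for example, the data argument was once data_csv), and the column names and CSV file below are invented, so treat this as illustrative rather than exact.

```python
# Hedged sketch of Ludwig's programmatic API; argument names may differ by version.
from ludwig.api import LudwigModel

# The same structure Ludwig accepts from the YAML configuration file:
config = {
    "input_features":  [{"name": "review_text", "type": "text"}],
    "output_features": [{"name": "sentiment",   "type": "category"}],
}

model = LudwigModel(config)
train_stats = model.train(dataset="reviews.csv")        # column names must match the CSV
predictions = model.predict(dataset="new_reviews.csv")  # returns the model's predictions
print(train_stats)
```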

The U.S. just launched the American AI Initiative to prioritize AI research and development

Bhagyashree R
12 Feb 2019
4 min read
On Monday, the US president signed an executive order introducing a program named the "American AI Initiative". With this initiative, the US government joins the list of governments that have issued a broad AI strategy, including China, France, Canada, and South Korea.

What is the aim of the American AI Initiative?

Though no specific details have been revealed yet, officials said a more detailed plan will be shared over the next six months. A fact sheet issued by the White House listed the following key responsibilities for federal agencies, aimed at boosting the position of the US in the AI industry:

Providing funds, programs, and data to support AI research and commercialization.
Urging agencies in areas such as health and transportation to share data that could be used in AI research, while maintaining privacy.
Taking steps to prepare US workers to adjust to jobs that have been automated by AI or will be automated in the future.

Under this initiative, federal agencies will need to prioritize AI when allocating their R&D funding. They will also be required to develop a set of national regulatory standards for AI that address the various ethical issues the technology raises.

Trump said in a statement, "Continued American leadership in Artificial Intelligence is of paramount importance to maintaining the economic and national security of the United States." Many speculate that this national AI strategy is a response to the bitter trade war between the US and China, as well as to concerns about people losing their jobs to advances in AI.

Lynne Parker, the Assistant Director of AI at the White House Office of Science and Technology Policy, stated, "AI has really become a transformative technology that's changing industries, markets, and society. There are a number of actions that are needed to help us harness AI for the good of the American people."

How are AI experts, tech companies, and others reacting to this initiative?

The initiative drew mixed reactions from the public, industry leaders, AI experts, and policymakers. Roger Wicker, the Senate Commerce Committee Chairman, stated, "Artificial Intelligence has great potential to benefit the American people while enhancing our nation's security and growing our economy. Today's executive order will ensure that the United States remains a leader in emerging technologies and scientific development."

Tech companies see the order as a step toward a comprehensive national strategy on AI. In a statement to The Hill, IBM said, "Today's order is a critical step in the launch of America's national AI strategy. We commend and welcome the order's emphasis on specific priorities that IBM had recommended, such as the ethical advancement of AI, expanding 21st-century apprenticeship opportunities to build an AI-ready workforce, leveraging government data to accelerate AI development that can deliver shared prosperity, and prioritizing AI in federal research and development."

One Hacker News user said, "I hope it's an incredibly significant amount of money considering how important of an issue this is. Out of everything else that's going on, AI will have the greatest impact on the future." Others think the initiative is just PR by the Trump administration: "It won't mean anything unless it is backed by piles and piles of money for research funds, which I highly doubt. I think this is more a PR show by trump to make it appear as if the US is countering China."

Kate Crawford, co-director of the AI Now research institute, told Science, "The White House's latest executive order correctly highlights AI as a major priority for U.S. policymaking." But she is concerned that the executive order is mainly focused on industry and lacks input from academia and civic leaders. Erik Brynjolfsson, Director of the MIT Center for Digital Business, said that along with driving AI research and development, US policymakers must also take into account the values behind the technology and how it is implemented. He said, "If we want Western values to thrive, we need to play a role in maintaining and even extending the technological strength we've long had."

Read more in detail about the American AI Initiative in the fact sheet shared by the White House.

EU legislators agree to meet this week to finalize on the Copyright Directive
The US to invest over $1B in quantum computing, President Trump signs a law
Google slams Trump's accusations, asserts its search engine algorithms do not favor any political ideology

Reddit has raised $300 million in a new funding round led by China’s Tencent

Sugandha Lahoti
12 Feb 2019
3 min read
Yesterday, Reddit raised $300 million in a new Series D funding round led by China's Tencent, a deal that values the company at $3 billion as it looks to compete for advertising dollars with tech giants like Google and Facebook. To date, Reddit has raised $550 million in total funding. Other investors in the round include Sequoia, Fidelity, Andreessen Horowitz, Quiet Capital, VY and Snoop Dogg.

Reddit CEO Steve Huffman, in an interview with CNBC, said, "One of the things that's been very important to us is that we can now assure advertisers that you are going to have a positive experience on Reddit and potentially even a new experience, a new way of connecting with customers."

The investment makes sense: video games are one of the more popular categories on Reddit, and Tencent invests heavily in video game makers. Tencent currently owns 40 percent of "Fortnite" creator Epic Games. "They are investors in lots of video games companies," Huffman said. "And video games are one category that's really popular on Reddit."

With this investment round, Huffman said he hopes to compete in online advertising with Facebook and Google. "When we are talking about competing for ad dollars, of course, we are talking about Facebook and Google, who take up the vast majority of ad spend."

Not everyone is pleased, however. Some Redditors are already protesting the funding because Tencent is a Chinese company, while Reddit, which allows users largely free, unedited speech, is blocked in China. Others are speculating about how China might gain the upper hand over the US in a new cold war. A comment on Hacker News reads, "Tencent, a Chinese firm has meaningful ownership of American youth. 12% Snap, 7.5% Spotify, 40% Epic Games, 100% Riot Games, 100% Supercell, 5% Reddit. Hollywood has also been moving in this direction, with a lot of Chinese investment in the studios, and blockbusters adding special scenes with Chinese actors and locations. What does it mean for America when it's no longer the owner or creator of culture? It's historically one of our largest (and most important) exports. I'm not sure if that claim to fame is a net positive for the world, but the changing of this guard will certainly have a local impact."

Other users expressed concern that the move may slowly lead to the suppression of anti-China content on Reddit. "The only issue I can see from ownership is if they start to censor the platform. I doubt they'll do any overt censoring (eg. "no talking about what happened at Tiananmen Square in 1989"), but I wouldn't be surprised if they do subtle manipulation like silently deemphasizing anti-China content, or emphasizing anti-western content (eg. infighting, failure of western democracy). The latter probably would even be good for the site (in terms of engagement) as outrage drives clicks", reads another comment on Hacker News.

Reddit posts an update to FireEye's report on suspected Iranian influence operation
Reddit takes stands against the EU copyright directives; greets EU redditors with 'warning box'
What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019.