
Tech News - Data

1208 Articles

PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24

Natasha Mathur
10 Aug 2018
2 min read
The PostgreSQL team released an update yesterday to versions 10.5, 9.6.10, 9.5.14, 9.4.19, and 9.3.24 of its database system. The latest update focuses on fixing two security issues and bugs detected in the past three months.

PostgreSQL is a popular open source relational database management system known for its reliability, correctness, robustness, and performance. It runs on all major operating systems such as Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. Let's discuss the highlights of the recent update.

Security Issues

The recent release focuses on fixing two major security issues:

Certain host connection parameters defeat client-side security defenses
There was an internal issue in libpq, a client connection API for PostgreSQL: not all connection state variables were reset when trying to reconnect. Specifically, the state variable that helps determine whether or not a password is needed for a connection was not reset. This allowed users of features requiring libpq, namely the dblink and postgres_fdw extensions, to log in to servers they should not be able to access. To check if your database has either extension installed, run the following from your PostgreSQL shell:

\dx dblink|postgres_fdw

Memory disclosure and missing authorization in INSERT ... ON CONFLICT DO UPDATE
An attacker able to issue CREATE TABLE can read arbitrary bytes of server memory with the help of an upsert (INSERT ... ON CONFLICT DO UPDATE) query. By default, any user can exploit this. In addition, a user with INSERT privileges and an UPDATE privilege on at least one column of a given table can update other columns with the help of a view and an upsert query.

Major Bug Fixes

An issue in VACUUM that could lead to data corruption in certain system catalog tables has been fixed.
Several performance improvements have been made to write-ahead log (WAL) replay.
The SQL-standard FETCH FIRST syntax has been fixed to allow parameters ($n), as the standard expects.
A performance regression related to POSIX semaphores has been fixed for multi-CPU systems running Linux or FreeBSD.
libpq has been fixed for cases where hostaddr is used.

For complete information on other bug fixes and improvements, check out the official PostgreSQL release notes.

Read next:
Handling backup and recovery in PostgreSQL 10 [Tutorial]
How to perform data partitioning in PostgreSQL 10
6 index types in PostgreSQL 10 you should know
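The \dx check above can also be scripted outside psql. Below is a minimal sketch, assuming a database reachable with psycopg2 (the connection details are placeholders, not from the advisory), that queries the pg_extension catalog for the two extensions named in the advisory:

```python
# Minimal sketch (not from the PostgreSQL announcement): query the pg_extension
# catalog from Python to see whether dblink or postgres_fdw is installed.
# The connection parameters below are placeholders for your own database.
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="postgres", host="localhost")

try:
    with conn.cursor() as cur:
        # pg_extension lists every extension installed in the current database
        cur.execute(
            "SELECT extname, extversion FROM pg_extension "
            "WHERE extname IN ('dblink', 'postgres_fdw');"
        )
        rows = cur.fetchall()
        if rows:
            print("Potentially affected extensions installed:", rows)
        else:
            print("Neither dblink nor postgres_fdw is installed.")
finally:
    conn.close()
```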


Google’s new facial recognition patent uses your social network to identify you!

Melisha Dsouza
10 Aug 2018
3 min read
Google is making its mark in facial recognition technology. After two successful facial identification patents in August 2017 and January 2018, Google is back with another charter. This time it's huge, and it plans to use machine learning technology for facial recognition of publicly available personal photos on the internet.

It's no secret that Google can crawl trillions of websites at once. Using this as an advantage, the new patent allows Google to source pictures and identify faces from personal communications, social networks, collaborative apps, blogs and much more!

Why is facial recognition gaining importance?

The internet is buzzing with people clicking and uploading their images. Whether it be profile pictures or group photographs, images on social networks are all the rage these days. Apart from this, facial recognition also comes in handy while performing secure banking and financial transactions. ATMs and banks use this technology to make sure the user is who he/she claims to be. From criminal tracking to identifying individuals in huge crowds, facial recognition has applications everywhere!

Clearly, Google has been taking full advantage of this tech. First came the "Reverse Image Search" system, which allowed users to upload an image of a public figure to Google and get back a "best guess" about who appears in the photo. Now, with the new patent, users can identify photos of less famous individuals. Imagine uploading a picture of a fifth-grade friend and coming back with the result of his/her email ID, occupation or, for that matter, where they live!

The Workings of the Google Brain

The process is simple and straightforward:

First, the user uploads a photo, screenshot or scanned image.
The system analyzes the image and comes up with both visually similar images and a potential match using advanced image recognition.
Google finds the best possible match based partially on the data it pulled from your social accounts and other collaborative apps, plus the aforementioned data sources.

The process of recognizing an image adopted by Google (Source: CBInsights)

While all of this does sound exciting, there is a dark side left to be explored. Imagine you are out going about your own business. Someone you don't even know happens to click your picture. This could later be used to find out all your personal details, like where you live, what you do for a living and what your email address is, all because everything is available on your social media accounts and on the internet these days! Creepy much?

This is where basic ethics and privacy concerns come into play. The only solace here is that the patent states that, in certain scenarios, a person would have to opt in to have his/her identity appear in search results.

Need to know more? Check out the perspective on thenextweb.com.

Read next:
Admiring the many faces of Facial Recognition with Deep Learning
Google's second innings in China: Exploring cloud partnerships with Tencent and others
Google's Smart Display – A push towards the new OS, Fuchsia


Facebook patents its news feed filter tool to provide more relevant news to its users

Natasha Mathur
09 Aug 2018
3 min read
Facebook was recently granted a patent titled "Selection and Presentation of News Stories Identifying External Content to Social Networking System Users" on July 31st, 2018. It aims to analyze user data to curate a personalized news feed, and to give users control over the kind of news they want to see.

Facebook wants to add a Filter option to its news feed. This will make it easier for users to find relevant news items. As per the patent application, "the news stories may be filtered based on filter criteria allowing a viewing user to more easily identify new stories of interest". For instance, a filter can be added to view stories associated with another user or with a particular news source. You can also add a keyword filter to get all the stories related to that specific keyword.

Facebook news feed filter tool

There are a lot of groups and pages on Facebook which help reflect a user's interests. The kind of content that a user posts also says a lot about his/her preferences. As there is a lot of user data present, Facebook automatically analyzes the user's profile to optimize the news feed as per the user's choices.

There is also a ranking criterion involved when it comes to filtering the news feed. The patent reads, "news stories are scored and ranked based on their scores. News stories may be ranked based on the popularity of the news story among users of the social networking system. Popularity may be based on the number of views, likes, comments, shares or individual posts of the news story in the social networking system." News stories can also be ranked in chronological order.

Facebook news feed filter tool patent

Once Facebook is done analyzing the user profile, filtering the feed based on the filter criteria, and ranking the stories based on the ranking criteria, a newly customized news feed is generated and presented to the user.

Facebook has been taking measures to curb fake news in its feed, and the news filter tool is expected to help further. It will prevent irrelevant and fake news from appearing in users' news feeds, as users can choose to see news only from trusted sources. In fact, Facebook recently acquired Bloomsbury AI to fight fake news. Additionally, the latest news sources, accounts, groups, and pages will also be recommended to users based on the data analyzed.

With so much data floating around on Facebook feeds, this patent idea seems like a much-needed one. There are no details currently on when or if this feature will hit the Facebook feed. What do you think about Facebook's news feed filter tool patent? Let us know in the comments below.

Read next:
Facebook launched new multiplayer AR games in Messenger
Facebook launches a 6-part Machine Learning video series
Facebook open sources Fizz, the new generation TLS 1.3 Library
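To make the ranking criterion concrete, here is a toy Python sketch of scoring stories by the popularity signals the patent lists (views, likes, comments, shares); the weights and story data are entirely made up and are not Facebook's actual model:

```python
# A toy illustration (made-up weights, not Facebook's algorithm) of the ranking
# criterion described in the patent: stories are scored by popularity signals
# such as views, likes, comments and shares, then sorted by score.
stories = [
    {"title": "Story A", "views": 1200, "likes": 300, "comments": 40, "shares": 25},
    {"title": "Story B", "views": 800,  "likes": 550, "comments": 90, "shares": 60},
    {"title": "Story C", "views": 5000, "likes": 120, "comments": 10, "shares": 5},
]

WEIGHTS = {"views": 0.001, "likes": 0.5, "comments": 1.0, "shares": 2.0}  # hypothetical

def popularity_score(story):
    return sum(WEIGHTS[signal] * story[signal] for signal in WEIGHTS)

ranked = sorted(stories, key=popularity_score, reverse=True)
for story in ranked:
    print(story["title"], round(popularity_score(story), 2))
```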


SingularityNET and Mindfire unite talents to explore artificial intelligence

Prasad Ramesh
09 Aug 2018
3 min read
SingularityNET is collaborating with Mindfire to pool their best talent and work on something similar to Mindfire Mission 1. That mission was dedicated to "cracking the brain code", to understand more about how the human brain works. The two organizations hope to combine their talents to work on AI services and education, and also towards combining their blockchain tokens.

SingularityNET is an AI solution platform powered by a decentralized protocol that lets anyone create, share, and monetize AI services. Mindfire is focused on understanding the building blocks of AI that form human-level intelligence. Mindfire believes, "The partnership between SingularityNET and the Mindfire Foundation will grow the talent pool of both entities and increase the productivity not only of AI services but also of the number of relevant insights in human-level artificial intelligence."

Together they plan to target three key areas:

Talent: choose the best talents, leading minds from the pool in different AI disciplines.
AI services: build a decentralized hub for AI services where talents from Mindfire can work on and use SingularityNET's platform.
AI education: implement practical courses and lectures in AI and its business applications and opportunities.

Ben Goertzel, founder of SingularityNET, says, "The SingularityNET decentralized AI platform is open to any possible approach to AI or any complex systems". Even though Ben's approach is less "brain-focused" than Mindfire's, he believes there is great potential for collaboration.

Mindfire, with SingularityNET, has launched a call for applications for its successive missions, Mission-2 and Mission-3. These missions focus on prototype development, including drones, robots, and other carrier systems for AI. Mission-2 is planned for November 11-16, 2018, and there are 10 planned missions to come. Head to their website to learn more and apply!

Mindfire also announced the publication of a completely revised white paper, set for release on August 15, 2018. The revised white paper will detail the key functionality of its ERC20 protocol based token, MFT. It will also elaborate on Mindfire's business model and outline the Mindfire token sale, including a reward campaign. Mindfire tweeted on July 31:

https://twitter.com/mindfire_global/status/1024275219391959040

Mindfire's talents can collaborate to create applications or products on the SingularityNET platform and leverage it. There's also potential for SingularityNET to connect Mindfire with people building applications or doing research, and SingularityNET can direct various AI problem areas at Mindfire's talent.

Ben believes they may be able to build mechanisms where the AGI token of SingularityNET can convert automatically to Mindfire's MFT token. The focus here is on the exchangeability of these tokens to promote the development of SingularityNET's decentralized protocol. This is a future area of collaboration that can lead to two-way incentivization benefiting both companies.

You can watch Ben, CEO and Chief Scientist of SingularityNET, express his vision for the collaboration in this YouTube video. To know more, visit the official Mindfire blog.

Read next:
Attention designers, Artificial Intelligence can now create realistic virtual textures
Top languages for Artificial Intelligence development
7 Popular Applications of Artificial Intelligence in Healthcare


Uber open sources its large scale metrics platform, M3 for Prometheus

Savia Lobo
08 Aug 2018
4 min read
Yesterday, Uber open-sourced its robust and scalable metrics infrastructure, M3, for Prometheus, a popular monitoring and alerting solution. Uber has been using M3 for a long time to access metrics on its backend systems. By open sourcing M3 as a remote storage backend for Prometheus, Uber wants others in the broader community to benefit from its metrics platform.

Prior to releasing M3, Uber released M3DB, the scalable storage backend for M3. M3DB is a distributed time series database that can be used for storing real-time metrics at long retention periods.

Along with M3, Uber also open sourced M3 Coordinator, a bridge that users can deploy to access the benefits of M3DB and Prometheus. The M3 Coordinator performs downsampling, ad hoc retention, and aggregation of metrics using retention and rollup rules. This helps in applying specific retention and aggregations to subsets of metrics on the go. The rules for the process are stored in etcd, which runs embedded in the binary of an M3DB seed node.

M3 for Prometheus

Although Prometheus is a popular monitoring and alerting solution, its scalability and durability are limited by single nodes. The M3 metrics platform provides a turnkey, scalable, and configurable multi-tenant store for Prometheus metrics.

(Source: Uber Engineering)

Before using M3, Uber emitted metrics to a Graphite stack, which stored them using the Whisper file format in a sharded Carbon cluster. Uber then made use of Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. However, expanding the Carbon cluster required a manual resharding process and, due to a lack of replication, any single node's disk failure caused permanent loss of its associated metrics. This solution was not worth continuing as Uber kept expanding, which led them to build M3, a system that provides fault-tolerant metrics ingestion, storage, and querying as a managed platform. Released in 2015, M3 now houses over 6.6 billion time series.

Features of M3 include:

It optimizes every part of the metrics pipeline, giving engineers improved storage and resulting in less hardware usage.
It ensures that data is as highly compressed as possible to reduce the hardware footprint, further optimizing Gorilla's TSZ compression to compress float64 values, known as M3TSZ compression.
It maintains a lean memory footprint for storage to avoid memory becoming a bottleneck, since a significant portion of each data point can be "write once, read never."
To speed up access time, a Bloom filter and index summary per shard time window block are kept in mmap'd memory. This allows ad hoc queries of up to 100,000 unique time series in a single query over long retention periods (in some cases, spanning years of retention).
With M3, one can avoid compactions where possible, including on the downsampling path. This further increases the utilization of host resources for more concurrent writes and provides steady write/read latency.
One can also use a native design for time series storage that does not require vigilant operational attention to run with a high write volume.

The M3 architecture

The M3 architecture includes a single global view of all metrics. With such a global view, upstream consumers need not navigate routing, which increases the overall simplicity of metrics discoverability. For workloads that fail over between regions, or workloads sharded across regions, the single global view makes it much easier to sum and query metrics across all regions in a single query. This lets users see all operations of a specific type globally, and look at longer retention to view historical trends in a single place.

How can one achieve the single global view?

To achieve this single-pane view, metrics are written in M3 to local regional M3DB instances. In this setup, replication is local to a region and can be configured to be isolated by availability zone or rack. Queries fan out to both the local region's M3DB instances and coordinators in remote regions where metrics are stored, returning compressed M3TSZ blocks for matched time series wherever possible. Uber engineers plan to further upgrade M3 to push query aggregations to remote regions to execute before returning results, as well as to the local M3DB storage node wherever possible.

Read more about M3 in detail on the official Uber Engineering blog post.

Read next:
China's Baidu launches Duer OS Prometheus Project to accelerate conversational AI
Log monitoring tools for continuous security monitoring policy [Tutorial]
Monitoring, Logging, and Troubleshooting


NEC Corp’s NeoFace to bring facial recognition to 2020 Tokyo Olympics

Prasad Ramesh
08 Aug 2018
3 min read
This year at the FIFA World Cup, researchers and scientists tried to use Artificial Intelligence (AI) to predict the outcomes of all 64 matches. That did not work out well, owing to probability and human nature, which cannot be predicted. Now we see another implementation of AI in a major global sports event. This time it's not to predict outcomes, but to identify players with facial recognition.

In 2020, facial recognition will be widely used for the first time at an Olympic event to identify athletes. The Japanese IT firm NEC Corp will provide the facial recognition system, which will also be used at the 2020 Paralympics; it was tested at the Rio 2016 Olympics. The system is built around an AI engine called NeoFace.

In addition to athletes, the system will be used to identify volunteers, media, and other staff, around 300,000 people across more than 40 venues. The Olympics will begin on July 24, 2020.

People attending the event are expected to submit their data in advance, before the Olympics start. It will be approved, stored in a database, and used to identify them at entry. The system will link a person's facial data with an IC card carried by them, so entry is permitted only if the facial data stored in the database matches the data stored in the IC card carried by the person.

Tokyo 2020 has security challenges since the venues are not that large, which will result in long wait times before the players can get into the venues. With the summer heat in Tokyo, this presents a problem for the players. The events will be spread out across the metropolitan area, and people will have to authenticate themselves at every entry. The NeoFace system is being introduced to address these problems at the Tokyo Olympics venues.

The facial recognition system is aimed at strengthening security and minimizing waiting times for athletes. NeoFace will also help with identifying forged ID cards and help athletes avoid the stress of waiting in long lines for identification. NEC has substantial experience in the facial recognition field, and its technology has been used at airports for several years. Tokyo 2020 may be the most heavily secured Olympics yet.

For more information, you can check out the coverage by Reuters.

Read next:
Microsoft's Brad Smith calls for facial recognition technology to be regulated
Amazon is selling facial recognition technology to police
Admiring the many faces of Facial Recognition with Deep Learning

Facebook launches a 6-part Machine Learning video series

Sugandha Lahoti
08 Aug 2018
2 min read
Facebook has launched a 6-part video series dedicated to providing practical tips on how to apply machine learning capabilities to real-world problems. The Facebook Field Guide to Machine Learning was developed by the Facebook ads machine learning team.

The development process of an ML model (Source: Facebook research)

The video series covers how the entire development process works. This includes what happens during the training of machine learning models and what happens before and after the training process in each step. Each video also includes examples and stories of non-obvious things that can be important in an applied setting.

The video series breaks down the machine learning process into six steps:

Problem definition: It is necessary to have the right setup before you go about choosing an algorithm. The first video talks about how to best define your machine learning problem before going into the actual process. You can save almost a week's worth of time by spending just a few hours at the definition stage.
Data: This tutorial teaches developers how to prepare the training data. The training data is a powerful variable in creating high-quality machine learning systems.
Evaluation: The third lesson talks about the steps to evaluate the performance of your machine learning model.
Features: The fourth tutorial explains examples of various features such as categorical, continuous and derived features, and describes how to choose the right feature for the right model. The video also talks about changing features, feature breakage, leakage, and coverage.
Model: The next lesson describes how to choose the right machine learning model for your data and find the algorithm to implement and train that model. It also offers tips and tricks for picking, tuning and comparing models.
Experimentation: The final tutorial covers experimentation, which is about making your experiments actionable. A large part of the tutorial is dedicated to the difference between offline and online experimentation.

The entire video series is available on the Facebook blog for you to watch.

Read next:
Microsoft start AI School to teach Machine Learning and Artificial Intelligence
Soft skills every data scientist should teach their child
Google introduces Machine Learning courses for AI beginners


LedgerConnect: A blockchain app store by IBM, CLS, Barclays, Citi and 7 other banks is being trialled

Prasad Ramesh
07 Aug 2018
3 min read
Blockchain is an open, decentralized database and the underlying technology for the popular cryptocurrency Bitcoin. Now, banks and other financial institutions want to apply it to financial transactions. The recently launched IBM blockchain platform, LedgerConnect, is aimed at the financial industry and banking sectors.

What is IBM blockchain?

IBM, with CLS, a foreign exchange financial group, launched LedgerConnect a week ago. It is a proof of concept blockchain platform designed for companies that provide financial services. Its aim is to apply blockchain technology to a number of challenge areas that are currently slow, such as tracking a paperwork trail, know your customer (KYC) processes, collateral management, and trades. So far, nine financial companies, including banks like Barclays and Citi, are involved in the trials. With everything being online, delays will likely be reduced by a large margin. According to an IBM report: "With IBM Blockchain, banks can create secure, low-cost and high volume cross-border payments without sacrificing margins."

Why is it important?

The IBM website states that 91% of banks are investing in blockchain solutions by 2018, and that 66% of institutions expect to be in production and running at scale with blockchain. All the transactions being on the same network makes everything much easier. An added advantage is that blockchain is known to be tamper-proof, so overall security and speed will increase significantly. While the blockchain used in Bitcoin is public, the one used by large companies would be private, which makes sense, as you wouldn't want everyone's banking and transaction info publicly available everywhere. To further understand why blockchains are great for banking applications, read our article, 15 ways to make Blockchains scalable, secure and safe!

Impact of blockchain in banking and finance

Today, banks work with manual transactions and a considerable amount of paperwork, so transactions like trades and loans take a lot of time, amounting to delays. Cross-border payments are costly and time-consuming. Moreover, making customers provide identification frequently is tedious and decreases customer satisfaction. IBM blockchain aims to address all these pain areas. As the platform is online, blockchain-based smart contracts automatically store information about transactions in real time. As a result, there won't be discrepancies in the data shown in different locations. Transactions on a blockchain are identified uniquely with a hash (see the short illustrative sketch at the end of this piece), and KYC info will also be more secure since it will be stored in the blockchain, which is known for being tamper-proof.

Source: IBM infographic

LedgerConnect tries to create a network of multiple providers that can offer their applications to be deployed on this platform. Large banks and organizations can come and use those applications, creating an app-store-like ecosystem. The applications in the LedgerConnect "store" are built on Hyperledger Fabric. The project is expected to launch in early 2019 but needs approvals from central banks. To know more about Hyperledger, take a look at our article Hyperledger: The Enterprise-ready Blockchain.

Implementing blockchain in finance and banking will enable simplicity and operational efficiency while also enhancing the end customer experience. Many of the current challenges can be solved, and the time involved can be brought down to minutes from hours. To know more, visit the IBM website.
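As a toy illustration of the tamper-evidence mentioned above (this is not IBM's or Hyperledger Fabric's implementation, just a minimal sketch), each record can carry the hash of the previous one, so altering any past transaction breaks every hash that follows it:

```python
# Toy illustration only -- not IBM's or Hyperledger Fabric's implementation.
# Each record stores the hash of the previous one, so altering any past
# transaction changes every hash that follows it and the tampering is evident.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

ledger = []
prev_hash = "0" * 64
for tx in [{"from": "A", "to": "B", "amount": 100},
           {"from": "B", "to": "C", "amount": 40}]:
    block = {"tx": tx, "prev_hash": prev_hash}
    prev_hash = block_hash(block)
    ledger.append((block, prev_hash))

# Tamper with the first transaction: its stored hash no longer matches.
ledger[0][0]["tx"]["amount"] = 1000000
assert block_hash(ledger[0][0]) != ledger[0][1]
print("Tampering detected: recomputed hash differs from the recorded one.")
```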
Read next:
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Oracle makes its Blockchain cloud service generally available
Blockchain can solve tech's trust issues – Imran Bashir


Facebook, Apple, Spotify pull Alex Jones content

Richard Gall
06 Aug 2018
4 min read
Social media platforms have come under considerable criticism for playing host to controversial media outlets for as long as 'fake news' has been in the public lexicon. But over the past week, major actions against Alex Jones' content channels suggest that things might be changing. Apple has pulled 5 of Jones' 6 podcasts from iTunes (first reported by Buzzfeed News), while hours later on Monday 6 August Facebook announced it was removing four of Jones' pages for breaching the platform's content guidelines.

Alongside Facebook's and Apple's actions, Spotify also made the decision to remove Jones' content from the streaming platform and revoke his ability to publish "due to repeated violations of Spotify's prohibited content policies", according to a Spotify spokesperson.

This news comes just weeks after YouTube removed a number of Infowars videos over 'hate speech' and initiated a 90 day ban on Infowars broadcasting live via YouTube.

Unsurprisingly, the move has come under attack from those who see it as an example of censorship. Even people critical of Jones' politics have come out to voice their unease:

https://twitter.com/realJoeBarnes/status/1026466888744947721

Elsewhere, however, the move is viewed positively, with commentators suggesting social media platforms are starting to take responsibility for the content published on their systems.

https://twitter.com/shannoncoulter/status/1025401502033039362

One thing that can be agreed is that the situation is a little confusing at the moment. And although it's true that it's time for Facebook and other platforms to take more responsibility for what they publish, there are still issues around governance and consistency that need to be worked through and resolved.

Facebook's action against Alex Jones - a recent timeline

On July 27, Alex Jones was hit with a 30 day suspension by Facebook after the company removed 4 videos from its site that contravened its content guidelines. However, as numerous outlets reported at the time, this ban only affected Jones personally. His channels (like The Alex Jones Channel and Infowars) weren't impacted.

However, those pages that weren't hit by Jones' personal ban have now been removed by Facebook. In a post published August 6, Facebook explained: "...we removed four videos on four Facebook Pages for violating our hate speech and bullying policies. These pages were the Alex Jones Channel Page, the Alex Jones Page, the InfoWars Page and the Infowars Nightly News Page..."

The post also asserts that the ban is about violation of community standards, not 'false news': "While much of the discussion around Infowars has been related to false news, which is a serious issue that we are working to address by demoting links marked wrong by fact checkers and suggesting additional content, none of the violations that spurred today's removals were related to this."

Apple's action against Alex Jones

Apple's decision to remove 5 of Alex Jones' podcasts is, according to Buzzfeed News, "one of the largest enforcement actions intended to curb conspiratorial news content by a technology company to date." Like Facebook, Apple's decision was based on the content's "hate speech" rather than anything to do with 'fake news'. An Apple spokesperson explained to Buzzfeed News: "Apple does not tolerate hate speech, and we have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users... Podcasts that violate these guidelines are removed from our directory making them no longer searchable or available for download or streaming. We believe in representing a wide range of views, so long as people are respectful to those with differing opinions."

Spotify's action against Alex Jones' podcasts

Spotify removed all episodes of The Alex Jones Show podcast on Monday 6 August. This follows the music streaming platform pulling a number of individual episodes of Jones' podcast at the beginning of August. This appears to be a consequence of Spotify's new content guidelines, updated in May 2018, which prohibit "hate content."

The takeaway: there's still considerable confusion over content

What this debacle shows is that there's confusion about how social media platforms should deal with content they effectively publish. Clearly, the likes of Facebook are trying to walk a tightrope, and that's going to take some time to resolve. The broader question is not just whether we want to police the platforms billions of people use, but how we do that as well. Arguably, social media is at the center of today's political struggles, with many platforms unsure how to manage the levels of responsibility that have landed on their algorithms.

Read next:
Time for Facebook, Twitter and other social media to take responsibility or face regulation
Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call
Spotify has "one of the most intricate uses of JavaScript in the world," says former engineer


OpenAI Five bots beat a team of former pros at Dota 2

Natasha Mathur
06 Aug 2018
3 min read
Back in June, OpenAI Five, the artificial intelligence bot team, smashed amateur humans at the video game Dota 2. This time it set a completely different standard by beating semi-professional players at Dota 2 yesterday. This was part of an effort to benchmark the progress of the bots so far, as the OpenAI team plans to beat a team of top professionals at The International Dota 2 championship, taking place from August 20 to 25.

As mentioned in the OpenAI blog, "Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota's annual $40M prize pool (the largest of any esports game)". It requires its players to have fast-twitch reflexes and strong knowledge of game strategies, along with solid teamwork. OpenAI Five fights as a group and consists of five neural networks.

The event started off with OpenAI Five playing warm-up games with the audience. Later, OpenAI Five played a three-game series against a group of humans that included former Dota 2 professionals and casters Merlini, Fogged, Cap, and Blitz. OpenAI Five performed really well in the first two games. For the final game, the OpenAI team let the audience select the bots' team of five heroes, which handicapped the bots' chances of winning. The humans won the last round, finishing the series with a score of 2-1.

According to the OpenAI blog, "OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores". Proximal Policy Optimization (PPO) refers to a class of reinforcement learning algorithms. It has become the default algorithm of choice for OpenAI, as it is easy to implement and performs well. It involves "computing an update at each step that minimizes the cost function".

The OpenAI team made small changes to its neural network bots last month, including increasing their reaction time and using new strategies.

This is not the first time a computer has beaten human beings at games. From computers beating humans at chess to computers winning debates against humans, the chances of OpenAI Five beating professionals at The International do not seem that low. But what sets OpenAI Five's achievement apart is its ability to optimize strategies as a team, as opposed to simply learning to master strategies as individual players.

If you enjoyed reading this, be sure to check out the entire account on Twitch: Dota 2 game: OpenAI Five vs Humans.

Read next:
Alibaba introduces AI copywriter
AI beats Chinese doctors in a tumor diagnosis competition
Meet CIMON, the first AI robot to join the astronauts aboard ISS
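For readers curious what the PPO update mentioned above looks like in practice, here is a minimal NumPy sketch of the clipped surrogate objective at the heart of the algorithm. It is a generic illustration, not OpenAI Five's actual training code, and the epsilon value and toy inputs are made up:

```python
# A generic NumPy illustration of PPO's clipped surrogate objective -- the
# "update at each step that minimizes the cost function" mentioned above.
# This is not OpenAI Five's training code; epsilon and the toy inputs are made up.
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, epsilon=0.2):
    """Return the negative clipped surrogate objective (a loss to minimize)."""
    ratio = np.exp(logp_new - logp_old)                    # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # Taking the elementwise minimum keeps each policy update conservative.
    return -np.mean(np.minimum(unclipped, clipped))

# Toy usage with made-up log-probabilities and advantages
logp_old = np.array([-1.2, -0.7, -2.0])
logp_new = np.array([-1.0, -0.9, -1.8])
advantages = np.array([0.5, -0.3, 1.2])
print(ppo_clip_loss(logp_new, logp_old, advantages))
```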

Tesla is building its own AI hardware for self-driving cars

Richard Gall
02 Aug 2018
3 min read
Elon Musk revealed yesterday that Tesla is developing its own hardware in its bid to bring self-driving cars to the public. Up to now Tesla has used Nvidia's Drive platform, but this will be replaced by 'Hardware 3,' which will be, according to Tesla at least, the 'world's most advanced computer for autonomous driving.'

The Hardware 3 chip has been in the works for a few years now, with Jim Keller joining Tesla from chip manufacturer AMD back in 2016, and Musk confirming the project in December 2017. Keller has since left Tesla, and the Autopilot project - Tesla's self-driving car initiative - is now being led by Pete Bannon.

"The chips are up and working, and we have drop-in replacements for S, X and 3, all have been driven in the field," Bannon said. "They support the current networks running today in the car at full frame rates with a lot of idle cycles to spare."

Why has Tesla developed its own AI hardware?

By developing its own AI hardware, Tesla is able to build solutions tailored to its needs. It means it isn't relying on others - like Nvidia, say - to build what it needs. Bannon explained that "nobody was doing a bottoms-up design from scratch." By bringing hardware in-house, Tesla will not only be able to develop chips according to its needs, it will also find it easier to plan and move at its own pace. Essentially, it allows Tesla to take control of its own destiny. In the context of safety concerns around self-driving cars, taking on responsibility for developing the hardware on which your machine intelligence will sit makes a lot of sense. It means you can assume responsibility for solving your own problems.

How does Tesla's Hardware 3 compare with other chips?

The Hardware 3 chips are, according to Musk, 10x better than the current Nvidia GPUs. The current GPUs in Tesla's Autopilot system can analyze 200 frames per second; Tesla's new hardware can run at 2,000 frames per second. This significant performance boost should, in theory, bring significant gains in terms of safety. What's particularly remarkable is that the new chip isn't actually costing Tesla any more than its current solution.

Musk explained how the team was able to find such significant performance gains: "The key is to be able to run the neural network at a fundamental, bare metal level. You have to do these calculations in the circuit itself, not in some sort of emulation mode, which is how a GPU or CPU would operate. You want to do a massive amount of [calculations] with the memory right there."

The hardware is expected to roll out in 2019 and be offered as a hardware upgrade to all owners of Autopilot 2.0 cars and up.

Read next:
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
DeepMind, Elon Musk, and others pledge not to build lethal AI
Elon Musk's tiny submarine is a lesson in how not to solve problems in tech


Plotly releases Dash DAQ: a UI component library for data acquisition in Python

Natasha Mathur
02 Aug 2018
2 min read
Plotly released Dash DAQ, a modern UI component library that helps with data acquisition in Python, earlier this week. A data acquisition system (DAQ) helps collect, store, and distribute information.

Dash DAQ is built on top of Plotly's Dash (a Python framework used for building analytical web applications without requiring the use of JavaScript). Dash DAQ consists of 16 components, which are used for building user interfaces capable of controlling and reading scientific instruments. To know more about their usage and configuration options, check out the official Dash DAQ components page.

You can use Dash DAQ with Python drivers provided by instrument vendors. Alternatively, you can also write your own drivers with PySerial, PyUSB, or PyVISA.

Dash DAQ is priced at $1980, as it is built with research labs in mind and is not currently suited for general Python users. To install Dash DAQ, you have to purchase it first. After you make the purchase, a download page will automatically appear via which you can download it. Only one Dash DAQ library is allotted per developer. The installation steps are described on the official Dash DAQ installation page.

Multiple apps of different kinds have already been made using Dash DAQ. Here are some examples:

Wireless Arduino Robot in Python, an app that wirelessly controls Sparki, an Arduino-based robot. Using Dash DAQ for this app gives it clean, intuitive, virtual controls for building GUIs for your hardware.
Robotic Arm in Python, an app that allows you to operate Robotic Arm Edge. Dash DAQ's GUI components allow you to interface with all the robot's motors and LED. Users can even do it via their mobile device, thereby enjoying the experience of a real remote control!
Ocean Optics Spectrometer in Python, an app that allows users to interface with an Ocean Optics spectrometer. Here Dash DAQ offers interactive UI components written in Python, allowing you to read and control the instrument in real time.

Apart from these few examples, there are a lot more applications that the developers at Plotly have built using Dash DAQ.

Read next:
plotly.py 3.0 releases
15 Useful Python Libraries to make your Data Science tasks Easier
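To give a feel for how such components fit into a Dash app, here is a minimal sketch wiring a single Dash DAQ Knob to a text readout. It assumes the dash and dash_daq packages are installed; the component properties shown are typical for the library but may differ between releases, and the instrument hookup is only indicated in a comment:

```python
# A minimal sketch of a Dash app using one Dash DAQ component (a Knob wired to
# a text readout). Assumes the dash and dash_daq packages are installed; the
# exact component properties may differ between Dash DAQ versions, and the
# instrument hookup is only indicated in a comment.
import dash
import dash_daq as daq
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

app.layout = html.Div([
    daq.Knob(id="power-knob", label="Power", min=0, max=10, value=3),
    html.Div(id="knob-readout"),
])

@app.callback(Output("knob-readout", "children"), [Input("power-knob", "value")])
def show_value(value):
    # In a real instrument app this value would be forwarded to a driver
    # (for example via PySerial or PyVISA) instead of just being displayed.
    return "Knob set to {}".format(value)

if __name__ == "__main__":
    app.run_server(debug=True)
```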


TensorFlow 1.10.0 RC1 released!

Sunith Shetty
01 Aug 2018
2 min read
Following rc0, the first release candidate in the TensorFlow 1.10.0 family, the new release candidate rc1 is out and available. Key highlights of this new version include major features and improvements to model training and evaluation, along with lots of bug fixes to the existing ecosystem.

What's new in TensorFlow 1.10.0 RC1?

Modular changes

The tf.lite runtime now supports the complex64 type.
Bigtable is a high-performance storage system which can help you store and serve training data; this new version supports an initial Bigtable integration for tf.data.
With improved local run behavior in the tf.estimator.train_and_evaluate function, there is no need to reload checkpoints for evaluation.
You can now restrict the way workers and parameter servers (PS) interact by setting device_filters in the RunConfig class, speeding up training and ensuring clean shutdowns in specific situations. However, if you want the workers and PS to communicate in order to complete their jobs, you will have to set customized session options in the RunConfig class.

Feature additions and improvements

Distributions and Bijectors, initially found at tf.contrib.distributions, are now in TensorFlow Probability; tf.contrib.distributions will be removed by the end of 2018.
New endpoints have been added for existing TensorFlow symbols. Going forward, these new endpoints are expected to be the preferred endpoints and may replace some of the existing endpoints in the future. The new symbols are added to the following modules: tf.debugging, tf.dtypes, tf.image, tf.io, tf.linalg, tf.manip, tf.math, tf.quantization, tf.strings.

Breaking changes to the ecosystem

All the new prebuilt libraries are built against NCCL 2.2 and no longer include NCCL in the binary install. To use TensorFlow with multiple GPUs and NCCL, you will need to upgrade to NCCL 2.2. You can find the updated installation guides at Installing TensorFlow on Ubuntu and Install TensorFlow from Sources.
From the TensorFlow 1.11 release onwards, Windows builds will use Bazel, so this change drops official support for cmake.

To get full details on the feature list and bug fixes in this release candidate, check out TensorFlow's official release page on GitHub.

Read more:
Why Twitter (finally!) migrated to Tensorflow
How TFLearn makes building TensorFlow models easier
Distributed TensorFlow: Working with multiple GPUs and servers
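As a hedged sketch of the device-filter behaviour described above: in distributed training, a job that still needs cross-worker communication can hand RunConfig a custom session configuration instead of relying on the new defaults. The job names and model directory below are placeholders, not values from the release notes:

```python
# A hedged sketch of overriding the default device filters via a custom session
# config, per the TensorFlow 1.10 RunConfig behaviour described above.
# The job names and model_dir below are placeholders.
import tensorflow as tf

# Explicitly allow this task to see the PS job and the whole worker job.
session_config = tf.ConfigProto(device_filters=["/job:ps", "/job:worker"])

run_config = tf.estimator.RunConfig(
    model_dir="/tmp/example_model",   # placeholder checkpoint directory
    session_config=session_config,
)

# The config is then handed to an Estimator as usual, e.g.:
# estimator = tf.estimator.LinearClassifier(feature_columns=..., config=run_config)
```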

OpenAI builds reinforcement learning based system giving robots human like dexterity

Sugandha Lahoti
31 Jul 2018
3 min read
Researchers at OpenAI have developed a system, trained with reinforcement learning algorithms, that performs dexterous in-hand manipulation. Termed Dactyl, the system can solve object orientation tasks entirely in simulation without any human input. After the training phase, it was able to work on a real robot without any fine-tuning.

Using humanoid hand systems to manipulate objects has been a long-standing challenge in robotic control. Current techniques remain limited in their ability to manipulate objects in the real world. Although robotic hands have been available for quite some time, they were largely unable to utilize complex end-effectors to perform dexterous manipulation tasks. The Shadow Dexterous Hand, for instance, has been available since 2005 with five fingers and 24 degrees of freedom. However, it did not see large-scale adoption because of the difficulty of controlling such complex systems.

Now OpenAI researchers have developed a system that trains control policies allowing a robot hand to perform complex in-hand manipulations. The system shows unprecedented levels of dexterity and discovers different hand grasp types found in humans, such as the tripod, prismatic, and tip pinch grasps. It is also able to display dynamic behaviors such as finger gaiting, multi-finger coordination, the controlled use of gravity, and the application of translational and torsional forces to the object.

How does the OpenAI system work?

First, the researchers used a large distribution of simulations with randomized parameters to collect data for the control policy and the vision-based pose estimator. The control policy receives observed robot states and rewards from the distributed simulations, and learns to map observations to actions using an RNN and reinforcement learning. The vision-based pose estimator renders scenes collected from the distributed simulations and learns to predict the pose of the object from images using a CNN, trained separately from the control policy. The object pose is predicted from three camera feeds with the CNN, the robot fingertip locations are measured using a 3D motion capture system, and both are given to the control policy to produce an action for the robot.

(Source: OpenAI blog)

You can place a block in the palm of the Shadow Dexterous Hand, and Dactyl can reposition it into different orientations; for example, it can rotate the block to put a new face on top.

(Source: OpenAI blog)

According to OpenAI, this project completes a full cycle of AI development that OpenAI has been pursuing for the past two years: "We've developed a new learning algorithm, scaled it massively to solve hard simulated tasks, and then applied the resulting system to the real world."

You can read more about Dactyl on the OpenAI blog. You can also read the research paper for further analysis.

Read next:
AI beats human again – this time in a team-based strategy game
OpenAI charter puts safety, standards, and transparency first
Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
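Here is a minimal sketch of the domain-randomization idea described above: every simulated episode samples different physics and sensing parameters so the learned policy cannot overfit to a single simulator configuration. The parameter names, ranges, and the environment factory are hypothetical, not OpenAI's actual values:

```python
# A minimal sketch of domain randomization: each simulated episode samples
# different physics and sensing parameters, so the learned policy cannot
# overfit to a single simulator configuration.
# Parameter names and ranges are hypothetical, not OpenAI's actual values.
import random

def sample_randomized_sim_params():
    return {
        "object_mass_kg": random.uniform(0.03, 0.3),
        "surface_friction": random.uniform(0.5, 1.5),
        "actuator_delay_s": random.uniform(0.0, 0.04),
        "camera_position_noise": [random.gauss(0.0, 0.002) for _ in range(3)],
    }

for episode in range(3):
    params = sample_randomized_sim_params()
    # env = make_simulated_hand_env(**params)  # hypothetical environment factory
    print("episode", episode, params)
```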


Microsoft launches Quantum Katas, a programming project to learn Q#, its Quantum programming language

Sugandha Lahoti
30 Jul 2018
2 min read
Microsoft has announced Quantum Katas, a new portal for learning its quantum programming language, Q#. The project contains a self-paced set of programming tutorials to teach interested developers the basic elements of quantum computing as well as the Q# language.

Microsoft has been one of the forerunners in the quantum computing race. Last year, Microsoft announced Q# as a domain-specific programming language used for expressing quantum algorithms. Quantum Katas, as the name implies, is derived from the popular programming technique of code katas, meaning an exercise to develop your skills through practice and repetition.

Per Microsoft, each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task is based on code-filling; tasks may vary from one line at the start to sizable code fragments as the tutorial progresses. Developers are also provided reference materials to solve the tasks, both on quantum computing and on Q#. A testing framework is provided to validate solutions, thereby providing real-time feedback.

Each kata covers one topic. The current topics are:

Basic quantum computing gates: These tasks focus on the main single-qubit and multi-qubit gates used in quantum computing.
Superposition: In these tasks, you learn how to prepare a certain superposition state on one or multiple qubits.
Measurements: These tasks teach you to distinguish quantum states using measurements.
Deutsch–Jozsa algorithm: In these tasks, you learn how to write quantum oracles which implement classical functions, and implement the Bernstein–Vazirani and Deutsch–Jozsa algorithms.

To use these katas, you need to install the Quantum Development Kit for Windows 10, macOS or Linux. The kit includes all of the pieces a developer needs to get started, including the Q# programming language and compiler, a Q# library, a local quantum computing simulator, a quantum trace simulator and a Visual Studio extension.

Microsoft Quantum Katas was developed after the results of the Q# coding contest that took place earlier this month, which challenged more than 650 developers to solve quantum-related questions.

You can read more about Quantum Katas on GitHub.

Read next:
Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
"The future is quantum" — Are you excited to write your first quantum computing code using Microsoft's Q#?