
Tech News - Data

1208 Articles

Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology

Melisha Dsouza
30 Aug 2018
3 min read
Following U.S. President Donald Trump's accusatory tweet on Tuesday morning, Google released a statement the same day denying that its algorithms favor liberal media outlets over right-wing ones. Trump had claimed that Google search results for "Trump News" report fake news, and accused the search engine's algorithms of being rigged.

The 96% statistic was apparently taken from the results of a PJ Media investigation into Google News searches for the word "Trump", published under the headline "96 Percent of Google Search Results for 'Trump' News Are from Liberal Media Outlets." Writer Paula Bolyard said she made the assessment after typing "Trump News" into Google's 'News' tab across multiple computers, then analyzing the top results against conservative journalist Sharyl Attkisson's media bias chart. Trump has been tweeting since late July about discriminatory practices on Twitter and other social media sites more broadly, and his focus on Google is now making the rounds on the internet.

Google spokesperson Riva Sciuto addressed these accusations by stating that hundreds of improvements are made to the search giant's algorithms each year to ensure they surface high-quality content and the most relevant answers to users' queries. Google maintains that it has never entertained setting a political agenda and that its search results are not biased toward any political ideology. The allegations, based on an analysis from a conservative online media outlet, have little evidence behind them.

"Google and Twitter and Facebook, they're really treading on very, very troubled territory. And they have to be careful," the president said later on Tuesday. "It's not fair to large portions of the population." Just 9 hours ago, the president posted another video with the caption #StopTheBias, claiming that Google had promoted Barack Obama's State of the Union address on its homepage but refused to do the same for Trump once he was elected president. The Trump administration said on Tuesday that it might explore regulating Google, an effort that would challenge protections around free speech online.

Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee
Facebook, Twitter take down fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections
Epic Games CEO calls Google "irresponsible" for disclosing the security flaw in Fortnite Android Installer before patch was ready


Netflix brings in Verna Myers as new VP of inclusion strategy to boost cultural diversity

Natasha Mathur
30 Aug 2018
2 min read
Netflix announced yesterday that Verna Myers is joining the company as Vice President, Inclusion Strategy. In her new role, Myers will help implement strategies that weave cultural diversity, inclusion, and equity into the varied aspects of Netflix's operations worldwide.

https://twitter.com/VernaMyers/status/1034855768682422272

According to Jessica Neal, Netflix Chief Talent Officer, "Having worked closely with Vernā as a consultant on a range of organizational issues, we are thrilled that she has agreed to bring her talents to this new and important role."

Myers, a graduate of Harvard Law School, has spent the past two decades at the head of The Vernā Myers Company, where her major role involved consulting for major corporations and organizations on how to eradicate barriers based on race, ethnicity, gender, sexual orientation, and other differences. She has also written several self-help books, is an active TED speaker, and has contributed to reputed publications such as Refinery29, The Atlantic, and Forbes.

Netflix takes cultural diversity seriously: two months back it fired its chief communications officer, Jonathan Friedland, for using the N-word in a meeting.

"As a global company dedicated to attracting the best people and representing a broad range of perspectives, Vernā will be an invaluable champion of our efforts to build a culture where all employees thrive," added Jessica Neal.

"I have been a longtime fan of the inclusive and diverse programming and talent at Netflix. I was so impressed by their mission, their excellence, and decision to take their inclusion and diversity efforts to a higher level. I'm excited and look forward to collaborating all across Netflix to establish bold innovative frameworks and practices that will attract and sustain high performing diverse teams," says Myers.

For more information, check out the official Netflix blog post.

How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017
Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'


Announcing Oracle Solaris 11.4: a consistent, secure, and easy-to-use platform

Fatema Patrawala
30 Aug 2018
3 min read
Oracle has announced the release of Oracle Solaris 11.4, a trusted business platform offering consistent compatibility in a secure, simple-to-use package. Version 11.4 is the first and only operating system with a complete UNIX® V7 certification.

Check out these facts about Oracle Solaris 11.4:

- The team worked through 175 development builds to get to Oracle Solaris 11.4
- It has been tested for more than 30 million machine hours
- 50 customers have already put Oracle Solaris 11.4 into production
- More than 3,000 applications are certified to run on it

New features in Oracle Solaris 11.4

Consistently compatible

A major reason companies and organizations choose Oracle Solaris is its continued consistency. The Oracle Solaris Application Compatibility Guarantee program guarantees that applications built for previous releases of Oracle Solaris will keep working seamlessly. Additionally, you can migrate Oracle Solaris 10 workloads to Oracle Solaris 11 on modern hardware with enhanced migration tools and documentation.

Simple interface

A new feature, the Observability Tools System Web Interface, brings together several key observability technologies, including the new StatsStore data, audit events, and FMA events, into a centralized, customizable, browser-based interface that lets you see current and past system behavior at a glance. It also allows you to add your own data for collection and customize the interface as you like.

The Service Management Framework has been enhanced so you can automatically monitor and restart critical applications and services. Oracle Solaris Zones have been updated: you can evacuate a system of all of its Zones with just one command, build intra-Zone dependencies, and have dependent Zones boot in the correct order. This enables you to automatically boot and restart complex application stacks in the right sequence.

Safe and secure

Oracle Solaris 11.4 adds security capabilities with multi-node compliance to help you stay secure and compliant. You can either push a compliance assessment to all systems with a single command and review the results in a single report, or set up your systems to regularly generate compliance reports and push them to a central server, where they can be viewed as a single report.

Trusted path services have been added in Oracle Solaris 11.4 so you can create your own services, like Puppet and Chef, and place them on the trusted path. This allows you to make the requisite changes while keeping the system or zone immutable and protected.

Alongside this update, the team released a new version of Oracle Solaris Cluster, 4.4. To learn more about this release and to download Oracle Solaris 11.4, visit the Oracle Technology Network page.

Oracle releases GraphPipe: An open source tool that standardizes machine learning model deployment
Oracle's bid protest against U.S. Defense Department's (Pentagon) $10 billion cloud contract
Oracle makes its Blockchain cloud service generally available


How Microsoft 365 is now using artificial intelligence to smartly enrich your content in OneDrive and SharePoint

Melisha Dsouza
29 Aug 2018
4 min read
Microsoft is now using the power of artificial intelligence in OneDrive and SharePoint. With this initiative, users can be more productive, make more informed decisions, keep content more secure, and search more effectively. The increasing pressure on employees to be more productive in less time is challenging, especially given ever-increasing volumes of digital content. Microsoft aims to ease some of this pressure by providing smart solutions for storing content.

Smart features provided by Microsoft 365

#1 Video and audio transcription

Beginning later this year, Microsoft will introduce automated transcription services natively available for video and audio files in OneDrive and SharePoint, using the same AI technology available in Microsoft Stream. A full transcript will be shown directly in the viewer alongside a video, or while listening to an audio file, improving accessibility and search and helping users collaborate with others to improve productivity and quality of work. Once made, a video can be uploaded and published to Microsoft Stream, where AI provides in-video face detection and automatic captions.

#2 Searching audio, video, and images

As announced last September, Microsoft has unlocked the value of photos and images stored in OneDrive and SharePoint. Searching images will now be a cakewalk, as the native, secure AI determines where photos were taken, recognizes objects, and extracts text in photos. Video and audio files also become fully searchable thanks to the transcription services mentioned earlier.

#3 Intelligent file recommendations

Sometime later in 2018, Microsoft plans to introduce a new files view in OneDrive and on the Office.com home page that recommends relevant files to a user. The intelligence of Microsoft Graph will assess how a user works, who the user works with, and activity on content shared with the user across Microsoft 365, and use this information to suggest files while collaborating on content in OneDrive and SharePoint. The Tap feature in Word 2016 and Outlook 2016 already intelligently recommends content stored in OneDrive and SharePoint based on the context of what the user is working on.

Making informed decisions has never been easier

The AI used in OneDrive and SharePoint helps users make informed decisions while working with content. Smart features like File Insights, intelligent sharing, and data insights provide the stats and facts that make life easier. Suppose you have an important meeting at hand: File Insights gives viewers an 'inside look', i.e., the important information, at a glance, needed to prep for the meeting. Intelligent sharing helps employees share relevant content, like documents and presentations, with meeting attendees. Finally, data insights will use information provided by cognitive services to set up custom workflows that organize images, trigger notifications, or invoke more extensive business processes directly in OneDrive and SharePoint, with deep integration with Microsoft Flow.

Security enhancements

AI-powered OneDrive and SharePoint will help secure content and ward off malicious attacks. 'OneDrive files restore' integrated with Windows Defender Antivirus protects users from ransomware attacks by identifying breaches and guiding them through remediation and file recovery. Users will also be able to leverage the text extracted from photos and audio/video transcriptions by applying native data loss prevention (DLP) policies to automatically protect content, aiding intelligent compliance.

Many Fortune 500 customers have already bought into Microsoft's bold vision for content collaboration and are moving their content to OneDrive and SharePoint. Take a look at the official page for detailed information on Microsoft 365's smart new features.

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy
Microsoft claims it halted Russian spearphishing cyberattacks
Microsoft's .NET Core 2.1 now powers Bing.com


A new conservative employee group within Facebook to protest Facebook’s “intolerant” liberal policies

Sugandha Lahoti
29 Aug 2018
2 min read
Over a hundred conservative Facebook employees have formed an online group to protest the company's "intolerant" liberal culture. First reported by the New York Times, these employees have formed an internal online group, "FB'ers for Political Diversity", as a space for ideological diversity within the company.

Brian Amerige, a senior Facebook engineer, wrote in the group: "We are a political monoculture that's intolerant of different views. We claim to welcome all perspectives, but are quick to attack — often in mobs — anyone who presents a view that appears to be in opposition to left-leaning ideology."

Said to follow the principles of Ayn Rand, Mr. Amerige started working at Facebook in 2012. He posted a 527-word memo about political diversity at Facebook on his personal website on 20 Aug 2018. He also proposed that Facebook employees debate their political ideas in the new group to better equip the company to host a variety of viewpoints on its platform.

This activity comes as quite a surprise in Facebook's largely liberal workplace culture, a rare sign of organized disagreement. Over the last few years, Facebook has weathered many disturbing events, from the spread of misinformation by Russians and the mishandling of users' data to the ban of Alex Jones. Critics in the group consider these moves a sign that Facebook harbors an anti-conservative bias.

The new group has received both praise and backlash from Facebook employees. Some say its online posts were offensive to minorities. Per the New York Times, one engineer (name undisclosed) said "several people had lodged complaints with their managers about FB'ers for Political Diversity and were told that it had not broken any company rules." However, some Facebook employees considered the group constructive and inclusive of different political viewpoints. Facebook is yet to comment on its employees' political ideology.

With Sheryl Sandberg, Facebook's chief operating officer, scheduled to testify at a Senate hearing about social media manipulation in elections, this protest adds one more dimension to the social complexities Facebook finds itself navigating these days. For more details, read the original post on the New York Times.

Facebook's AI algorithm finds 20 Myanmar Military Officials guilty.
Facebook bans another quiz app and suspends 400 more due to concerns of data misuse.
Facebook is reportedly rating users on how trustworthy they are at flagging fake news.


Big data as a service (BDaaS) solutions: comparing IaaS, PaaS and SaaS

Guest Contributor
28 Aug 2018
8 min read
What is Big Data as a Service (BDaaS)?

Thanks to the increased adoption of cloud infrastructure, processing, storing, and analyzing huge amounts of data has never been easier. The big data revolution may have already happened, but it's Big Data as a Service, or BDaaS, that's making it a reality for many businesses and organizations. Essentially, BDaaS is any service that involves managing or running big data on the cloud.

The advantages of BDaaS

There are many advantages to using a BDaaS solution. It makes many aspects of managing a big data infrastructure yourself so much easier. One of the biggest advantages is that it makes managing large quantities of data possible for medium-sized businesses. Doing it yourself is not only technically and physically challenging, it can also be expensive. With BDaaS solutions that run in the cloud, companies don't need to stump up cash up front, and operational expenses on hardware can be kept to a minimum. With cloud computing, your infrastructure requirements are fixed at a monthly or annual cost.

However, it's not just about storage and cost. BDaaS solutions sometimes offer built-in solutions for artificial intelligence and analytics, which means you can accomplish some pretty impressive results without having a huge team of data analysts, scientists, and architects around you.

The different models of BDaaS

There are three different BDaaS models, closely aligned with the three models of cloud infrastructure: IaaS, PaaS, and SaaS.

- Big Data Infrastructure as a Service (IaaS) – Basic data services from a cloud service provider.
- Big Data Platform as a Service (PaaS) – Offerings of an all-round Big Data stack like those provided by Amazon S3, EMR, or Redshift. This excludes ETL and BI.
- Big Data Software as a Service (SaaS) – A complete Big Data stack within a single tool.

How does the Big Data IaaS model work?

A good example of the IaaS model is Amazon's AWS IaaS architecture, which combines S3 and EC2. Here, S3 acts as a data lake that can store infinite amounts of structured as well as unstructured data, while EC2 acts as a compute layer that can be used to implement a data service of your choice and connects to the S3 data (a minimal code sketch of this pattern follows after the PaaS section below).

For the data layer, you can choose from among:

- Hadoop – The Hadoop ecosystem can be run on an EC2 instance, giving you complete control
- NoSQL databases – These include MongoDB or Cassandra
- Relational databases – These include PostgreSQL or MySQL

For the compute layer, you can choose from among:

- Self-built ETL scripts that run on EC2 instances
- Commercial ETL tools that can run on Amazon's infrastructure and use S3
- Open source processing tools that run on AWS instances, like Kafka

How does the Big Data PaaS model work?

A standard Hadoop cloud-based Big Data infrastructure on Amazon contains the following:

- Data ingestion – Log file data from any data source
- Amazon S3 data storage layer
- Amazon EMR – A scalable set of instances that run Map/Reduce against the S3 data
- Amazon RDS – A hosted MySQL database that stores the results from Map/Reduce computations
- Analytics and visualization – Using an in-house BI tool

A similar setup can be replicated using Microsoft's Azure HDInsight. Data ingestion can be made easier with Azure Data Factory's copy data tool. Apart from that, Azure offers several storage options, like Data Lake Storage and Blob Storage, that you can use to store results from the computations.
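The sketch below illustrates the IaaS pattern described above in Python, using boto3: S3 plays the role of the data lake, and a script on the compute layer (an EC2 instance, for example) pulls objects out for processing. It is a minimal illustration under stated assumptions, not a production pipeline; the bucket name and key prefix are hypothetical placeholders.

```python
import boto3

# S3 as the data lake: list raw log files under a (hypothetical) prefix.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="my-data-lake", Prefix="raw/logs/")

for obj in response.get("Contents", []):
    # The compute layer pulls each object down and processes it;
    # counting lines here stands in for real ETL work.
    body = s3.get_object(Bucket="my-data-lake", Key=obj["Key"])["Body"].read()
    print(obj["Key"], body.count(b"\n"), "lines")
```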
How does the Big Data SaaS model work?

A fully hosted Big Data stack that includes everything from data storage to data visualization contains the following:

- Data layer – Data needs to be pulled into a basic SQL database; an automated data warehouse does this efficiently
- Integration layer – Pulls the data from the SQL database into a flexible modeling layer
- Processing layer – Prepares the data based on the custom business requirements and logic provided by the user
- Analytics and BI layer – Fully featured BI abilities, including visualizations, dashboards, and charts

Azure SQL Data Warehouse and AWS Redshift are the popular SaaS options that offer a complete data warehouse solution in the cloud. Their stacks integrate all four layers and are designed to be highly scalable. Google's BigQuery is another contender that's great for generating meaningful insights at an unmatched price-performance ratio.

Choosing the right BDaaS provider

It sounds obvious, but choosing the right BDaaS provider is ultimately all about finding the solution that best suits your needs. There are a number of important factors to consider, such as workload, performance, and cost, each of which will have varying degrees of importance for you. Here are three ways you might approach a BDaaS solution:

Core BDaaS

Core BDaaS uses a minimal platform like Hadoop with YARN and HDFS, plus other services like Hive. This service has gained popularity among companies that use it for irregular workloads or as part of their larger infrastructure. It might not be as performance-intensive as the other two categories. A prime example is Elastic MapReduce (EMR), provided by AWS, which integrates freely with NoSQL stores, S3 storage, DynamoDB, and similar services. Given its generic nature, EMR allows a company to combine it with other services, which can result in anything from simple data pipelines to a complete infrastructure.

Performance BDaaS

Performance BDaaS assists businesses that are already employing a cluster-computing framework like Hadoop to further optimize their infrastructure as well as cluster performance. It is a good fit for companies that are rapidly expanding and do not wish to be burdened by having to build a data architecture and a SaaS layer. The benefit of outsourcing the infrastructure and platform is that companies can focus on processes that add value instead of wrestling with complicated Big Data infrastructure. For instance, there are many third-party solutions built on top of the Amazon or Azure stack that let you outsource your infrastructure and platform requirements to them.

Feature BDaaS

If your business needs additional features that may not be within the scope of Hadoop, Feature BDaaS may be the way forward. Feature BDaaS focuses on productivity as well as abstraction, and is designed to get users up and running with Big Data quickly and efficiently. It combines both PaaS and SaaS layers, including web/API interfaces and database adapters that offer a layer of abstraction from the underlying details. Businesses don't have to spend resources and manpower setting up the cloud infrastructure; instead, they can rely on third-party vendors like Qubole and Altiscale, which are designed to get it up and running on AWS, Azure, or the cloud vendor of choice quickly and efficiently.
Additional tips for choosing a provider

When evaluating a BDaaS provider for your business, cost reduction and scalability are important factors. Here are a few tips that should help you choose the right provider:

- Low or zero startup costs – A number of BDaaS providers offer a free trial period, so theoretically you can start seeing results before you commit a dollar.
- Scalable – Growth in scale is in the very nature of a Big Data project. The solution should be easy and affordable to scale, especially in terms of storage and processing resources.
- Industry footprint – It is a good idea to choose a BDaaS provider that already has experience in your industry. This is doubly important if you are also using them for consultancy and project planning.
- Real-time analysis and feedback – The most successful Big Data projects today are those that can provide almost immediate analysis and feedback, helping businesses take remedial action instantly instead of working off historical data.
- Managed or self-service – Most BDaaS providers today offer a mix of managed and self-service models based on the company's needs. It is common to find a host of technical staff working in the background to provide the client with services as needed.

Conclusion

The value of big data is not in the data itself, but in the insights that can be drawn after processing it and running it through robust analytics; these can help guide and define your decision-making for the future. A quick tip: keep it small at the initial stages. This ensures the data can be checked for accuracy and that the metrics derived from it are right. Once confirmed, you can go ahead with more complex and larger data projects.

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia. Gilad is a 3-time winner of international technical communication awards, including the STC Trans-European Merit Award and the STC Silicon Valley Award of Excellence. Over the past 7 years Gilad has headed Agile SEO, which performs strategic search marketing for leading technology brands. Together with his team, Gilad has done market research, developer relations and content strategy in 39 technology markets, lending him a broad perspective on trends, approaches and ecosystems across the tech industry.

Common big data design patterns
Hortonworks partner with Google Cloud to enhance their Big Data strategy
Top 5 programming languages for crunching Big Data effectively
Getting to know different Big Data characteristics

AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD’s deep learning plans

Prasad Ramesh
28 Aug 2018
2 min read
AMD has announced support for TensorFlow v1.8 on its ROCm-enabled GPUs, including the Radeon Instinct MI25. ROCm, the Radeon Open Compute platform, is AMD's open-source, Hyperscale-class (HPC), programming-language-independent foundation for GPU computing on Linux. This is a major milestone in AMD's efforts towards accelerating deep learning.

Mayank Daga, Director, Deep Learning Software, AMD, stated: "Our TensorFlow implementation leverages MIOpen, a library of highly optimized GPU routines for deep learning."

There is a pre-built whl package available for a simple install, similar to installing generic TensorFlow on Linux, along with a pre-built Docker image for fast installation. In addition to supporting TensorFlow v1.8, AMD is working on upstreaming all ROCm-specific enhancements to the TensorFlow master repository. While that work proceeds, AMD will release and maintain future ROCm-enabled TensorFlow versions, such as v1.10.

In the post, Daga stated: "We believe the future of deep learning optimization, portability, and scalability has its roots in domain-specific compilers. We are motivated by the early results of XLA, and are also working towards enabling and optimizing XLA for AMD GPUs."

Current CPUs which support PCIe Gen3 + PCIe Atomics are:

- AMD Ryzen CPUs
- AMD EPYC CPUs
- Intel Xeon E7 v3 or newer CPUs
- Intel Xeon E5 v3 or newer CPUs
- Intel Xeon E3 v3 or newer CPUs
- Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)

The installation is simple. First, you'll need the open-source ROCm stack. Then the ROCm libraries are installed via APT:

sudo apt update
sudo apt install rocm-libs miopen-hip cxlactivitylogger

And finally, you install TensorFlow itself via AMD's pre-built whl package:

sudo apt install wget python3-pip
wget http://repo.radeon.com/rocm/misc/tensorflow/tensorflow-1.8.0-cp35-cp35m-manylinux1_x86_64.whl
pip3 install ./tensorflow-1.8.0-cp35-cp35m-manylinux1_x86_64.whl

For more details on how to get started, visit the GitHub repository. There are also examples of image recognition, audio recognition, and multi-GPU training on ImageNet on the GPUOpen website.

Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
AMD open sources V-EZ, the Vulkan wrapper library
Sugar operating system: A new OS to enhance GPU acceleration security in web apps
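After installation, a quick sanity check can confirm that the build sees the GPU. This is a generic TensorFlow 1.x snippet, not something from AMD's announcement:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the devices TensorFlow can use; a working ROCm setup should show
# a GPU device alongside the CPU.
print(device_lib.list_local_devices())

# Run a trivial op with device placement logging turned on.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    print(sess.run(a + b))  # expected output: [4. 6.]
```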


Dopamine: A TensorFlow-based framework for flexible and reproducible reinforcement learning research by Google

Savia Lobo
28 Aug 2018
3 min read
Yesterday, Google introduced a new TensorFlow-based framework named Dopamine, which aims to provide flexibility, stability, and reproducibility for both new and experienced RL researchers. The release also includes a set of colabs that clarify how to use the Dopamine framework.

Dopamine is named after one of the main components of reward-motivated behavior in the brain, reflecting the strong historical connection between neuroscience and reinforcement learning research. Its main aim is to enable the kind of speculative research that drives radical discoveries.

Dopamine framework feature highlights

Ease of use

Clarity and simplicity are the two key considerations in Dopamine's design. Its code is compact (about 15 Python files) and well-documented. This is achieved by focusing on the Arcade Learning Environment (a mature, well-understood benchmark) and four value-based agents:

- DQN
- C51
- A carefully curated, simplified variant of the Rainbow agent
- The Implicit Quantile Network agent, which was presented last month at the International Conference on Machine Learning (ICML)

Reproducibility

Google has provided the Dopamine code with full test coverage; these tests also serve as an additional form of documentation. Dopamine follows the recommendations given by Machado et al. (2018) on standardizing empirical evaluation with the Arcade Learning Environment.

Benchmarking

It is important for new researchers to be able to quickly benchmark their ideas against established methods. To that end, Google has provided the full training data of the four provided agents across the 60 games supported by the Arcade Learning Environment, along with a website where one can quickly visualize the training runs for all provided agents on all 60 games. As an example, Google shows the training runs for the four agents on Seaquest, one of the Atari 2600 games supported by the Arcade Learning Environment: the x-axis represents iterations, where each iteration is 1 million game frames (4.5 hours of real-time play); the y-axis is the average score obtained per play; and the shaded areas show confidence intervals from 5 independent runs.

With Dopamine's flexibility and ease of use, the Google community aims to empower researchers to try out new ideas, both incremental and radical. It is actively being used in Google's research, giving teams the flexibility to iterate quickly over many ideas.

To know more about Dopamine in detail, visit the Google AI blog. You can also check out its GitHub repo.

Build your first Reinforcement learning agent in Keras [Tutorial]
Reinforcement learning model optimizes brain cancer treatment, reduces dosing cycles and improves patient quality of life
OpenAI builds a reinforcement learning based system giving robots human-like dexterity


Toyota to invest $500m for autonomous car deal with Uber

Melisha Dsouza
28 Aug 2018
3 min read
The Japanese automaker giant Toyota Motor Corp. will join hands with Uber to work collectively on autonomous vehicles. Toyota will make an investment of about $500 million, valuing Uber at $72 billion, to get self-driving cars on the road. Toyota aims to improve safety and decrease transportation costs with this initiative. As for Uber, it's a chance to redeem itself in the budding autonomous transportation sector.

As part of the alliance, Toyota will manufacture Sienna vehicles equipped with Uber's self-driving technology, and another company will operate the fleet, said a source familiar with the project. The third partner has yet to be identified. Consumers can expect "mass production" of self-driving vehicles to be deployed on Uber's ride-sharing network.

After Uber withdrew its self-driving cars following the fatal crash in which an autonomous Uber SUV killed a pedestrian in Tempe, Arizona, in March, the investment is a ray of hope for the company and its users. With this, Uber consumers' growing apprehension that Uber is pulling out of the self-driving car space will finally be put to rest. As for Uber's investors, the collaboration will come as a relief, especially after reports earlier this month that Uber was sinking around $1m-$2m into its autonomy work every single day, on top of the fatal crash and the expensive lawsuit that followed. The $500 million project is expected to be piloted in 2021.

The potential of self-driving cars to power car-sharing services represents a major challenge to an industry dominated by individual car ownership. For Toyota, it presents an opportunity to reinvent itself from a car maker into a mobility platform.

"This agreement and investment marks an important milestone in our transformation to a mobility company as we help provide a path for safe and secure expansion of mobility services like ride-sharing." -Shigeki Tomoyama, executive vice president of Toyota Motor Corporation

Toyota has been lagging behind in the self-driving car scene, while Uber's troubled self-driving car efforts are in desperate need of external help. It will, therefore, be interesting to see how this joint collaboration works in favour of both Toyota and Uber. For more details on this story, head over to Fortune's coverage of this news.

Apple self-driving cars are back! VoxelNet may drive the autonomous vehicles
MIT's Duckietown Kickstarter project aims to make learning how to program self-driving cars affordable
Tesla is building its own AI hardware for self-driving cars


NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

Prasad Ramesh
28 Aug 2018
3 min read
NVIDIA Jetson Xavier is an AI computer designed to be used in autonomous machines. It delivers the performance of a GPU workstation in an embedded module while consuming under 30W of power; it can also operate at 10W and 15W. The Jetson Xavier is supported by NVIDIA's SDKs, like JetPack and DeepStream, and supports popular libraries such as CUDA, cuDNN, and TensorRT. Per NVIDIA, Xavier has 20 times the performance and 10 times the energy efficiency of its predecessor, the NVIDIA Jetson TX2.

Everything needed to get started with the NVIDIA Jetson Xavier is present in the box, including the power supply and cables. The Jetson Xavier is designed for robots, drones, and other autonomous machines, and is also suitable for smart city applications. An important use case NVIDIA considered while designing the chip was robot prototyping, which meant making it as small as possible while delivering maximum performance and options for I/O. The module itself, without the thermal solution, is about the size of a small notebook. You can run a total of three monitors at once from the two USB 3.1 Type-C ports and the HDMI port.

The chip consists of six processing units, including a 512-core NVIDIA Volta Tensor Core GPU and an eight-core Carmel Arm64 CPU, and is capable of 30 trillion operations per second.

The specifications of the NVIDIA Jetson Xavier are:

- GPU: 512-core Volta GPU with Tensor Cores
- DL Accelerator: (2x) NVDLA Engines
- CPU: 8-core ARMv8.2 64-bit CPU, 8MB L2 + 4MB L3
- Memory: 16GB 256-bit LPDDR4x | 137 GB/s
- Storage: 32GB eMMC 5.1
- Vision Accelerator: 7-way VLIW Processor
- Video Encode: (2x) 4Kp60 | HEVC
- Video Decode: (2x) 4Kp60 | 12-bit support
- Camera: 16x CSI-2 lanes (40 Gbps in D-PHY v1.2 or 109 Gbps in C-PHY v1.1); 8x SLVS-EC lanes (up to 18.4 Gbps); up to 16 simultaneous cameras
- PCIe: 5x PCIe Gen4 (16GT/s) controllers | 1x8, 1x4, 1x2, 2x1 | root port and endpoint
- Mechanical: 100mm x 87mm with 16mm Z-height (699-pin board-to-board connector)

The Xavier is available for preorder at $2,499, but members of the NVIDIA Developer Program can get their first kit at a special price of $1,299. For more details, visit the NVIDIA website.

NVIDIA open sources its material definition language, MDL SDK
NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers
Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now Open source

Jepsen reports 23 issues in Dgraph including multiple deadlocks and crashes in the cluster, snapshot isolation violations among others

Bhagyashree R
27 Aug 2018
4 min read
In its distributed systems verification of Dgraph 1.0.2 through 1.0.6, Jepsen found 23 issues, including multiple deadlocks and crashes in the cluster, duplicate upserted records, snapshot isolation violations, records with missing fields, and, in some cases, the loss of all but one inserted record.

Dgraph is an open source, fast, distributed graph database that uses Raft for per-shard replication and a custom transactional protocol, based on Omid, Reloaded, for snapshot-isolated cross-shard transactions. Dgraph's custom transaction system provides transactional isolation across different Raft groups. Storage nodes, called Alpha, are controlled by a supervisory system, called Zero. Zero nodes form a single Raft cluster, which organizes Alpha nodes into shards called groups; each group runs an independent Raft cluster.

Jepsen test suite design

Jepsen is a framework for analyzing distributed systems under stress and verifying that the safety properties of a distributed system hold up given concurrency, non-determinism, and partial failure. It is an effort to improve the safety of distributed databases, queues, consensus systems, and more. To verify the safety properties of Dgraph, a suite of Jepsen tests was designed using a five-node cluster with replication factor three. Alpha nodes were organized into two groups, one with three replicas and one with two, and every node ran an instance of both Zero and Alpha. Many operations were tested; some of them are listed here:

- Set: Insert a sequence of unique numbers into Dgraph, then query for all extant values. Finally, check whether every successfully acknowledged insert is present in the final read.
- Upsert: An upsert is a common database operation in which a record is created if and only if an equivalent record does not already exist.
- Delete: In the delete test, concurrent attempts were made to delete any records for an indexed value. Since deleting can only lower the number of records, not increase it, the test expected never to observe more than one record at any given time.
- Bank: The bank test stresses several invariants provided by snapshot isolation (a minimal sketch of the invariant appears after the findings below). A set of bank accounts was created, each with three attributes: type, which is always "account" and is used to query for all accounts; key, an integer identifying the account; and amount, the amount of money in the account.

Issues found in Dgraph

Here are some of the issues found by the tests:

- Cluster join issues: Race conditions were discovered in Dgraph's cluster join procedure.
- Duplicate upserts: In the bank test, it was discovered that the test initialization process, by concurrently upserting a single initial account, produced dozens of copies of that account record rather than one.
- Delete anomalies: With a mix of upserts, deletes, and reads of single records identified by an indexed field key, several unusual behaviors were found; for example, values disappeared due to deletion, got stuck in a dangling state, then reappeared as full records.
- Read skew: With a more reliable cluster join process, a read skew anomaly was discovered in the bank test.
- Lost inserts with network partitions: In pure insert workloads, Dgraph could lose acknowledged writes during network partitions. In set tests, which insert unique integer values and attempt a final read, a huge number of acknowledged values could be lost.
- Write loss on node crashes: When Alpha nodes crashed and restarted, the set test revealed that small windows of successfully acknowledged writes could be lost right around the time the process(es) crashed. Dgraph also constructed records with missing values.
- Unavailability after crashes: Despite every Alpha and Zero node running, and with total network connectivity, nodes could return timeouts for all requests.
- Read skew in healthy clusters: A bank test revealed failures without any migration, or even any node failures at all. Dgraph could still return incorrect account totals, or records with missing values.

The identified safety issues were mostly associated with process crashes, restarts, and predicate migration. Out of the 23 issues, 4 remain unresolved, including the corruption of data in healthy clusters. The analysis was funded by Dgraph, and Jepsen has published the full report on its official website.

2018 is the year of graph databases. Here's why.
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB going relational with 4.0 release
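As a minimal illustration of what the bank test checks (a plain-Python sketch, not Jepsen's actual Clojure test harness): transfers move money between accounts, so under snapshot isolation every read of all accounts must see the same grand total. Read skew shows up as a total that drifts from the expected value.

```python
import random

# Five accounts, each starting with 100 units; the grand total must never change.
accounts = {key: 100 for key in range(5)}
EXPECTED_TOTAL = sum(accounts.values())

def transfer(src, dst, amount):
    """Move money between two accounts; the sum of all balances is invariant."""
    if accounts[src] >= amount:
        accounts[src] -= amount
        accounts[dst] += amount

for _ in range(1000):
    a, b = random.sample(list(accounts), 2)
    transfer(a, b, random.randint(1, 50))
    # In a real database test, a snapshot isolation violation (read skew)
    # would surface here as a differing total read mid-transfer.
    assert sum(accounts.values()) == EXPECTED_TOTAL
```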


OpenAI Five loses against humans in Dota 2 at The International 2018

Amey Varangaonkar
27 Aug 2018
3 min read
Looks like OpenAI's intelligent game-playing bots need to get a little more street smart before they can beat the world's best. In a promotional side event at The International, the annual Dota 2 tournament, OpenAI Five was beaten by teams of top human professional players in the first two games of a best-of-three contest. Both games were intense and lasted approximately an hour, but the human teams emerged victorious quite comfortably.

OpenAI Five is a team of five artificially intelligent bots developed by OpenAI, a research institute co-founded by Tesla CEO Elon Musk to develop and research human-level artificial intelligence. These bots are trained specifically to play Dota 2 against top human professionals. While OpenAI Five racked up more kills than the human teams paiN Gaming and Big God, it lacked a cohesive strategy and wasted many opportunities to gather and use in-game resources efficiently, which is often the difference between a win and a loss.

The loss highlights that while the bots are on the right track, they need to improve how they adjust to their surroundings and make tactical decisions on the go. Mike Cook, a researcher at the University of Falmouth, UK, agrees; his criticism is that the bots lacked macro-level decision-making while having their own moments of magic in the game.

https://twitter.com/mtrc/status/1032430538039148544

Greg Brockman, CTO and co-founder of OpenAI, meanwhile, was not worried about the loss, noting that it is defeats that will make OpenAI Five better and more efficient. In his view, the AI was designed to learn and adapt from experience before being able to beat the human players. According to Greg, OpenAI Five is very much still a work in progress.

https://twitter.com/gdb/status/1032830230103244800

The researchers at OpenAI are hopeful that OpenAI Five will improve from this valuable learning experience and put up a much tougher fight in the next edition of the tournament, since there won't be a third game this year. As things stand, though, it's pretty clear that the human players aren't going to be replaced by AI bots anytime soon.

See Also:
AI beats human again – this time in a team-based strategy game
Build your first Reinforcement learning agent in Keras
A new Stanford artificial intelligence camera uses a hybrid optical-electronic CNN for rapid decision making


DoWhy: Microsoft's new Python library for causal inference

Natasha Mathur
24 Aug 2018
3 min read
Earlier this week, Microsoft came out with DoWhy, a library for promoting the widespread use of causal inference. Causal inference refers to the process of drawing a conclusion about a causal connection based on the conditions under which an effect occurs; simply put, it attempts to find, or guess, why something happened.

DoWhy is a Python library aimed at sparking causal thinking and analysis. It provides a unified interface for causal inference methods, along with automatic testing of multiple assumptions, making inference accessible to non-experts.

According to Microsoft, "Our motivation for creating DoWhy comes from our experiences in causal inference studies -- ranging from estimating the impact of a recommender system to predicting likely outcomes given a life event -- we found ourselves repeating the common steps of finding the right identification strategy, devising the most suitable estimator, and conducting robustness checks, all from scratch".

DoWhy highlights the critical assumptions underlying a causal inference analysis. It is designed around four major principles (a short usage sketch follows below):

- Model a causal inference problem using assumptions.
- Identify an expression for the causal effect (the "causal estimand").
- Estimate the expression using statistical methods.
- Verify the validity of the estimate.

How does DoWhy work?

First, DoWhy builds an underlying causal graphical model for every problem, making each causal assumption explicit. The graph does not have to be complete: you can provide a partial graph representing prior knowledge about the variables, and DoWhy automatically treats the remaining variables as potential confounders.

Secondly, DoWhy distinguishes between identification and estimation. Identification of a causal effect involves making assumptions about the data-generating process and deriving counterfactual expressions to specify a target estimand. DoWhy uses the Bayesian graphical model framework to represent assumptions formally, so users can specify what they know, and what they don't know, about the data-generating process.

Thirdly, for estimation, DoWhy provides methods based on the potential-outcomes framework, including matching, stratification, and instrumental variables.

Lastly, there are robustness tests and sensitivity checks for verifying the reliability of an obtained estimate; you can test how the estimate changes with varying assumptions. The library is also capable of automatically checking the validity of an obtained estimate based on the assumptions in the graphical model.

DoWhy supports Python 3+ and requires packages such as numpy, scipy, scikit-learn, pandas, pygraphviz (for plotting causal graphs), networkx (for analyzing causal graphs), matplotlib (for general plotting), and sympy (for rendering symbolic expressions).

Microsoft plans to add more features to DoWhy, including improved estimation support, sensitivity methods, and interoperability with available estimation software. For more information, check out the official DoWhy documentation.

Say hello to FASTER: a new key-value store for large state management by Microsoft
NIPS 2017 Special: A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
Microsoft launches a free version of its Teams app to take Slack head on
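Here is a hedged usage sketch of DoWhy's four-step flow (model, identify, estimate, refute). The toy dataset and variable names are hypothetical; consult the official documentation for the exact API of your DoWhy version.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Toy data: treatment t influences outcome y, confounded by w.
n = 1000
w = np.random.normal(size=n)
t = (w + np.random.normal(size=n) > 0).astype(int)
y = 2 * t + w + np.random.normal(size=n)
df = pd.DataFrame({"t": t, "y": y, "w": w})

# Step 1: model the problem, making the confounder explicit.
model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["w"])

# Step 2: identify the causal estimand from the graph.
identified = model.identify_effect()

# Step 3: estimate it with a statistical method.
estimate = model.estimate_effect(
    identified, method_name="backdoor.linear_regression")
print(estimate.value)  # should be close to the true effect of 2

# Step 4: check robustness of the estimate.
refutation = model.refute_estimate(
    identified, estimate, method_name="random_common_cause")
print(refutation)
```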

Say hello to IBM RXN, a free AI tool in the IBM Cloud for predicting chemical reactions

Natasha Mathur
24 Aug 2018
3 min read
Earlier this week, at the American Chemical Society meeting in Boston, IBM launched an AI tool called IBM RXN, available free in the IBM Cloud, that predicts chemical reactions in just seconds. IBM RXN is an advanced AI model intended for use in daily research activities and experiments.

IBM Research presented a web-based app last year at the NIPS 2017 conference that is capable of treating organic chemistry like a language: it applies state-of-the-art neural machine translation methods, using sequence-to-sequence (seq2seq) models, to convert starting materials into products. IBM RXN for Chemistry uses a system known as the simplified molecular-input line-entry system, or SMILES, which represents a molecule as a sequence of characters (see the short illustration below). The model was trained on a combination of reaction datasets, equivalent to a total of 2 million reactions.

IBM RXN comprises features such as the Ketcher editor, pre-configured libraries, and a challenge mode.

Ketcher is a web-based chemical structure editor designed for chemists, lab scientists, and technicians. It supports selecting, modifying, and erasing connected and unconnected atoms and bonds using a selection tool or the shift key. A cleanup tool checks bond lengths, angles, and the spatial arrangement of atoms, and its advanced features can also check stereochemistry and structure layout. It is a simple, data-driven tool, trained without querying a database or any additional external information. Additionally, users can build projects and share them with friends or colleagues.

Pre-configured libraries of molecules let you add reactants and reagents to your Ketcher board in just a few clicks, and provide access to the most common molecules in organic chemistry. You can also upload molecules to customize libraries, and enhance them with your own reaction outcomes or with molecules drawn on the Ketcher board.

Finally, the challenge mode puts your organic chemistry knowledge to the test and helps with preparation for organic chemistry class exams.

IBM RXN is a completely free tool, available in the IBM Cloud. For more information, check out the official IBM blog post.

IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
Four IBM facial recognition patents in 2018, we found intriguing
IBM unveils world's fastest supercomputer with AI capabilities, Summit
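To make the SMILES idea concrete, here is a brief illustration using the open-source RDKit library (not IBM RXN itself): a molecule written as a short character string can be parsed back into a structured object.

```python
from rdkit import Chem

# Ethanol written as a SMILES string: two carbons and a hydroxyl group.
mol = Chem.MolFromSmiles("CCO")

print(mol.GetNumAtoms())      # 3 heavy atoms: C, C, O
print(Chem.MolToSmiles(mol))  # the canonical SMILES form
```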


15 million jobs in Britain at stake as artificial intelligence robots are set to replace humans in the workforce

Natasha Mathur
23 Aug 2018
3 min read
Earlier this week, the Bank of England's chief economist, Andy Haldane, warned that the UK needs a skills revolution, as up to 15 million jobs in Britain are at stake. This is due to a "third machine age" in which artificial intelligence is making outdated a huge number of jobs that were previously the preserve of humans.

Haldane says this potential "Fourth Industrial Revolution" could cause disruption on a "much greater scale" than the damage experienced during the first three industrial revolutions. The first three industrial revolutions were mainly about machines replacing humans doing manual tasks, but the fourth will be different. As Haldane told BBC Radio 4's Today programme, "the 20th-century machines have substituted not just for manual human tasks, but cognitive ones too -- human skills machines could reproduce, at lower cost, has both widened and deepened". With robots becoming more intelligent, this revolution will hollow out jobs to a deeper degree than in the past.

The Bank of England classifies jobs into three categories: those with a high (greater than 66%), medium (33-66%), or low (less than 33%) chance of automation. Administrative, clerical, and production jobs are at the highest risk of being replaced by robots, whereas jobs focused on human interaction, face-to-face conversation, and negotiation are less likely to suffer.

Probability of automation by occupation

This "hollowing out" poses a risk not only to low-paid jobs but also to mid-level jobs. Meanwhile, the UK's Artificial Intelligence Council Chair, Tabitha Goldstaub, mentioned that the "challenge will be ensuring that people are prepared for the cultural and economic shifts", with a focus on creating "the new jobs of the future" in order to avoid mass replacement by robots. Haldane echoed Goldstaub's sentiments, telling the BBC that "we will need even greater numbers of new jobs to be created in the future if we are not to suffer this longer-term feature called technological unemployment".

Every cloud has a silver lining

Although the automation of these tasks could lead to mass unemployment, Goldstaub is positive, saying "there are great opportunities ahead as well as significant challenges", the challenge being bracing the UK workforce for the coming change. The silver lining, according to Goldstaub, is that "there is a hopeful view -- that a lot of these jobs (existing) are boring, mundane, unsafe, drudgery - there could be -- liberation from -- these jobs and a move towards a brighter world."

OpenAI builds reinforcement learning based system giving robots human like dexterity
OpenAI Five bots beat a team of former pros at Dota 2
What if robots get you a job! Enter Helena, the first artificial intelligence recruiter