Artificial Intelligence By Example

Develop machine intelligence from scratch using real artificial intelligence use cases

Product type: Paperback
Published: May 2018
Publisher: Packt
ISBN-13: 9781788990547
Length: 490 pages
Edition: 1st Edition
Author: Denis Rothman
Table of Contents (24)

Title Page
Dedication
Packt Upsell
Contributors
Preface
1. Become an Adaptive Thinker (Free Chapter)
2. Think like a Machine
3. Apply Machine Thinking to a Human Problem
4. Become an Unconventional Innovator
5. Manage the Power of Machine Learning and Deep Learning
6. Don't Get Lost in Techniques – Focus on Optimizing Your Solutions
7. When and How to Use Artificial Intelligence
8. Revolutions Designed for Some Corporations and Disruptive Innovations for Small to Large Companies
9. Getting Your Neurons to Work
10. Applying Biomimicking to Artificial Intelligence
11. Conceptual Representation Learning
12. Automated Planning and Scheduling
13. AI and the Internet of Things (IoT)
14. Optimizing Blockchains with AI
15. Cognitive NLP Chatbots
16. Improve the Emotional Intelligence Deficiencies of Chatbots
17. Quantum Computers That Think
Answers to the Questions
Index

Chapter 1 – Become an Adaptive Thinker


1. Is reinforcement learning memoryless? (Yes | No)

The answer is yes. Reinforcement learning is memoryless. The agent calculates the next state without looking into the past. This is significantly different from humans, who rely heavily on memory. A CPU-based reinforcement learning system finds solutions without experience. Human intelligence merely proves that intelligence can solve a problem, no more and no less. An adaptive thinker can then imagine new forms of machine intelligence.
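
As a minimal sketch of what memoryless means in practice, the following Python fragment (with a made-up Q-table and state numbering, not the book's code) picks the next action from the current state alone; no history is passed in or consulted.

```python
import numpy as np

# Hypothetical 3-state, 3-action Q-table (rows = states, columns = actions);
# the values are invented purely for illustration.
Q = np.array([[0.0, 2.5, 1.0],
              [4.0, 0.0, 0.5],
              [1.5, 3.0, 0.0]])

def next_action(current_state):
    """Choose the next action from the current state alone.

    The Markov property in action: no record of past states is
    passed in or used anywhere in the decision.
    """
    return int(np.argmax(Q[current_state]))

print(next_action(1))  # prints 0: the best action from state 1, chosen with no memory of how we got there
```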

2. Does reinforcement learning use stochastic (random) functions? (Yes | No)

The answer is yes. In the Markov Decision Process model used in this chapter, the choices are stochastic. In just two questions, you can see that the Bellman equation is memoryless and makes random decisions. No human reasons like that. Being an adaptive thinker is a leap of faith: you have to leave who you were behind and begin to think in terms of equations.
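
A hedged sketch of that stochastic side: the walk below draws each next state at random from the states reachable from the current one. The transition graph is an assumption invented for the example, not the book's own.

```python
import random

# Hypothetical transition graph: which states can be reached from each state.
reachable = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}

def explore(state, steps=5):
    """Random walk through the MDP: each next state is drawn at random."""
    path = [state]
    for _ in range(steps):
        state = random.choice(reachable[state])
        path.append(state)
    return path

print(explore(0))  # a different random path of states on each run
```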

3. Is the Markov Decision Process based on a rule base? (Yes | No)

The answer is no. Human rule-based experience is of no use in this process. Furthermore, the Markov Decision Process provides efficient alternatives to long consulting sessions with future users who cannot clearly express their problem.

4. Is the Q function based on the Markov Decision Process? (Yes | No)

The answer is yes. The use of the expression "Q" appeared around the time the Bellman equation, based on the Markov Decision Process, came into fashion. It is trendier to say you are using a Q function than to speak about Bellman, who put all of this together in 1957. The truth is that Andrey Markov was Russian and applied this method in 1913, using a dataset of 20,000 letters to predict the future use of letters in a novel. He then extended that to a dataset of 100,000 letters. This means that the theory was there 100 years ago. Q fits our new world of impersonal and powerful CPUs.
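
For reference, the Q function this answer refers to is usually written in the simplified Bellman form sketched below. The discount factor of 0.8 is an illustrative assumption, not a value prescribed by the book.

```python
import numpy as np

gamma = 0.8  # illustrative discount factor (assumption)

def q_update(Q, R, state, action, next_state):
    """One simplified Bellman update:
    Q(s, a) = R(s, a) + gamma * max over a' of Q(s', a')
    Only the immediate reward and the next state's best Q-value matter.
    """
    Q[state, action] = R[state, action] + gamma * Q[next_state].max()
    return Q
```

In this simplified form, training amounts to little more than calling such an update repeatedly over randomly chosen states until the Q-values stabilize.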

5. Is mathematics essential to artificial intelligence? (Yes | No)

The answer is yes. If you master the basics of linear algebra and probability, you will be on top of all the technology that is coming. It is worth spending a few months on the subject in the evening or taking a MOOC. Otherwise, you will depend on others to explain things to you.

6. Can the Bellman-MDP process in this chapter apply to many problems? (Yes | No)

The answer is yes. You can use this for robotics, market analysis, IoT, linguistics, and scores of other problems.

7. Is it impossible for a machine learning program to create another program by itself? (Yes | No)

The answer is no. It is not impossible. It has already been done by Google with AutoML. Do not be surprised. Now that you have become an adaptive thinker and know that these systems rely on equations, not humans, you can easily understand that mathematical systems are not that difficult to reproduce.

8. Is a consultant required to enter business rules in a reinforcement learning program? (Yes | No)

The answer is no. It is only an option. Reinforcement learning in the MDP process is memoryless and random. Consultants are there to manage, explain, and train in these projects.

9. Is reinforcement learning supervised or unsupervised? (Supervised | Unsupervised)

The answer is unsupervised. The whole point is to learn from unlabeled data. If the data is labeled, we enter the world of supervised learning, which means searching for labeled patterns and learning them. At this point, you can easily see that you are at sea in an adventure: a memoryless, random, and unlabeled world for you to discover.

10. Can Q Learning run without a reward matrix? (Yes | No)

The answer is no. A smart developer could always find a way around this, of course, but the system requires a starting point. You will see in the second chapter that it is quite a task to find the right reward matrix in real-life projects.
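
A hedged sketch of why the reward matrix is that starting point: Q begins at zero, so every value it learns is propagated from R; if R were all zeros, the loop below would learn nothing. The matrix, its size, and the training settings are assumptions made up for illustration only.

```python
import numpy as np

# Illustrative 4-state reward matrix: -1 marks impossible moves, 100 the goal.
R = np.array([[ -1,   0,  -1, 100],
              [  0,  -1,   0,  -1],
              [ -1,   0,  -1, 100],
              [  0,  -1,   0,  -1]], dtype=float)

Q = np.zeros_like(R)   # no prior knowledge: everything Q learns comes from R
gamma = 0.8            # assumed discount factor

for _ in range(1000):
    s = np.random.randint(R.shape[0])        # random current state
    actions = np.where(R[s] >= 0)[0]         # moves allowed by R
    a = int(np.random.choice(actions))       # random exploration
    Q[s, a] = R[s, a] + gamma * Q[a].max()   # Bellman update; action a leads to state a in this toy

print(Q.round(1))  # learned Q-values; every nonzero entry traces back to R
```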
