Mastering Machine Learning with scikit-learn

You're reading from Mastering Machine Learning with scikit-learn: Apply effective learning algorithms to real-world problems using scikit-learn

Product type: Paperback
Published: Jul 2017
Publisher: Packt
ISBN-13: 9781788299879
Length: 254 pages
Edition: 2nd Edition
Languages: English

Author: Gavin Hackeling
Table of Contents (22)

Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
1. The Fundamentals of Machine Learning
2. Simple Linear Regression
3. Classification and Regression with k-Nearest Neighbors
4. Feature Extraction
5. From Simple Linear Regression to Multiple Linear Regression
6. From Linear Regression to Logistic Regression
7. Naive Bayes
8. Nonlinear Classification and Regression with Decision Trees
9. From Decision Trees to Random Forests and Other Ensemble Methods
10. The Perceptron
11. From the Perceptron to Support Vector Machines
12. From the Perceptron to Artificial Neural Networks
13. K-means
14. Dimensionality Reduction with Principal Component Analysis
Index

Gradient descent


In the examples in this chapter, we analytically solved for the values of the model's parameters that minimize the cost function with the following equation:

$\beta = (X^T X)^{-1} X^T y$
Recall that X is the matrix of the features for each training example. The matrix product $X^T X$ is a square matrix with dimensions n by n, where n is the number of features. The computational complexity of inverting this square matrix is nearly cubic in the number of features. While the number of features has been small in this chapter's examples, this inversion can be prohibitively costly for problems with tens of thousands of explanatory variables, which we will encounter in the following chapters. Furthermore, it is impossible to invert $X^T X$ if its determinant is zero. In this section, we will discuss another method for efficiently estimating the optimal values of the model's parameters, called gradient descent. Note that our definition of goodness-of-fit has not changed; we will still use gradient descent...
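To make the closed-form solution concrete, here is a minimal sketch of the normal equation in NumPy. The data values are illustrative only, not taken from the chapter's examples, and np.linalg.lstsq is mentioned simply as the standard robust alternative to explicit inversion:

    import numpy as np

    # Toy design matrix: a column of ones for the intercept plus one feature.
    # These values are illustrative only.
    X = np.array([[1.0,  6.0],
                  [1.0,  8.0],
                  [1.0, 10.0],
                  [1.0, 14.0],
                  [1.0, 18.0]])
    y = np.array([7.0, 9.0, 13.0, 17.5, 18.0])

    # Normal equation: beta = (X^T X)^(-1) X^T y.
    # np.linalg.inv raises LinAlgError when X^T X is singular (determinant
    # zero); np.linalg.lstsq solves the same problem without inverting.
    beta = np.linalg.inv(X.T @ X) @ X.T @ y
    print(beta)  # [intercept, coefficient]

Because the inversion costs roughly cubic time in the number of features, this approach becomes impractical as n grows, which is what motivates the iterative method sketched next.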
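And here is a minimal sketch of batch gradient descent for the same linear model, assuming a mean-squared-error cost. The function name gradient_descent and the hyperparameters alpha and n_iterations are illustrative choices, not an implementation from the book; scikit-learn's SGDRegressor provides a stochastic variant of the same idea:

    import numpy as np

    def gradient_descent(X, y, alpha=0.1, n_iterations=2000):
        """Minimize J(beta) = 1/(2m) * sum((X @ beta - y)**2) iteratively."""
        m = len(y)
        beta = np.zeros(X.shape[1])
        for _ in range(n_iterations):
            residuals = X @ beta - y          # prediction errors
            gradient = (X.T @ residuals) / m  # dJ/dbeta
            beta -= alpha * gradient          # step against the gradient
        return beta

    # Standardizing the feature keeps a single learning rate well behaved
    # for both parameters (illustrative data, as above).
    x = np.array([6.0, 8.0, 10.0, 14.0, 18.0])
    x_std = (x - x.mean()) / x.std()
    X = np.column_stack([np.ones_like(x_std), x_std])
    y = np.array([7.0, 9.0, 13.0, 17.5, 18.0])
    print(gradient_descent(X, y))

Each iteration costs only O(mn) operations, so gradient descent scales to the high-dimensional problems described above, at the price of having to tune the learning rate.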
