Mastering Predictive Analytics with R, Second Edition
Machine learning techniques for advanced models

Product type: Paperback
Published: Aug 2017
Publisher: Packt
ISBN-13: 9781787121393
Length: 448 pages
Edition: 2nd Edition
Authors (2): James D. Miller, Rui Miguel Forte
Table of Contents (22)

Mastering Predictive Analytics with R Second Edition
Credits
About the Authors
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
1. Gearing Up for Predictive Modeling
2. Tidying Data and Measuring Performance
3. Linear Regression
4. Generalized Linear Models
5. Neural Networks
6. Support Vector Machines
7. Tree-Based Methods
8. Dimensionality Reduction
9. Ensemble Methods
10. Probabilistic Graphical Models
11. Topic Modeling
12. Recommendation Systems
13. Scaling Up
14. Deep Learning
Index

Regularization


Variable selection is an important process, as it tries to make models simpler to interpret, easier to train, and free of spurious associations by eliminating variables unrelated to the output. This is one possible approach to dealing with the problem of overfitting. In general, we don't expect a model to fit our training data perfectly; in fact, fitting the training data too well often hurts a predictive model's accuracy on unseen data, which is exactly what we mean by overfitting. In this section, we'll study an alternative to reducing the number of variables as a way to deal with overfitting. Regularization is essentially the process of introducing an intentional bias or constraint into our training procedure that prevents our coefficients from taking large values. Because this process tends to shrink the coefficients, the methods we'll look at are also known as shrinkage methods.
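As a quick standalone illustration of the overfitting problem described above (using simulated data and base R's lm with polynomial features, not an example from the book itself), we can watch training error fall while test error rises as a model becomes more flexible:

```r
# A small sketch of overfitting: a more flexible model fits the training
# data better and better, while its error on unseen data can get worse.
set.seed(1)
x <- runif(50, -1, 1)
y <- sin(2 * x) + rnorm(50, sd = 0.3)            # training data
x_new <- runif(1000, -1, 1)
y_new <- sin(2 * x_new) + rnorm(1000, sd = 0.3)  # unseen (test) data

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

for (degree in c(1, 3, 15)) {
  fit <- lm(y ~ poly(x, degree))
  train_err <- rmse(y, fitted(fit))
  test_err  <- rmse(y_new, predict(fit, data.frame(x = x_new)))
  cat(sprintf("degree %2d: train RMSE %.3f, test RMSE %.3f\n",
              degree, train_err, test_err))
}
```

With a degree-15 polynomial, the training RMSE is lowest of the three fits, yet the test RMSE is typically worse than for the simpler models; regularization is one way to rein in this behavior without discarding variables outright.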

Ridge regression

When the number of parameters is very...
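The chapter text is truncated at this point on this page. As a rough standalone sketch (not necessarily the book's own approach or data), ridge regression can be fit in R with the glmnet package by setting alpha = 0, which selects the L2 (ridge) penalty:

```r
# A minimal sketch of ridge regression via glmnet on simulated data.
library(glmnet)

set.seed(123)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, rep(0, p - 2))       # only two truly nonzero coefficients
y <- as.numeric(X %*% beta + rnorm(n))

# alpha = 0 selects the ridge (L2) penalty; glmnet fits a whole path of
# lambda values in one call.
ridge_fit <- glmnet(X, y, alpha = 0)

# A larger lambda shrinks the coefficient vector further toward zero.
small_pen <- coef(ridge_fit, s = 0.01)[-1]   # drop the intercept
large_pen <- coef(ridge_fit, s = 10)[-1]
c(sum(small_pen^2), sum(large_pen^2))
```

Comparing the squared L2 norms of the two coefficient vectors shows the shrinkage effect directly: the heavily penalized fit has a much smaller norm, at the cost of some added bias.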
