Mastering Machine Learning Algorithms

You're reading from Mastering Machine Learning Algorithms: Expert techniques to implement popular machine learning algorithms and fine-tune your models

Product type: Paperback
Published: May 2018
Publisher: Packt
ISBN-13: 9781788621113
Length: 576 pages
Edition: 1st
Table of Contents (22 chapters)

Title Page
Dedication
Packt Upsell
Contributors
Preface
1. Machine Learning Model Fundamentals
2. Introduction to Semi-Supervised Learning
3. Graph-Based Semi-Supervised Learning
4. Bayesian Networks and Hidden Markov Models
5. EM Algorithm and Applications
6. Hebbian Learning and Self-Organizing Maps
7. Clustering Algorithms
8. Ensemble Learning
9. Neural Networks for Machine Learning
10. Advanced Neural Models
11. Autoencoders
12. Generative Adversarial Networks
13. Deep Belief Networks
14. Introduction to Reinforcement Learning
15. Advanced Policy Estimation Algorithms
Other Books You May Enjoy
Index

Self-organizing maps


Self-organizing maps (SOMs) were proposed by Willshaw and von der Malsburg (Willshaw D. J., von der Malsburg C., How patterned neural connections can be set up by self-organization, Proceedings of the Royal Society of London, B/194, N. 1117) to model different neurobiological phenomena observed in animals. In particular, they discovered that some areas of the brain develop structures made up of distinct regions, each with a high sensitivity to a specific input pattern. The process behind such behavior is quite different from what we have discussed up until now, because it's based on competition among neural units, governed by a principle called winner-takes-all. During the training period, all the units are excited by the same signal, but only one produces the highest response. That unit automatically becomes a candidate to act as the receptive basin for that specific pattern. The particular model we are going to present was introduced by Kohonen (in the paper...
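To make the winner-takes-all idea concrete, here is a minimal NumPy sketch of a single competitive step on a Kohonen-style map: every unit receives the same input, the unit whose weight vector is closest wins, and the winner (together with its grid neighbors, weighted by a Gaussian neighborhood) is pulled toward the input. The map size, weight matrix W, learning rate eta, and neighborhood radius sigma are illustrative assumptions, not code taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

grid_h, grid_w, n_features = 10, 10, 3          # 10x10 map of units, 3-dimensional inputs
W = rng.random((grid_h, grid_w, n_features))    # one weight vector per unit

def best_matching_unit(x, W):
    """Return the grid coordinates of the unit whose weights are closest to x (the winner)."""
    distances = np.linalg.norm(W - x, axis=2)   # Euclidean distance of every unit to x
    return np.unravel_index(np.argmin(distances), distances.shape)

def winner_takes_all_update(x, W, eta=0.5, sigma=1.5):
    """One competitive step: the winner and its grid neighbors move toward x."""
    bmu = best_matching_unit(x, W)
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist_sq = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    # Gaussian neighborhood: the winner gets the largest correction,
    # units farther away on the grid are corrected less and less.
    h = np.exp(-grid_dist_sq / (2.0 * sigma ** 2))
    W += eta * h[..., None] * (x - W)
    return bmu

x = rng.random(n_features)                      # a single input pattern
print("Winning unit:", winner_takes_all_update(x, W))
```

In a full training loop, eta and sigma would typically be decayed over time so that the map first organizes globally and then fine-tunes each unit's receptive region.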
