Scala for Machine Learning
Leverage Scala and Machine Learning to construct and study systems that can learn from data

Product type: Paperback
Published: December 2014
Publisher: Packt
ISBN-13: 9781783558742
Length: 624 pages
Edition: 1st
Author: R. Nicolas
Table of Contents

Scala for Machine Learning
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Getting Started
2. Hello World!
3. Data Preprocessing
4. Unsupervised Learning
5. Naïve Bayes Classifiers
6. Regression and Regularization
7. Sequential Data Models
8. Kernel Models and Support Vector Machines
9. Artificial Neural Networks
10. Genetic Algorithms
11. Reinforcement Learning
12. Scalable Frameworks
Basic Concepts
Index

Performance consideration


The time complexity of decoding and evaluating the canonical forms of the hidden Markov model with N states and T observations is O(N²T). Training the HMM with the Baum-Welch algorithm is O(N²TM), where M is the number of iterations.
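As a rough back-of-envelope illustration (the values of N, T, and M below are made up for this sketch, not taken from the book), a few lines of Scala make the gap between decoding and training cost concrete:

  // Illustrative operation counts for the complexity bounds above.
  // N = number of states, T = observations, M = Baum-Welch iterations.
  val (n, t, m) = (10, 1000, 50)

  val decodeOps = n.toLong * n * t      // O(N²T): Viterbi / forward-backward
  val trainOps  = decodeOps * m         // O(N²TM): Baum-Welch

  println(s"Decoding ~ $decodeOps operations")   // ~ 100,000
  println(s"Training ~ $trainOps operations")    // ~ 5,000,000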

There are several options to improve the performance of the HMM:

  • Avoid unnecessary multiplications by 0 in the emission probabilities matrix by either using sparse matrices or tracking the null entries (see the sketch after this list).

  • Train the HMM on the most relevant subset of the training data. This technique can be particularly effective when tagging words or processing a bag of words in natural language processing.
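The following Scala sketch illustrates the first option; it is not the book's implementation, and the names forwardStep and emissionByObs are hypothetical. The emission matrix B is stored sparsely: for each observation symbol, only the states with a non-zero emission probability are listed, so a forward-pass step never multiplies by 0:

  // A single forward-pass step that skips null emission entries.
  // alpha: forward probabilities at time t-1 (size N)
  // transition: the N x N state-transition matrix A
  // emissionByObs: for each observation symbol, the (state, probability)
  //   pairs with non-zero emission probability -- a sparse view of B
  def forwardStep(
      alpha: Array[Double],
      transition: Array[Array[Double]],
      emissionByObs: Map[Int, Seq[(Int, Double)]],
      obs: Int): Array[Double] = {
    val next = Array.fill(alpha.length)(0.0)
    // Only states j with B(j, obs) > 0 are updated; all others stay 0.
    for ((j, b) <- emissionByObs.getOrElse(obs, Seq.empty)) {
      var sum = 0.0
      var i = 0
      while (i < alpha.length) { sum += alpha(i) * transition(i)(j); i += 1 }
      next(j) = sum * b
    }
    next
  }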

The training of the linear-chain conditional random field is implemented with the same dynamic programming techniques as the HMM implementation (Viterbi, forward-backward passes, and so on). Its time complexity for training T data sequences with N labels (or expected outcomes) and M weights/features λ is O(MTN²).
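To make the O(MTN²) bound visible, here is a minimal sketch of the linear-chain forward recursion that dominates it, working in log-space to compute the log-partition function. The names logForward and featureScore are illustrative assumptions, not the book's API: each of the T positions scores all N² label pairs, and each score is a weighted sum over the M features:

  // Numerically stable log-sum-exp, the reduction used in log-space passes.
  def logSumExp(xs: Seq[Double]): Double = {
    val mx = xs.max
    mx + math.log(xs.map(x => math.exp(x - mx)).sum)
  }

  // Forward recursion of a linear-chain CRF: T positions x N² label pairs
  // x M feature weights => O(MTN²) per pass.
  // featureScore(pos, prevLabel, label) returns the M feature values.
  def logForward(
      length: Int,                                   // T
      nLabels: Int,                                  // N
      weights: Array[Double],                        // the M weights λ
      featureScore: (Int, Int, Int) => Array[Double]): Double = {
    var alpha = Array.fill(nLabels)(0.0)             // log α at position 0
    for (pos <- 1 until length) {
      alpha = Array.tabulate(nLabels) { y =>
        logSumExp((0 until nLabels).map { yPrev =>
          val feats = featureScore(pos, yPrev, y)
          var score = 0.0
          var k = 0
          while (k < weights.length) { score += weights(k) * feats(k); k += 1 }
          alpha(yPrev) + score
        })
      }
    }
    logSumExp(alpha)                                 // log of the partition function Z
  }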

The time complexity of the training...
