Effective Amazon Machine Learning

You're reading from Effective Amazon Machine Learning: Expert web services for machine learning on cloud

Product type: Paperback
Published in: Apr 2017
Publisher: Packt
ISBN-13: 9781785883231
Length: 306 pages
Edition: 1st Edition

Author: Alexis Perrier
Table of Contents (17 chapters)

Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Dedication
Preface
1. Introduction to Machine Learning and Predictive Analytics
2. Machine Learning Definitions and Concepts
3. Overview of an Amazon Machine Learning Workflow
4. Loading and Preparing the Dataset
5. Model Creation
6. Predictions and Performances
7. Command Line and SDK
8. Creating Datasources from Redshift
9. Building a Streaming Data Analysis Pipeline

Chapter 7. Command Line and SDK

Using the AWS web interface to manage and run your projects is time-consuming. In this chapter, we move away from the web interface and start running our projects via the command line with the AWS Command Line Interface (AWS CLI) and the Python SDK with the Boto3 library.
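
Throughout the chapter we will need an S3 client for moving data around and an Amazon ML client for everything else. A minimal setup sketch is shown below; it assumes boto3 is installed and that your AWS credentials have already been configured (for example with aws configure).

```python
# Minimal setup sketch: the two Boto3 clients used for the rest of the chapter.
# Assumes boto3 is installed (pip install boto3) and AWS credentials are configured,
# for example via `aws configure`.
import boto3

s3 = boto3.client('s3')                # upload datasets and schemas to S3
ml = boto3.client('machinelearning')   # datasources, models, evaluations, predictions
```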

The first step will be to drive a whole project via the AWS CLI, uploading files to S3, creating datasources, models, evaluations, and predictions. As you will see, scripting will greatly facilitate using Amazon ML. We will use these new abilities to expand our Data Science powers by carrying out cross-validation and feature selection.
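
To make that flow concrete, here is a sketch of the same sequence using the Boto3 machinelearning client rather than the CLI. The bucket name, file names, and entity IDs are hypothetical placeholders, and a JSON schema describing the CSV is assumed to exist alongside the data.

```python
# Sketch of the end-to-end flow: upload data, then create a datasource, a model,
# and an evaluation. Bucket, file, and ID names are hypothetical placeholders.
import time
import boto3

s3 = boto3.client('s3')
ml = boto3.client('machinelearning')

bucket = 'my-aml-bucket'                                    # assumed existing bucket
s3.upload_file('training.csv', bucket, 'data/training.csv')
s3.upload_file('data.schema.json', bucket, 'data/data.schema.json')

ml.create_data_source_from_s3(
    DataSourceId='ds-training',
    DataSourceName='training datasource',
    DataSpec={
        'DataLocationS3': 's3://%s/data/training.csv' % bucket,
        'DataSchemaLocationS3': 's3://%s/data/data.schema.json' % bucket,
    },
    ComputeStatistics=True,        # statistics are required for training datasources
)

ml.create_ml_model(
    MLModelId='ml-demo',
    MLModelName='demo model',
    MLModelType='BINARY',          # or 'REGRESSION' / 'MULTICLASS'
    TrainingDataSourceId='ds-training',
)

ml.create_evaluation(
    EvaluationId='ev-demo',
    EvaluationName='demo evaluation',
    MLModelId='ml-demo',
    EvaluationDataSourceId='ds-validation',  # a second datasource created the same way
)

# The calls above are asynchronous; poll until the evaluation finishes before
# reading its metrics (for a binary model, the AUC).
while ml.get_evaluation(EvaluationId='ev-demo')['Status'] not in ('COMPLETED', 'FAILED'):
    time.sleep(30)
print(ml.get_evaluation(EvaluationId='ev-demo').get('PerformanceMetrics'))
```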

So far we have split our original dataset into three data chunks: training, validation, and testing. However, we have seen that model selection can depend strongly on how the data is split: shuffle the data, and a different model might emerge as the best one. Cross-validation is a technique that reduces this dependency by averaging the model performance on...
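
As a sketch of how such folds could be expressed with Amazon ML datasources (not necessarily the exact code used later in the chapter), each fold can be described by the DataRearrangement splitting parameter of create_data_source_from_s3: the evaluation datasource takes one slice of the data and the training datasource takes its complement. The S3 locations and IDs below are hypothetical placeholders.

```python
# Conceptual k-fold sketch: one training/evaluation datasource pair per fold, built
# with the DataRearrangement "splitting" parameter. Averaging the k evaluation
# scores yields a performance estimate that is far less sensitive to any single
# split. Bucket, schema, and ID names are hypothetical placeholders.
import json
import boto3

ml = boto3.client('machinelearning')

k = 5
data_spec_base = {
    'DataLocationS3': 's3://my-aml-bucket/data/training.csv',
    'DataSchemaLocationS3': 's3://my-aml-bucket/data/data.schema.json',
}

for fold in range(k):
    begin, end = fold * 100 // k, (fold + 1) * 100 // k

    # Evaluation slice: the rows between percentBegin and percentEnd.
    eval_split = json.dumps({'splitting': {'percentBegin': begin, 'percentEnd': end}})
    # Training slice: everything outside that range (the complement).
    train_split = json.dumps({'splitting': {'percentBegin': begin, 'percentEnd': end,
                                            'complement': True}})

    ml.create_data_source_from_s3(
        DataSourceId='ds-train-fold-%d' % fold,
        DataSourceName='training fold %d' % fold,
        DataSpec=dict(data_spec_base, DataRearrangement=train_split),
        ComputeStatistics=True,
    )
    ml.create_data_source_from_s3(
        DataSourceId='ds-eval-fold-%d' % fold,
        DataSourceName='evaluation fold %d' % fold,
        DataSpec=dict(data_spec_base, DataRearrangement=eval_split),
        ComputeStatistics=False,
    )
    # For each fold: train a model on the training datasource, evaluate it on the
    # evaluation datasource (create_ml_model / create_evaluation as above), then
    # average the k metrics to obtain the cross-validated score.
```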
