Cloud solutions – AWS
Handling large data volumes requires server resources in one form or another: either on-premise servers or a cloud platform. In this case, it was decided to outsource both the server resources and the algorithm development in order to train on a very large dataset.
Several online cloud services offer solutions for machine learning projects. In this chapter, the project is carried out with AWS SageMaker, with the data managed on AWS S3.
Preparing your baseline model
The dataset contains 3,000,000 records with two features each, a 3,000,000 × 2 matrix of 6,000,000 cells. Training on a dataset of this size requires mini-batches, which are defined through the hyperparameters described in the next section.
In the AWS interface, the size of a mini-batch is simply a number. To see what it does, let's apply a mini-batch to the Python program example from Chapter 6, Don't Get Lost in Techniques – Focus on Optimizing Your Solutions.
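Before turning to the AWS interface, the idea of a mini-batch can be sketched in plain Python. The following is a minimal illustration, not the book's program: the dataset is shrunk to a small random stand-in for the full 3,000,000 × 2 matrix, and the function name is illustrative.

```python
import numpy as np

# Stand-in for the full 3,000,000 x 2 dataset (shrunk so the sketch runs quickly).
rng = np.random.default_rng(seed=0)
dataset = rng.random((3_000, 2))

def make_mini_batch(data, batch_size):
    """Randomly sample batch_size rows of the dataset without replacement.

    Training then runs on this small sample instead of the full matrix,
    which is what the mini_batch_size hyperparameter controls in SageMaker.
    """
    indices = rng.choice(data.shape[0], size=batch_size, replace=False)
    return data[indices]

batch = make_mini_batch(dataset, batch_size=100)
print(batch.shape)  # (100, 2)
```

Each training step sees only one such batch, so memory use is bounded by the batch size rather than by the full dataset.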
Training the full sample training dataset
In Chapter 6, Don't Get Lost in Techniques – Focus on...