Distributed training on the AWS Deep Learning AMI 9.0
So far, we have seen how to perform training and inference on a single GPU. To make training faster still, in a parallel and distributed way, a machine or server with multiple GPUs is a viable option. An easy way to get one is to use Amazon EC2 GPU compute instances.
For example, P2 instances are well suited for distributed deep learning. Combined with the Deep Learning AMI, they come with the latest binaries of popular deep learning frameworks (MXNet, TensorFlow, Caffe, Caffe2, PyTorch, Keras, Chainer, Theano, and CNTK) pre-installed in separate virtual environments.
An even bigger advantage is that the instances come fully configured with NVIDIA CUDA and cuDNN. Interested readers can take a look at https://aws.amazon.com/ec2/instance-types/p2/. A short glimpse of the P2 instance configurations and pricing is as follows:

P2 instance details
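
Since the AMI comes fully configured, it is worth sanity-checking the GPU setup right after launching an instance. The following is a minimal sketch, assuming you activate the pre-installed PyTorch environment (PyTorch is one of the frameworks listed above); it simply reports the GPUs that the instance exposes:

import torch

# Confirm that CUDA is usable and list the visible GPUs.
# A multi-GPU P2 instance (for example, p2.8xlarge) should report
# several NVIDIA Tesla K80 devices.
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")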
For this project, I decided to use p2.8xlarge. You can create one too, but make sure that you have already submitted a limit increase request...
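
To actually benefit from the eight GPUs of a p2.8xlarge, the training itself must be parallelized across the devices. As an illustrative sketch only (this is not the project's actual code), and again assuming the pre-installed PyTorch environment, single-machine data parallelism replicates the model on every GPU and splits each batch among the replicas:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy classifier standing in for a real network.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Replicate the model on all visible GPUs; each batch is split
    # along dimension 0 and scattered to the replicas.
    model = nn.DataParallel(model)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch standing in for real data: 512 flattened 28x28 images.
inputs = torch.randn(512, 784, device=device)
targets = torch.randint(0, 10, (512,), device=device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")

Note that DataParallel gathers gradients back on a single default GPU, which can become a bottleneck; for training that spans multiple machines, DistributedDataParallel is the usual choice.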