GPU memory management
In some cases it is desirable for the process to allocate only a subset of the available memory, or to grow its memory usage only as it is needed. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory as is needed for runtime allocations: it starts out allocating very little memory, and as sessions get run and more GPU memory is needed, the GPU memory region used by the TensorFlow process is extended.
Note that we do not release memory, since that can lead to even worse memory fragmentation. To turn this option on, set it in the ConfigProto:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated.
For example, you can tell TensorFlow to allocate only a fixed fraction of the total memory of each GPU.
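A minimal sketch of this configuration; the value 0.4 is an illustrative choice (40% of each visible GPU's memory), not a recommendation:

import tensorflow as tf

# Cap this process at 40% of the total memory of each visible GPU.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)

Unlike allow_growth, this option bounds the GPU memory available to the process up front, which can be useful when several processes must share the same GPU.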