36

I'm using tensorflow-gpu 2.0.0rc0 and I want to choose whether it uses the GPU or the CPU.

Florin Andrei

3 Answers

71

I've seen some suggestions elsewhere, but they are old and do not apply very well to newer TF versions. What worked for me was this:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must run before tensorflow is imported

When that variable is set to -1, TF uses the CPU even when a CUDA GPU is available.
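A quick way to verify that it took effect is to list the visible GPUs after importing TensorFlow. A minimal sketch (assuming a recent TF 2.x where tf.config.list_physical_devices is available; on 2.0 it lives under tf.config.experimental):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

# With all CUDA devices hidden, this should print an empty list.
print(tf.config.list_physical_devices('GPU'))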

Florin Andrei
  • 2
    I actually find it more convenient to set the environment variable from outside the script. Sometimes I forget to do it in a script, and when I import something from that script it runs automatically and I have no GPU. – Djib2011 Sep 07 '19 at 22:39
  • 1
    At 2.1 (or possibly before) up to nightly, set that environment variable to an empty string to disable GPUs – Robert Lugg May 21 '20 at 23:09
  • Thank you so much for this! – ABIM Jul 25 '20 at 09:52
  • 2
    TensorFlow still uses the GPU even after adding this snippet. I spotted it by running the nvidia-smi command from the terminal. The corresponding Python runtime was still consuming graphics memory and the GPU fans turned ON when I executed my code. – hafiz031 Nov 20 '20 at 22:21
  • 1
    After a long fight with the "Failed to get device attribute 13 for device 0" error on a computer with a weak graphics card, this fixed the issue. Thank you. – Darqer Nov 30 '20 at 14:24
  • @hafiz031 do you have a better way to disable tf from using the gpu? – IntegrateThis Dec 10 '20 at 09:25
  • @IntegrateThis does it help?: https://stackoverflow.com/questions/45544603/tensorflow-how-do-you-monitor-gpu-performance-during-model-training-in-real-tim – hafiz031 Dec 10 '20 at 11:14
  • Correcting my previous comment: "..The corresponding Python runtime was still consuming graphics memory and the GPU fans turned ON when I executed my code" – alright! I recently found that this inconsistent behavior doesn't always show up. – hafiz031 Dec 10 '20 at 11:22
  • When I try this, my GPU use goes down a little but it uses no GPU memory. In small keras jobs my overall speed actually goes up (I assume GPU memory is not used). – user3660637 May 26 '21 at 14:28
  • @RobertLugg On tensorflow-gpu version 2.6.0 (Windows), setting it to an empty string still used the GPU, but setting it to -1 disabled them from being used. I don't know why it is inconsistent between versions. – Steven Magana-Zook Jan 12 '22 at 17:18
  • 4
    Make sure to run this before importing tensorflow. If you run this afterwards, it will use GPU even if this variable is set. – Leandro Gomide Jul 11 '22 at 14:48
16

For TF2:

import tensorflow as tf

try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    for device in visible_devices:
        assert device.device_type != 'GPU'
except (ValueError, RuntimeError):
    # Invalid device or cannot modify virtual devices once initialized.
    pass
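As a sanity check, you can look at where a newly created tensor is placed after hiding the GPUs; a minimal sketch:

import tensorflow as tf

# Hide all GPUs before any op runs, then check eager op placement.
tf.config.set_visible_devices([], 'GPU')

x = tf.constant([1.0, 2.0])
print(x.device)  # should end in device:CPU:0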
tttzof351
6

I find it easiest to set the variable outside the script; it always works.

export CUDA_VISIBLE_DEVICES=''

Run this on the command line before launching your Python script.
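To scope it to a single run instead of the whole shell session, you can also prefix the command; your_script.py below is just a placeholder for your own entry point:

CUDA_VISIBLE_DEVICES='' python your_script.py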

momo