Keras - Python Deep Learning Neural Network API
Deep Learning Course 2 of 4 - Level: Beginner
TensorFlow and Keras GPU Support - CUDA GPU Setup
GPU Support for TensorFlow and Keras - How to Run Code on the GPU
In this episode, we’ll discuss GPU support for TensorFlow and the integrated Keras API and how to get your code running with a GPU!
Keras Integration with TensorFlow Recap
Before jumping into GPU specifics, let’s elaborate a bit more on a point from a previous episode.
It’s important to understand that as of now, Keras has been completely integrated with TensorFlow. The standalone version of Keras is no longer being updated or maintained by the Keras team. So, when we talk about Keras now, we’re talking about it as an API integrated within TensorFlow, not a separate standalone library.
With that being said, because Keras integrates deeply with low-level TensorFlow functionality, we can actually use the high-level functionality of Keras to do many things without being required to make use of lower-level TensorFlow code.
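To illustrate the point, here is a minimal sketch: Keras is now imported through TensorFlow’s namespace rather than as a standalone package, and the high-level API alone is enough to define and compile a model (the tiny one-layer model below is just an illustrative example).

```python
import tensorflow as tf

# Keras is accessed through TensorFlow's namespace, not a separate package
from tensorflow import keras

# A tiny high-level Keras model - no low-level TensorFlow code required
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(units=1),
])
model.compile(optimizer='sgd', loss='mse')

print(model.count_params())  # 4 weights + 1 bias = 5
```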
Hopefully that provides a bit more clarity about the integration. Now let’s jump into the main topic of GPU support.
GPU Support for TensorFlow
TensorFlow code, including Keras, will transparently run on a single GPU with no explicit code configuration required.
TensorFlow GPU support is currently available for Ubuntu and Windows systems with CUDA-enabled cards.
In terms of how to get your TensorFlow code to run on the GPU, note that operations that are capable of running on a GPU now default to doing so. So, if TensorFlow detects both a CPU and a GPU, then GPU-capable code will run on the GPU by default.
If you don’t want this to occur for whatever reason, you can explicitly change the device you want your code to run on, but we’ll get into that later in this course when we’re actually running code.
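As a quick preview of what explicit device placement looks like (a minimal sketch, not the full treatment we’ll cover later in the course), TensorFlow lets you log where each operation runs and pin an operation to a particular device with tf.device:

```python
import tensorflow as tf

# Log which device each operation is placed on (GPU by default, if detected)
tf.debugging.set_log_device_placement(True)

# With no annotation, this runs on the GPU when one is available
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Explicitly pin an operation to the CPU instead
with tf.device('/CPU:0'):
    b = tf.reduce_sum(a)

print(b)  # scalar tensor holding 10.0
```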
For now, let’s discuss how to enable our systems to allow for TensorFlow code to run on the GPU.
The only hardware requirement is an NVIDIA GPU card with sufficient CUDA Compute Capability.
Check the TensorFlow website for currently supported versions.
Going forward, there are different instructions depending on if you’re running your code from a Windows environment or Linux environment. We’ll mostly go into depth on the Windows side, but first let’s touch on Linux.
To simplify installation and avoid library conflicts, TensorFlow recommends using a TensorFlow Docker image with GPU support, as this setup only requires the NVIDIA GPU drivers to be installed.
TensorFlow has a guide with all the corresponding steps to get this set up.
For Windows, the process is a bit more involved, so we’ll go through all of the steps involved now.
The first step is to have TensorFlow installed.
Last time, recall that we discussed the TensorFlow installation as being as simple as running the command pip install tensorflow, but note that we also discussed needing to check to ensure you meet the TensorFlow system requirements.
One of these requirements is having the appropriate version of the Microsoft Visual C++ redistributable installed. Without it, you will get the error below when you try to import TensorFlow, so be sure that you have this installed, as well as TensorFlow.
File "C:\Development\Python\Python37\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Install Nvidia Drivers
Now, we need to install the Nvidia drivers. Navigate to Nvidia’s website to begin the download.
You will need to know the specs of your GPU so that you can download the appropriate drivers. If you don’t know these specs, you can navigate to the Display Adapters in your Device Manager to get the info you need.
Once downloaded, then run through the installation wizard to install the drivers.
Install CUDA Toolkit
Now we need to install the CUDA Toolkit. Navigate to Nvidia’s website to choose the version you’d like to download.
Be sure to check the CUDA Toolkit version that TensorFlow currently supports. You can find that information on TensorFlow’s site.
After the download completes, begin the installation.
Note, if you do not have Microsoft Visual Studio installed on your machine, then during the installation, you may get this message:
No supported version of Visual Studio was found. Some components of CUDA Toolkit will not work properly. Please install Visual Studio first to get the full functionality
According to the system requirements for CUDA Toolkit, Visual Studio is a requirement.
If you get this message, then do not move forward with the next step of the install. Instead, navigate to Microsoft’s website to download and install the Community edition of Visual Studio.
Note, that you only need the base package. No additional workloads are required to be chosen during installation.
After installation completes, pick back up with installing the CUDA Toolkit, and you should no longer receive the message regarding the absence of Visual Studio.
Install cuDNN SDK
Now we need to install the cuDNN SDK. Navigate again to Nvidia’s website. To gain access to the download, you must first create a free account and go through a quick email verification.
Next, choose to download the version of cuDNN that corresponds to the TensorFlow-supported version of the CUDA Toolkit that you downloaded in the last step.
After the download completes, the installation process requires moving the downloaded files into the appropriate locations within the CUDA Toolkit installation path on disk, as well as verifying environment variables. The detailed steps are discussed here, as well as in the corresponding video.
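As a quick sanity check on the environment variable step, the short helper below (a hypothetical convenience, not part of the official instructions) inspects from Python whether any CUDA directory appears on your PATH:

```python
import os

# Split PATH into its individual entries and look for CUDA Toolkit directories
path_entries = os.environ.get('PATH', '').split(os.pathsep)
cuda_entries = [p for p in path_entries if 'cuda' in p.lower()]

if cuda_entries:
    print('CUDA directories found on PATH:')
    for entry in cuda_entries:
        print(' ', entry)
else:
    print('No CUDA directory found on PATH - revisit the cuDNN setup steps.')
```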
Verify that TensorFlow Detects a GPU
Open a Jupyter notebook or any IDE of your choice, and run the line of code below to test that TensorFlow has located a GPU on your machine.
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

> Num GPUs Available: 1
If the output is 1, then TensorFlow has successfully identified your GPU. If the output is 0, then it has not.
If you receive a 0, then check the console from which you started your Jupyter Notebook for any messages. If you receive the error below, verify your CUDA environment variable as discussed in the cuDNN installation steps, restart your machine, and try again.
Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.dll not found
Once TensorFlow has successfully detected your GPU, that is all it takes for your future TensorFlow code to run on the GPU by default!