Running fast.ai on WSL 2 using GPU

Estefanía Tenorio
Dec 23, 2020

Recently I started taking the Practical Deep Learning for Coders course on fast.ai. One of the first recommendations is to use cloud-based servers like Colab or Gradient for running your Jupyter notebooks. But what if you already have a computer with a decent GPU and you want to use it? The course doesn’t give details on the setup, so I took the liberty of documenting my journey.

I recently got a new laptop with an RTX 2070, so I did not want to miss the chance to stress-test it by running and training multiple models while taking the course.

Another thing: lately I’ve been impressed with the capabilities of WSL, since I frequently run Windows-only programs but also like having superuser control on Linux. So I decided to try to do the setup on WSL.

I ended up following 2–3 different tutorials to finally make it work, so this is a “Franken-tutorial” of how to get there, plus some steps I added.

Summary

Steps 1 to 8:

“Franken-tutorial” = steps compiled from multiple other tutorials. Each of these steps has a link to its original source.

Steps 9 to 14:

Steps I came up with to run fastai in a Docker container using the GPU inside WSL

Notes:

This tutorial makes use of Docker and VS Code; if you don’t have experience using these, it’s OK. I tried to add the necessary instructions so that anyone without much experience can follow them. I also added “Programmer tip” sections in case you have experience and wish to learn a little bit more.

The steps

1. Install the latest Windows Insider Dev Channel build

Join the Insiders program and get the Dev Channel build that enables GPU support in WSL.

https://insider.windows.com/getting-started/#install

2. Install the latest WSL CUDA driver from NVIDIA

GPU in Windows Subsystem for Linux (WSL) | NVIDIA Developer

Make sure the driver is installed properly. An easy check is to open Task Manager -> Performance and confirm that your NVIDIA card shows up there.
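Another quick check, which is my addition rather than part of the original sources: the NVIDIA driver ships with a command-line tool you can run from PowerShell or CMD on the Windows side (at the time of writing it does not work inside WSL itself):

nvidia-smi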

3. Enable and install WSL 2 on your machine

Follow the instructions here: Install Windows Subsystem for Linux (WSL) on Windows 10 | Microsoft Docs

4. Install and run a Linux distribution — I recommend Ubuntu 18.04

Go to the Microsoft Store and install Ubuntu.

5. Open Ubuntu and go through the initial setup

6. Install CUDA-related stuff inside WSL

$ sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub

$ sudo sh -c 'echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list'

$ sudo apt-get update
$ sudo apt-get install -y cuda-toolkit-11-0

After this step you can test your GPU inside WSL by compiling and running the BlackScholes sample:

$ cd /usr/local/cuda/samples/4_Finance/BlackScholes
$ make
$ ./BlackScholes

If the installation was successful, you should see lines like these in the output:

Executing Black-Scholes GPU kernel (131072 iterations)...
Options count : 8000000
BlackScholesGPU() time : 1.314299 msec
Effective memory bandwidth: 60.868973 GB/s
Gigaoptions per second : 6.086897

7. Install Docker inside WSL

If you have Docker installed on Windows, remove it first. At the time of writing, the Windows version does not yet support Docker containers using the GPU.

$ curl https://get.docker.com | sh

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)

$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -

$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container-experimental.list | sudo tee /etc/apt/sources.list.d/libnvidia-container-experimental.list
$ sudo apt-get update

$ sudo apt-get install -y nvidia-docker2

After this step, Docker is installed and ready to be used.

In a separate terminal, run the following to stop/start the Docker service:

$ sudo service docker stop

$ sudo service docker start

8. Testing Docker containers using CUDA

Run the following command to create a sample Docker container that uses the GPU.

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

After running it, you should see a result similar to this:

> Compute 6.1 CUDA device: [GeForce GTX 1070]
15360 bodies, total time for 10 iterations: 11.949 ms
= 197.446 billion interactions per second
= 3948.925 single-precision GFLOP/s at 20 flops per interaction

There are more examples you can try in the CUDA on WSL documentation listed under Original Sources below.

9. FINALLY setting up Fast.ai — no more “Franken-tutorial”

Clone the fastai fastbook repo inside WSL:

$ cd ~
$ mkdir repos
$ cd repos
$ git clone https://github.com/fastai/fastbook.git
$ cd fastbook

Open VS Code from WSL (this requires VS Code installed on Windows; the first time you run it, it sets up the VS Code Server inside WSL):

$ code .

The first time you open VS Code from WSL it will suggest extensions such as Remote - WSL.

Install the following extensions:

  • Remote-WSL
  • Docker

Once you install the Docker extension, you’ll notice that the Docker containers you ran in the previous examples appear there in a stopped state.

Programmer tip:

If you use Docker frequently, this extension is pretty useful since it gives you a simple UI to handle containers, check their status, see images, check logs, etc.

10. Creating a fastai Docker image

At the root of the repo, create a file called Dockerfile with the following content.
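(The original post embeds the exact Dockerfile here, but it was lost in this copy. Below is a minimal sketch consistent with the description that follows; the base image tag and the exact pip commands are my assumptions, and any CUDA-enabled Jupyter or PyTorch image should work just as well.)

# A GPU-enabled PyTorch base image that already ships CUDA and Python
FROM pytorch/pytorch:1.7.0-cuda11.0-cudnn8-runtime

WORKDIR /workspace

# Install Jupyter plus the fastai dependencies from the repo's requirements.txt
COPY requirements.txt .
RUN pip install --no-cache-dir jupyter && \
    pip install --no-cache-dir -r requirements.txt

# Jupyter's default port; the repo gets bind-mounted under /workspace/notebooks later
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--allow-root", "--no-browser"]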

It’s OK if you are not familiar with Docker. In case you are curious, what this Dockerfile does is take an existing Jupyter image with GPU support and, on top of that, install the fastai dependencies specified in the repo’s requirements.txt.

11. Setting up your VS Code env

  • Creating VS Code tasks for easier use

11.1 Press CTRL + SHIFT + P, type Tasks and select Tasks: Configure Task

11.2 Select Create tasks.json file from template and then choose ‘Others’

A new file will show up under <root>/.vscode/tasks.json

11.3 Open the tasks.json and add the following tasks
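(As with the Dockerfile, the original post embeds the exact tasks.json, which was lost in this copy. The task labels below come from the article; the commands, the fastbook image name, and the mount path are my assumptions, kept consistent with the Dockerfile sketch above. VS Code's tasks.json tolerates comments.)

{
    "version": "2.0.0",
    "tasks": [
        {
            // Start the Docker daemon inside WSL
            "label": "start docker service",
            "type": "shell",
            "command": "sudo service docker start"
        },
        {
            // Build the fastbook image from the Dockerfile at the repo root
            "label": "create image",
            "type": "shell",
            "command": "docker build -t fastbook ."
        },
        {
            // Run Jupyter with GPU access, mounting the repo into the container
            "label": "start jupyter",
            "type": "shell",
            "command": "docker run --gpus all --name fastbook -p 8888:8888 -v ${workspaceFolder}:/workspace/notebooks fastbook"
        }
    ]
}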

Programmer tip

If you have never used VS Code tasks and you use VS Code in your projects, I recommend taking a look at this feature. VS Code tasks are super useful when working on projects, since they let you check command-line commands (for Windows or Linux) into the repo so everyone has the same commands. Before we made use of VS Code tasks, every dev on my team had their own file of common commands to run; now we all run the same commands through VS Code tasks.

What are these tasks?

start docker service: does exactly that. If you recall, during setup you ran this command to get Docker running. On WSL, you have to start Docker manually when starting a new session. There are ways to do this automatically on WSL boot (adding some commands to your .bashrc; see the sketch after these task descriptions), but I personally prefer to choose when to run Docker, since I don’t always use Docker when running WSL. Run this every time you start a WSL session and want to use this setup.

create image: tells Docker to build an image called fastbook from the Dockerfile created previously. Run it once only.

start jupyter: creates a Docker container ready to run Jupyter inside WSL, using the image created with the create image task. Run it once, and again whenever you delete the container.
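(The .bashrc approach mentioned above, sketched here as my own addition rather than part of the original post, would look something like this at the end of ~/.bashrc; note that sudo may prompt for your password on each new session.)

# Start the Docker daemon if it is not already running
if ! service docker status > /dev/null 2>&1; then
    sudo service docker start
fi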

12. Run the VS Code tasks

To run VS Code tasks, press CTRL + SHIFT + P, type Tasks and select “Tasks: Run Task”. Once you do, you’ll see something like this:

A window with the tasks you previously created on the tasks.json

  • If you already have Docker running, skip this; if not, run “start docker service”.
  • Then run “create image”.
  • And finally run “start jupyter”.

If you installed the Docker extension, you’ll see your container running there, and you’ll also see an HTTP address where you can access your notebooks:

VS Code Docker extension

Now you can just click the URL that shows up in your console and open it in your preferred browser.

13. Opening Jupyter notebooks

Once you click the link printed in the task’s terminal, you’ll see something like this:

Click on “notebooks”

Now this looks familiar, doesn’t it? You now have a copy of the fastbook repo running in the container. Actually, it isn’t a copy; those are exactly the files from the repo, since the repo folder is mounted into the container. You can make changes from VS Code or directly from Jupyter.

14. FINALLY Running a Jupyter Notebook

As mentioned in the fast.ai videos, use the notebooks in the clean folder, since those don’t have the results cached.

Open the 01_intro notebook and “TRUST YOUR NOTEBOOK” by clicking on the “Not Trusted” button.

Now run the notebook, either step by step or all at once. While it runs, you should be able to see in Task Manager that your GPU is being used.

Jupyter notebook running on WSL using your GPU
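(You can also confirm GPU visibility from inside the notebook itself. This check is my addition, not part of the course, and uses PyTorch, which fastai is built on.)

import torch

# True if the container can see the GPU through WSL
print(torch.cuda.is_available())

# Name of the detected device, e.g. your RTX 2070
print(torch.cuda.get_device_name(0))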

How to resume when you leave WSL?

  1. Start WSL and open VS Code in the repo (like in Step 9, without the cloning or directory creation)
$ cd ~/repos/fastbook
$ code .

2. In the Docker extension, find your container, which is now in a stopped state, right-click it and click “Start”

3. Find the URL by right-clicking the running container and clicking “View Logs”. When you do, you’ll see the URL of your Jupyter notebooks like in step 12.

Troubleshooting

NVIDIA card not found

This has happened to me twice, so just in case it happens to you: sometimes when running the notebook, you’ll get a message at the beginning saying the NVIDIA GPU was not found. When you get this, you’ll also find in Task Manager that your NVIDIA GPU is gone (not actually gone, it’s just the driver). Reinstall the driver from step 2 and you are ready to go again.

That’s it, I hope you enjoy your fast.ai learning while using WSL. If you find more issues during this setup, feel free to leave a comment to keep this post updated.

Original Sources:

Enable NVIDIA CUDA in WSL 2 — Win32 apps | Microsoft Docs

CUDA on WSL :: CUDA Toolkit Documentation (nvidia.com)
