Building an Apptainer Container with Python 3.9 and PyTorch (pip)¶
This guide explains how to create an Apptainer container using Python 3.9 and install PyTorch with pip.
Requirements¶
- Apptainer installed on your system (check with apptainer --version)
- Internet access
- No sudo rights required (supports --fakeroot or --sandbox builds)
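To confirm the first requirement, a quick check such as the following can be run on the login or compute node (a minimal sketch; the exact version string depends on your installation):
# Verify that Apptainer is on the PATH and print its version
command -v apptainer || echo "apptainer not found in PATH"
apptainer --version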
Create the Definition File¶
Create a file called pytorch.def with the following content:
Bootstrap: docker
From: python:3.9-slim
%post
# Update and install dependencies
apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libglib2.0-0 libxext6 libsm6 libxrender1 \
git wget curl ca-certificates && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PyTorch
pip install --upgrade pip
# GPU version
pip install torch torchvision torchaudio
# ...or CPU version
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
# Optional: add other Python tools
pip install jupyterlab matplotlib scikit-learn
%environment
export PYTHONUNBUFFERED=1
%runscript
exec python3 "$@"
Note: select either the GPU version or the CPU version and remove the other line in the script above.
Build the Container¶
apptainer build --fakeroot pytorch.sif pytorch.def
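If you prefer a writable directory over a single .sif file, the same definition file can also be built as a sandbox (a sketch; pytorch_sandbox/ is just an example directory name):
# Build a writable sandbox directory instead of a .sif image
apptainer build --fakeroot --sandbox pytorch_sandbox/ pytorch.def

# Convert the sandbox into a .sif image once you are satisfied with it
apptainer build pytorch.sif pytorch_sandbox/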
Use the Container¶
Use the --nv flag when running or entering the container to enable GPU access inside the container. Make sure you activate the CUDA environment on the node before using the GPU container:
module load cuda
Enter a shell¶
apptainer shell --nv pytorch.sif
Note: you can leave out the --nv option if you are not using the GPU.
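For example, a CPU-only session simply drops the flag:
# Open a shell in the container without GPU support
apptainer shell pytorch.sif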
Run Python¶
python3
Or execute scripts directly with apptainer exec:
apptainer exec --nv pytorch.sif python3 myscript.py
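On the cluster, a complete GPU run of your own script typically combines the module load with the exec call (myscript.py is a placeholder for your own script):
# Activate CUDA on the node, then run the script inside the container with GPU access
module load cuda
apptainer exec --nv pytorch.sif python3 myscript.py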
Test PyTorch¶
Inside the container:
CPU and GPU¶
python3 -c "import torch; print(torch.__version__); print(torch.rand(2,3))"
GPU¶
python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"
If everything is set up correctly, this should return True and the name of your GPU.
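As a further check, you can run a small tensor operation directly on the GPU (this assumes the GPU build of the container and a node with a visible GPU):
# Create a tensor on the GPU and run a simple reduction
apptainer exec --nv pytorch.sif python3 -c "import torch; x = torch.rand(2, 3, device='cuda'); print(x.device, x.sum().item())"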