diff --git a/README.md b/README.md
index 600bece..49a100e 100644
--- a/README.md
+++ b/README.md
@@ -131,7 +131,9 @@ If you don't use Anaconda, type `python -V` in terminal to make sure python>=3.9
 *For faster inference, you can run on the GPU. Install pyTorch with CUDA and cuDNN support, and ONNX Runtime with GPU support (not available on MacOS).*\
 Be aware that GPU support takes an additional 6 GB on disk. The full installation is then 10.75 GB instead of 4.75 GB.

-Go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), check the latest CUDA and cuDNN requirements. Then go to the [pyTorch website]( https://pytorch.org/get-started/locally) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
+Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not CUDA-compatible. Otherwise, note the "CUDA Version" it reports: this is the latest CUDA version your driver supports (more information [in this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
+
+Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) and note the latest compatible CUDA and cuDNN versions. Finally, go to the [pyTorch website](https://pytorch.org/get-started/locally) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). You may need to install a previous version, available [here](https://pytorch.org/get-started/previous-versions/). For example:
 ``` cmd
 pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
 ```
@@ -140,16 +142,21 @@ If you don't use Anaconda, type `python -V` in terminal to make sure python>=3.9
 pip install onnxruntime-gpu
 ```

+Check that everything went well from within Python with these commands:
+``` python
+import torch; import onnxruntime as ort
+print(torch.cuda.is_available(), ort.get_available_providers())
+# Should print "True ['CUDAExecutionProvider', ...]"
+```
+
+
+
 > **Note on storage use:**\
 A full installation takes up to 11 GB of storage space. However, GPU support is not mandatory and takes about 6 GB. Moreover, [marker augmentation](#marker-augmentation) requires Tensorflow and does not necessarily yield better results. You can save an additional 1.3 GB by uninstalling it: `pip uninstall tensorflow`.\
 A minimal installation with carefully chosen pose models and without GPU support, Tensorflow, PyQt5 **would take less than 3 GB**.
-
-
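As a complement to the diff above, here is a minimal sketch (not part of the diff, assuming PyTorch is already installed) for printing the CUDA and cuDNN versions bundled with your PyTorch build, so you can compare them against the ONNX Runtime requirement table:

``` python
# Sketch: inspect the CUDA / cuDNN versions shipped with the installed PyTorch build
import torch

print(torch.version.cuda)              # CUDA toolkit version PyTorch was built with, e.g. "12.4"; None for a CPU-only build
print(torch.backends.cudnn.version())  # cuDNN version as an integer, e.g. 8xxx for cuDNN 8, 9xxxx for cuDNN 9
```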
## Demonstration Part-1: Build 3D TRC file