further instruction on using GPU
parent 4b8af7db46
commit be14c0ffe1
README.md

@@ -131,7 +131,9 @@ If you don't use Anaconda, type `python -V` in terminal to make sure python>=3.9
*For faster inference, you can run on the GPU. Install PyTorch with CUDA and cuDNN support, and ONNX Runtime with GPU support (not available on macOS).*\
Be aware that GPU support takes an additional 6 GB on disk. The full installation is then 10.75 GB instead of 4.75 GB.

Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA version": it is the latest version your driver is compatible with (more information [in this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
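
For example (a sketch of what to look for; the exact values depend on your machine):

``` cmd
nvidia-smi
:: The first line of the output reports the driver version and, on the right, "CUDA Version: XX.X",
:: i.e. the highest CUDA version this driver supports.
```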
Then go to the [ONNX Runtime requirements page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) and note the latest compatible CUDA and cuDNN versions. Finally, go to the [PyTorch website](https://pytorch.org/get-started/locally) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). You may need to opt for a previous version, downloadable [here](https://pytorch.org/get-started/previous-versions/). For example:
``` cmd
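:: cu124 selects wheels built for CUDA 12.4; adjust this tag (e.g. cu121, cu118) to match the CUDA version chosen above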
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
```
@@ -140,16 +142,21 @@ If you don't use Anaconda, type `python -V` in terminal to make sure python>=3.9
pip install onnxruntime-gpu
```

Check that everything went well by running these commands in Python:
``` python
import torch; import onnxruntime as ort
print(torch.cuda.is_available(), ort.get_available_providers())
# Should print "True ['CUDAExecutionProvider', ...]"
```

<!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->

> **Note on storage use:**\
A full installation takes up to 11 GB of storage space. However, GPU support is not mandatory and takes about 6 GB. Moreover, [marker augmentation](#marker-augmentation) requires TensorFlow and does not necessarily yield better results. You can save an additional 1.3 GB by uninstalling it: `pip uninstall tensorflow`.\
A minimal installation with carefully chosen pose models and without GPU support, TensorFlow, or PyQt5 **would take less than 3 GB** (see the sketch below).
<img src="Content/Storage.png" width="760">
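
For instance, assuming the default package names, trimming an existing installation could look like this sketch (only remove what you are sure you will not need):

``` cmd
:: remove marker augmentation support
pip uninstall tensorflow
:: swap the GPU build of ONNX Runtime for the CPU-only one
pip uninstall onnxruntime-gpu
pip install onnxruntime
```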
<!-- import torch; print(torch.cuda.is_available())
import onnxruntime as ort; ort.get_available_providers()
print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
<br>

## Demonstration Part-1: Build 3D TRC file