Install Ubuntu 24.04 LTS with WSL + CUDA Toolkit & cuDNN (Miniconda) ✅

Tired of wrestling with complex GPU setups? This video is your one-stop shop for setting up Ubuntu 24.04 LTS with WSL 2 and configuring it for deep learning with NVIDIA CUDA Toolkit and cuDNN using Miniconda.
In this step-by-step guide, you'll learn:
How to install Ubuntu 24.04 LTS within Windows using WSL 2.
How to install the latest NVIDIA CUDA Toolkit compatible with WSL.
How to download and configure cuDNN for seamless GPU acceleration in your deep learning projects.
How to set up a Miniconda environment for streamlined dependency management.
By the end of this video, you'll have a powerful development environment ready to tackle your next AI and Machine Learning projects!
Keywords: Ubuntu 24.04 LTS, WSL 2, CUDA Toolkit, cuDNN, Deep Learning, Machine Learning, Miniconda, NVIDIA GPU
Want to see more deep learning tutorials? Subscribe for more content and leave a comment below if you have any questions!
##############################
sudo apt update
sudo apt upgrade
sudo apt install -y build-essential
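The CUDA runfile installer expects a working compiler toolchain, which is what build-essential provides. A quick sanity check before moving on (a sketch, not a command from the video):

```shell
# Verify the toolchain that build-essential provides.
# The CUDA runfile installer relies on gcc and make being on PATH.
for tool in gcc make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: missing - re-run: sudo apt install -y build-essential"
  fi
done
```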
CUDA Toolkit:
wget developer.download.nvidia.com...
sudo sh cuda_12.4.1_550.54.15_linux.run
Path:
nano ~/.bashrc
export PATH=/usr/local/cuda-12.4/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
source ~/.bashrc
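The ${PATH:+:${PATH}} idiom in the export lines above appends a colon plus the old value only when the variable is already set, so an unset variable never ends up with a stray trailing colon. A small standalone demo of the expansion, using a scratch variable rather than your real PATH:

```shell
# Demonstrate the ${VAR:+:${VAR}} expansion used in the .bashrc lines.
unset DEMO
DEMO="/usr/local/cuda-12.4/bin${DEMO:+:${DEMO}}"
echo "$DEMO"    # -> /usr/local/cuda-12.4/bin   (no trailing colon)
DEMO="/usr/local/cuda-12.4/lib64${DEMO:+:${DEMO}}"
echo "$DEMO"    # -> /usr/local/cuda-12.4/lib64:/usr/local/cuda-12.4/bin
```

After sourcing ~/.bashrc, running nvcc --version should report CUDA 12.4.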
cuDNN:
wget developer.download.nvidia.com...
sudo dpkg -i cudnn-local-repo-ubuntu2204-9.2.0_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-9.2.0/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cudnn-cuda-12
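After the apt install, the cuDNN shared libraries should be registered with the dynamic linker. A quick check (a sketch; exact library names vary by cuDNN version):

```shell
# Check whether the dynamic linker can see the cuDNN libraries.
if ldconfig -p 2>/dev/null | grep -q libcudnn; then
  echo "cuDNN libraries registered"
else
  echo "libcudnn not visible yet - try: sudo ldconfig"
fi
```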
Miniconda (Python 3.10):
wget repo.anaconda.com/miniconda/M...
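Once the installer script is downloaded, it can run non-interactively. A sketch of a silent install; the filename here is the generic "latest" name and is a stand-in, so use the actual file you downloaded:

```shell
# Silent Miniconda install sketch (filename is a placeholder).
INSTALLER="Miniconda3-latest-Linux-x86_64.sh"
if [ -f "$INSTALLER" ]; then
  bash "$INSTALLER" -b -p "$HOME/miniconda3"   # -b: batch mode, no prompts
  "$HOME/miniconda3/bin/conda" init bash       # wire conda into ~/.bashrc
else
  echo "installer not downloaded yet"
fi
```

In a fresh shell afterwards, conda --version confirms the install.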
##############################
Video Chapters with Timestamps:
00:00 Intro
01:17 WSL - UBUNTU 24.04 Install
03:10 Prepare Ubuntu for CUDA
05:06 CUDA Toolkit Install
07:06 Configure CUDA Toolkit Path
08:35 cuDNN Install
10:43 Miniconda Python 3.10
11:56 Conclusion
12:35 Outro

Comments: 8

  • @lowkeyleanin · a month ago

    YOU ARE SUCH A LEGEND MAN!

  • @TechJotters24 · a month ago

    Thanks

  • @MalikKayaalp · a month ago

    Hello. Thank you for your response. I successfully managed to get Tensorflow 2.16.1, Cudatool 12.3.4.1, Cudnn 12.8.9.7.29, and Python 3.11.9 versions working together smoothly. I am pursuing this as a hobby and am relatively new to the Linux environment. Yes, I am finding it a bit challenging, but it is enjoyable. Currently, I am encountering a common error: "could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node. Your kernel may have been built without NUMA support." This error or warning has not posed any obstacles for model training or testing. I have installed the system on Ubuntu 24.04. Best regards.

  • @TechJotters24 · a month ago

    Great. You can ignore this NUMA warning!

  • @MalikKayaalp · a month ago

    @TechJotters24 I couldn't fix the NUMA warning, but there was an error before that prevented TensorFlow from working. I set a parameter to zero, and TensorFlow started working. When I asked ChatGPT, it told me that I needed to enable the NUMA parameter in the BIOS. I checked my BIOS, but I couldn't find anything like that. So this needs more research. It seems that NUMA is related to the motherboard, but I'm not sure how accurate that is.

  • @TechJotters24 · a month ago

    I think it’s better to work around NUMA for now. You did a great job. Can you share the process you used to configure TensorFlow? That would be great.

  • @MalikKayaalp · a month ago

    Will TensorFlow and PyTorch work with these versions?

  • @TechJotters24 · a month ago

    Hi, PyTorch will work but TensorFlow will not. But I found a way to run TensorFlow with conda, which installs all the GPU dependencies and runs TensorFlow on the GPU. Just remember, it won't use the latest versions.
    conda create -n tf-gpu tensorflow-gpu
    conda activate tf-gpu
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"