Welcome to Tech Jotters (@techjotters24), your premier destination for cutting-edge Python tutorials, AI breakthroughs, and data science projects. Dive deep into the world of technology with our expertly crafted videos designed to empower tech enthusiasts and professionals alike. Whether you're starting your coding journey with Python, exploring the latest in artificial intelligence, or tackling complex data science challenges, we've got you covered. Our content is tailored to demystify the tech world, making advanced concepts accessible to all. Stay ahead of the curve with Tech Jotters (@techjotters24), where innovation meets instruction. Subscribe now to join our community of tech explorers!
Comments
Does it work on other versions of Ubuntu, such as 20.04?
At 17:11 I get an "nvcc: command not found" error, but I could not solve it even with your tutorial :(
Hi, I followed this video and got the following error while trying to train a model using yolov8n: "Could not load library libcudnn_cnn_train.so.8. Error: /usr/local/cuda-12.1/lib64/libcudnn_cnn_train.so.8: undefined symbol: _ZN5cudnn3cnn34layerNormFwd_execute_internal_implERKNS_7backend11VariantPackEP11CUstream_stRNS0_18LayerNormFwdParamsERKNS1_20NormForwardOperationEmb, version libcudnn_cnn_infer.so.8". My GPU is successfully detected by TensorFlow.
It was a conflict between the cuDNN installed locally and PyTorch's own cuDNN. I managed to get around it by commenting out the paths we manually added. You can uncomment them after running the training.
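For anyone hitting the same conflict, the workaround described above can be sketched roughly like this. The export line and the `cuda` pattern are assumptions based on the tutorial, not exact; match them to whatever you actually added to ~/.bashrc. The sketch works on a copy of the file so nothing is touched by accident:

```shell
# Demonstrate on a copy of the shell config; the export line below is an
# assumed example of a manually added cuDNN library path.
cp ~/.bashrc bashrc.demo 2>/dev/null || touch bashrc.demo
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> bashrc.demo
# Comment out every manually added CUDA/cuDNN library path:
sed -i 's|^export LD_LIBRARY_PATH=.*cuda.*|# &|' bashrc.demo
grep '^# export LD_LIBRARY_PATH' bashrc.demo
```

Applying the same sed to ~/.bashrc itself (then opening a new shell) lets PyTorch fall back to its bundled cuDNN; uncomment the lines again once training is done.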
Hi, thank you for the video. It solved the following problems for me with TensorFlow 2.16.1: "Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered", "Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered", "Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered". Thanks again!!!!
When I try to run the sudo sh command, I get an error stating "failed to verify the gcc version". How do I fix it?
Thank you! This worked!
Thank you, you are my boss!!! Great video!!
You are welcome!!!
❤❤❤
Thank you!!!!!
Hello, does changing the drivers affect my performance in games?
sorry, not sure!!!
Hello, I followed all of your procedures, but I'm installing CUDA 11.8 and cuDNN 8.6, because based on TensorFlow's website they should be compatible with my RTX 3060. However, I hit an issue initializing cuDNN. After running "./test_cudnn", this error pops up: "./test_cudnn: error while loading shared libraries: libcudnn.so.8: cannot open shared object file: No such file or directory". When I use "ls" I can see the file, but it couldn't be opened. Do you know how to solve this? Thank you, sir.
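A common cause of "cannot open shared object file" even when ls shows the file is that the directory holding libcudnn.so.8 is not on the dynamic linker's search path. A hedged diagnostic sketch follows; /usr/local/cuda/lib64 is an assumed location, so substitute the directory where you actually extracted the cuDNN tar:

```shell
# Does the linker cache know about libcudnn at all?
ldconfig -p | grep libcudnn || echo "libcudnn is not in the linker cache"
# Make the library directory visible to the current shell session.
# /usr/local/cuda/lib64 is an assumption -- use the real location of libcudnn.so.8.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
echo "$LD_LIBRARY_PATH"
```

To make it permanent, either add the export line to ~/.bashrc, or drop a .conf file naming the directory into /etc/ld.so.conf.d/ and run sudo ldconfig.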
I tried to figure it out, but I don't have this GPU available to test. I am really sorry.
@@TechJotters24 No problem. Although that didn't work out, I eventually got it done. However, once the GPU was usable, I couldn't import any other Python libraries like cv2 or mediapipe (pip installed them, and pip list shows them). So I guess I just decided to give up, lol. But your videos are great, keep it up.
Sir, can I run TensorFlow from this in a Jupyter notebook on my RX 6700 XT?
Hi, no, it doesn't work with TensorFlow.
Does this work with JupyterLab? In PowerShell it showed True, but in JupyterLab it shows False. I am using the same environment, but CUDA 12.5 with the PyTorch installation link for 12.4. Is this the reason torch.cuda.is_available() is returning False in JupyterLab?
Hi, TensorFlow 2.16.1 only supports CUDA Toolkit 12.3. I also found this issue: PowerShell shows the GPU as available but Jupyter doesn't, and basically TensorFlow doesn't get GPU support.
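When a terminal session sees the GPU but JupyterLab doesn't, the notebook kernel is often running a different interpreter, or running without the exported library paths. A quick way to compare the two environments is to run these lines in the working terminal, then again in a notebook cell with a leading `!`:

```shell
# Which interpreter does this environment resolve, and what library path
# does it see? If these differ between terminal and notebook, the kernel
# is not using the GPU-enabled environment.
command -v python3 || command -v python
echo "${LD_LIBRARY_PATH:-LD_LIBRARY_PATH is not set}"
```

If the outputs differ, registering the GPU-enabled environment as a Jupyter kernel (typically `python -m ipykernel install --user --name <env>`, assuming ipykernel is installed) and selecting it in JupyterLab usually resolves the mismatch.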
Hmmm... so when are you going to upgrade to an MI50?
:D May be never
It's showing "bad substitution" after I added the TensorRT path. This is the error: -bash: :${LD_LIBRARY_PATH}: bad substitution
Hi, I believe the best solution is to check the directory first, remove all the .bashrc/PATH entries related to this, and configure it again.
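The "bad substitution" error usually means the export line itself is malformed, e.g. a space inside the `${...}` braces or smart quotes pasted from a browser. A minimal well-formed version is shown below; the /usr/local/TensorRT/lib path is an assumption, so use your actual install directory:

```shell
# A well-formed append: plain ASCII quotes, no spaces inside ${...}.
# The :+ expansion avoids a dangling ":" when LD_LIBRARY_PATH starts unset.
export LD_LIBRARY_PATH="/usr/local/TensorRT/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
echo "$LD_LIBRARY_PATH"
```

If the line lives in ~/.bashrc, retype it by hand rather than pasting, then open a new shell to confirm the error is gone.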
Where did you find a 4080 Super Founders Edition? I think basically all the other 4080 Super designs look horrible, but the Founders Edition is always like $600 more expensive.
Hi @leazy911, one wonderful morning in Canada I was looking for an RTX 4060 Ti Super 16GB or an RTX 4070 Ti Super, and suddenly I saw it on Best Buy for pre-order, only 50 of them. I was so confused, and after 3-4 hours I saw only 12 left. I didn't think anymore and ordered it. It looks really gorgeous, all black with white light.
@@TechJotters24 Ah, you're so lucky! I'm happy for you though, it looks stunning!
Awesome video! I followed you step by step and it installed successfully. Lmao, after 2 months of struggling your video helped a lot 🙏
You are welcome. I am happy it works for you.
Ooh brother, an upgrade 😊
Yea!!!!!
Thank you so much, bro. I wasted hours and days trying to figure out what was wrong. You deserve more support and subscribers.
You are welcome. Really happy to help!!!
You're the software wizard I needed, thank you very much
:D you are welcome
Thank you so much! The only tutorial that has worked so far end-to-end. Thanks again!
You are welcome.
On the ./test_cudnn step at 11:09, my initialization failed. Any thoughts on how to handle it?
What errors did you get?
@@TechJotters24 I have the same issue, no errors. I run test_cudnn and it outputs 'cuDNN initialization failed.' All the previous steps worked though. I have an RTX3060 laptop.
@@TechJotters24 No errors before. Everything exactly like the example. Do you have any tips to debug what is happening when I run "./test_cudnn"? I have an RTX 3050 laptop
@@rafamarinho87 Hi, can you please send me the errors? I'll check it.
@@TravelJotter24 I have already sent you the errors I have so far. The only thing I get is "cuDNN initialization failed." The script test_cudnn.c is the following:

// test_cudnn.c
#include <cudnn.h>
#include <stdio.h>

int main() {
    cudnnHandle_t handle;
    cudnnStatus_t status = cudnnCreate(&handle);
    if (status == CUDNN_STATUS_SUCCESS) {
        printf("cuDNN successfully initialized.\n");
    } else {
        printf("cuDNN initialization failed.\n");
    }
    cudnnDestroy(handle);
    return 0;
}

So if status were CUDNN_STATUS_SUCCESS, it would print "cuDNN successfully initialized." In my case it printed "cuDNN initialization failed.", so status was not CUDNN_STATUS_SUCCESS. What I am asking is whether you have any tips for understanding why cuDNN was not initialized, or for debugging the initialization so I can send you more detail.
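One way to get more detail than the program's pass/fail message is cuDNN's built-in logging, which cuDNN 8.x reads from environment variables. A sketch follows; ./test_cudnn is the binary compiled in the video, and the rerun has to happen on the machine where it fails:

```shell
# Enable cuDNN 8.x error/info logging to stderr, then rerun the failing test.
export CUDNN_LOGERR_DBG=1
export CUDNN_LOGINFO_DBG=1
export CUDNN_LOGDEST_DBG=stderr
echo "$CUDNN_LOGERR_DBG $CUDNN_LOGINFO_DBG $CUDNN_LOGDEST_DBG"
# ./test_cudnn   # uncomment on the GPU machine; the log output should say
                 # why cudnnCreate() failed (missing driver, wrong library, etc.)
```

The log lines cuDNN emits on failure are usually far more specific than the status code alone.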
Was getting "Error 2: out of memory" using torch.cuda.is_available(). I removed the 3rd GPU and it worked with 2 GPUs. Probably a limitation on Windows/WSL, since I've noticed that loading large models on Windows can't leverage all 3 of my GPUs using the exl2 model loader. I've seen 4+ GPU setups working in Linux environments. Curious if you have any idea why cuDNN doesn't work on WSL with multiple GPUs?
After this installation, how do I use TensorFlow in VS Code?
Hi @charliesj0129, please check this video - kzread.info/dash/bejne/dICe17SYlaTPdto.html
Very informational video... clear and concise. 👍
Thank you!!!
You are the Boss! You have earned a like and a subscribe!
Thanks for your support!!!
In more than three weeks of watching tutorials and looking for amazing solutions, you were the only one who could help me. Thank you very much.
You are welcome!! Really happy it works for you!!!
You are the new Buddha ...because you are the only one who shows the "right way" !!
Thank you!!!!
Thank you so much. Great tutorial! 👍👍👍👍👍👍👍
You are welcome!!!
Having a little trouble installing cudnn. I used the same cudnn tar and followed the commands exactly, but cudnn initialization keeps on failing :/ Do you have any idea what could be going wrong?
What type of error are you getting?
@@TechJotters24 Running `import torch` and `torch.cuda.is_available()` gives: "UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)" Fixed it by removing one of three GPUs. Not sure why cuDNN and the Python commands above fail with 3 GPUs but run with 2 GPUs. Could be a problem with Windows or WSL2, or a hardware limitation, since I've seen some Linux servers with 4+ GPUs. Would love to hear if you have any thoughts as to why :)
Great job! Does Tensorflow 2.16.1 support Cuda 12.1, or does it require Cuda 12.3?
The latest CUDA Toolkit you can use for TensorFlow 2.16.1 is 12.3. But in that case you can't use TensorRT. If you want TensorRT support, the latest CUDA Toolkit you can use is 12.1, because TensorFlow supports only TensorRT 8.6, and TensorRT 8.6 only works with CUDA Toolkit 12.1. It's a chain reaction :D
@@TechJotters24 Sir, I followed the procedure with TF 2.16.1 and CUDA 12.1; unfortunately it doesn't identify the GPU device. I switched to TF 2.15 in a new env. It identifies the device, but with many missing dependencies like cuBLAS etc.
@@TechJotters24 Sir, it works. I don't know the reason; I just restarted my system, made a new conda env, and just installed TF and PyTorch. Thank you so much, sir 😊
@@waqarkai Great!!!
This also works on RHEL. You're my hero!
Hi, I didn’t try it on RHEL but it should work.
Hey, your video is very good and easy to understand. Could you make a tutorial on how to install YOLOv8 for AMD on Windows 11? There is a tutorial on AMD's YouTube channel about this, but it is too hard to understand. Looking forward to your quick response.
Hi, I am sorry. I’ve changed my GPU to Nvidia.
true legend to solve the problem
Thank you!!
Hey, your video is very good and easy to understand. Could you make a tutorial on how to install YOLOv8 for AMD on Windows 11? There is a tutorial on AMD's YouTube channel about this, but it is too hard to understand. Looking forward to your quick response.
Hi, sorry, I've changed my GPU.
Thank you!! Guide worked perfectly.
You are welcome!!!
**THIS VIDEO CAN HELP YOU TO INSTALL ANY VERSION OF CUDA, CUDNN, TENSORRT, AND TENSORFLOW IN YOUR WSL2** Thank you again for creating an updated video on this topic. Feedback: kindly fix the chapter timings; they are not in sync.
Hi, I’ll check it
Hello my friend, I did everything you said, and thank you a lot for your effort. I have a question: when I reached the part where you install PyTorch, I found that I already have it on my PC. Should I do the same as you did, and why? :) Another question ^^: will I need to open conda to activate the env every time I restart my PC, or is it permanent? I will use Stable Diffusion with the TensorRT extension.
Hi, I installed PyTorch to test whether it can use the GPU or not. If you don't need PyTorch, you can skip it. It's only for testing.
you're the best!
Thank you !!!
Thank you!!!
Thanks a lot man, this is the best video. I was trying to install for more than a day and couldn't. You are a saviour 🙏. One thing I want to ask: the GitHub link you provided has some extra lines of code to execute at the end, like sudo rm and a few other lines that you don't talk about in the video. Should I run them?
Glad I could help
Lol bro, I've been trying to find a solution to this problem for almost a week. I've read many gigabytes of information and visited every possible page on Nvidia's website, and somehow I couldn't solve it. You're my superman, thank you very much! ❤❤❤
You are welcome!!!
Thank you SO MUCH! All other resources are sooo behind that I wasted way too much time trying to get TensorRT to show up.
You are most welcome
You are the guy. Thank you very much!
Happy to help
Thank you soooo much, it's clear and sharp, to the point. Can you help me with how to use the tf environment for a local Windows folder in VS Code?
You are welcome. Sure I’ll help you.
Hello, very useful video, thank you. I want to run Spyder; can you help me with this? Also, I get a segmentation fault error when running the model. Do you have any idea about this?
Hi, sorry, I don't know about the segmentation fault error. I'll test Spyder and let you know.
I’ve had some problems and I’m thinking it has to do with reinstalling torch. I’m on debian though, so I’ll report back if I get anywhere. Thanks for the video
You are welcome!
You Are a LEGEND man!!! After all that searching you are the one who's done it for me. Keep Going. CHEERS!!!
Glad I could help
Great tutorial! Thank you so much.
You're very welcome!
Thank you so much. Please keep uploading. Learned a lot from you. Respect.
Thank you, I will
Hello mister, which is the lowest GPU I need to be able to run this? How many GB of GPU memory?
Hi, Stable Diffusion models need a minimum of 8 GB of VRAM, but 12 GB is recommended. 16 GB is recommended for Stable Cascade.
Hello mister, I want to run some AI programs like Stable Diffusion and LLMs locally, but my GPU is very low-end: I have an Nvidia GeForce GTX 1650 with only 4 GB of VRAM. Can these Nvidia drivers installed with WSL help me run this kind of AI software on Windows 11? Or what is the real benefit of implementing this? Thanks a lot for your help.
Hi, I don't think you need WSL for that. Try LM Studio on Windows. Smaller models will run perfectly, and it'll also suggest which models are compatible with your system.