100x Faster Than NumPy... (GPU Acceleration)
Science & Technology
CHECK OUT MY NEW UDEMY COURSE, NOW 90% OFF WITH THIS CODE:
www.udemy.com/course/python-s...
Code:
github.com/lukepolson/youtube...
Old Video:
• Billiard Balls and the...
Get Your Billy T-Shirt: my-store-d2b84c.creator-sprin...
Discord: / discord
Instagram: / mrpsolver
Comments: 101
UPDATE: Thanks to @swni on Reddit for the suggestion to use the `ids_pairs` array to index to get `x_pairs` and `y_pairs` as opposed to reusing the `torch.combinations` function. This reduces the simulation time required for 10000 particles to only 20 seconds (about half what is shown in the video). Code has been updated on GitHub! To compare NumPy and PyTorch fairly under these new conditions, I simulate 5000 particles in each case. PyTorch takes 6.3 seconds to run (remember, it also has around a 2 second overhead), while NumPy takes about 823 seconds, indicative of about a 100x increase.
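For readers following along, the suggested change can be sketched like this (variable names such as `r` and `ids_pairs` mirror the video's code, but the small particle count here is just for illustration):

```python
import torch

torch.manual_seed(0)
n = 5                                  # tiny particle count for illustration
r = torch.rand(2, n)                   # positions: row 0 = x, row 1 = y

# Enumerate each unordered pair of particle indices once...
ids = torch.arange(n)
ids_pairs = torch.combinations(ids)    # shape (n*(n-1)/2, 2)

# ...then reuse that index array to build coordinate pairs by fancy indexing,
# instead of calling torch.combinations again on the coordinates themselves.
x_pairs = r[0][ids_pairs]              # shape (n_pairs, 2)
y_pairs = r[1][ids_pairs]

# Squared pairwise distances, matching the order torch.pdist would produce.
d2 = (x_pairs[:, 0] - x_pairs[:, 1])**2 + (y_pairs[:, 0] - y_pairs[:, 1])**2
```

The win is that `torch.combinations` is called once on cheap integer indices rather than repeatedly on the coordinate tensors.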
@ibonitog
A year ago
Could you test CuPy please?
@MrHaggyy
A year ago
There must still be a lot of potential. A GPU calculates 1080x1920 ≈ 2 million RGB values per frame. You don't need to check n^2 combinations for collision; n(n-1)/2 should be enough, because if P1 collides with P2, then P2 also collides with P1. Something like collision checking in particular can be blazingly fast on a GPU. Your 3070 has over 5000 cores and each one has SIMD instructions, so you can do about 20k floating-point ops per clock. I would check the particles for collision when creating the pairs. You have the function anyway, so it's an easy-to-fix bug.
Bro please never stop doing physics videos, they are amazing! I know they are not the most popular videos on your channel, but they are super helpful for someone who only had one programming course, and it was in Fortran :( . Greetings from the Dominican Republic! haha
It always catches me off guard to see non-meme videos from you. I'm more into web-facing services than data manipulation/science, so async is my wheelhouse rather than this stuff. Still fascinating to watch.
Awesome vid! Love seeing Pytorch being leveraged for its first class GPU support for things other than machine learning. If I recall correctly, someone had a blog post about using pytorch to optimize a shape for rolling (i.e. reinventing the wheel) and it used pytorch, super funny, but cool. Great video!!
Congrats! Great video! Please don't stop, your videos are incredibly didactic! I always cite your channel to my students in my Classical Dynamics classes.
Very interesting content and I really appreciate the way you show both notebooks side by side to compare the results. Thank you very much!
Nice I've been waiting for this one ! thanks , looking forward to seeing the next ones
First non-humour vid I've seen and it's awesome! Will try to learn more! Thank you professor!
I'm honored that your stuff comes up on my feed. Amazing work!
I am a numerical physicist, and this will be very helpful for me. I am currently running all my simulations on CPU (though using MPI for parallelization).
@geoffreyanderson4719
A year ago
Please see my other comment here today about nvblas, openblas, and code vectorization. The two keys are writing vectorized numpy or pandas code, plus activating the nvblas or openblas subsystem. Let me know if you want help.
@jawad9757
A year ago
You may want to take a look at CUDA C++ if you have Nvidia GPU(s) and are concerned with performance
I absolutely love this. I'm making my own game engine (fun hobby, tbh) with OpenGL, numpy and Python, and for some time I've thought about where to simulate my physics. This is an eye-opener, and it looks fun as heck! Especially the matplotlib animation for some "lazy" collision simulations. This vid brings me straight back to my college days.
Great video, thanks! Consider using indexing by the coordinates of particles in space. The idea is that the coordinates of the particles are rounded to the size of the box, and the collision check occurs only for those particles that are inside the same box. This usually reduces the number of pairs by 90%.
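As a rough sketch of that idea (cell size and names are illustrative, and a full implementation would also check the 8 neighboring cells so collisions across cell borders aren't missed):

```python
import torch

torch.manual_seed(0)
n, diameter = 1000, 0.02
r = torch.rand(2, n)                       # particle positions in the unit box

# Round each position down to an integer cell coordinate (cell size = diameter).
cells = (r / diameter).long()              # shape (2, n)
n_cells = int(1.0 / diameter) + 1
keys = cells[0] * n_cells + cells[1]       # one integer key per occupied cell

# Only particles sharing a key need a same-cell collision check.
_, counts = torch.unique(keys, return_counts=True)
candidate_pairs = int((counts * (counts - 1) // 2).sum())
all_pairs = n * (n - 1) // 2               # what the brute-force method checks
```

With 1000 particles spread over a 50x50 grid, `candidate_pairs` is typically a tiny fraction of the 499,500 brute-force pairs.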
Amazing content. I had a professor when I was in the physics degree that told us about the power of GPU when coding "big numbers". The GPUs have up to 1000 more ("dumb") cores than the CPU and that can be really powerfull. I am now working on my PhD and I use python to do the work. I think that I can learn a lot from you. Thank you!
Very interesting, would love to see more of this!
Great educational video, mate! I'm a CS Grad student and was beginning to get to the later ML courses. Your explanation and side-by-side logic demonstration with Numpy convinced me to do a bit of research and switch from TF to Pytorch! Thanks so much!! I eagerly look forward to the next video!
Super cool, your meme videos are hilarious but this quality content is why I subbed in the first place
Dude, a GPU-accelerated Python series would be amazing 😍😍😍
Wow this is really interesting! Thanks! Waiting for more videos
Oh damn, this is what my thesis is on! Good to see that some great resources are being put out for it
I hope you continued this series
Great to see ya again mate
Nice video, did not think about using PyTorch to replace NumPy, but it makes perfect sense for parallelizing numpy code 👍. Just a quick tip for additional speedup: instead of comparing the distance directly, you can compare the squared distance for collision detection. This avoids using the square root function, which is "slow" at least compared to all the dot products, though it might not matter much for simulations of this scale.
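A tiny illustration of the trick (the radius and positions are arbitrary): since distances are non-negative, `d < 2r` and `d² < (2r)²` are equivalent tests, and the second needs no sqrt.

```python
import torch

radius = 0.05
p1 = torch.tensor([0.0, 0.0])
p2 = torch.tensor([0.06, 0.0])

d2 = ((p1 - p2)**2).sum()          # squared distance, no sqrt needed
colliding = d2 < (2 * radius)**2   # same truth value as dist(p1, p2) < 2*radius
```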
A GPU takes advantage of linear operations, so I'm not really sure, but if you use a data structure like a quadtree, the complexity of the computation might drop drastically, and you won't need to calculate all n² distances. In fact, most particles are not colliding with each other. One needs to test it, but with that, the CPU might still outperform the GPU's overhead, since there won't be that many computations.
Very good intro video into GPU programming; it even gave me a couple of ideas. One question if I may: why wouldn't you do the simulation with an event-driven algorithm, since that would save a lot of resources and avoid overlaps of particles (i.e. the need to choose small timesteps)? I get that this is a tutorial/introduction video, but that implementation would be very interesting as well!
It's so relaxing to see someone else's explanation. I'm so tired of doing work in graduate school XD
Thank you so much! I was already using PyTorch for something, but I couldn't figure out how to create the equivalent of the "x_pairs" array I needed. Thanks.
The multiprocessing library also helps utilize all available threads. I was generating a Mandelbulb and it went from 4 minutes to 1 minute when I optimized the code to use it.
Sweet topic. Thank you!
Definitely a must watch!
Nvblas can be used by numpy; "nv" stands for Nvidia. Just configure your host a bit, which is easy. Openblas can also be used by numpy, which is more common. By default, your Linux install is using a GNU BLAS which is super slow by comparison. Nvblas uses the GPU for the linear algebra operations in your numpy code. Just be sure to write vectorized numpy code, not for loops. You don't change your application code at all, which is a big benefit for ease of maintenance. Openblas will recruit all your CPU cores and implicitly parallelize your matrix math, greatly speeding it up, as will nvblas.
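For anyone wanting to try this, a minimal sketch of the setup (the library paths and script name are illustrative and vary by distro/CUDA version; check the NVBLAS docs for your install):

```shell
# Write a minimal nvblas.conf: NVBLAS intercepts level-3 BLAS calls on the GPU
# and falls back to the named CPU BLAS for everything else.
cat > nvblas.conf <<'EOF'
NVBLAS_LOGFILE nvblas.log
NVBLAS_CPU_BLAS_LIB /usr/lib/x86_64-linux-gnu/libopenblas.so
NVBLAS_GPU_LIST ALL
EOF

# Point NVBLAS at the config and preload it ahead of the default BLAS,
# without changing the numpy application code at all.
export NVBLAS_CONFIG_FILE="$PWD/nvblas.conf"
LD_PRELOAD=libnvblas.so python my_numpy_script.py   # hypothetical script name
```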
Nice video! I’ve also got a question about the part that calculates whether particles collide with each other. Is there any advantage to the video's method compared to using:
DIS = torch.cdist(points, points)
DIS = torch.triu(DIS, diagonal=1)
pairs = DIS.nonzero()
Or do they have the same computational complexity?
@MrPSolver
A year ago
Never seen "torch.cdist" before! Thank you for this comment. Huge reason why I post videos like this...to learn more from the comments :)
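For anyone curious, the two approaches do agree numerically; a quick sanity check (names are illustrative):

```python
import torch

torch.manual_seed(0)
n = 6
r = torch.rand(2, n)
points = r.T                              # cdist expects rows = points

# Full (n, n) distance matrix, then keep each unordered pair exactly once.
D = torch.cdist(points, points)
iu = torch.triu_indices(n, n, offset=1)
d_pairs_cdist = D[iu[0], iu[1]]

# torch.pdist produces the same condensed distance vector directly,
# without materializing the full matrix.
d_pairs_pdist = torch.pdist(points)
```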
Could another distribution really come up for different potentials around the particles? I thought of the Boltzmann distribution as a thermodynamic necessity due to maximization of entropy
Thanks, now I finally will have one reason to tell my dad to buy me a graphics card😂
This is brilliant!
Thanks for the video! You are awesome! I have a question: is it possible to use PyTorch to optimize code with a lot of functions from SciPy? Like solving a lot of differential equations, nonlinear equations, interpolating and integrating functions, all in one big code. I'm currently optimizing my code with the joblib library to run it in parallel.
torch records, for any tensor with requires_grad=True, the bookkeeping necessary to calculate derivatives for the backpropagation algorithm. Keep requires_grad=False (the default) and this overhead is skipped, which speeds up the calculation even more: x = torch.randn(3, requires_grad=False)
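A quick check of this behavior: `requires_grad` defaults to `False`, and `torch.no_grad()` suppresses graph recording even for tensors that do require gradients, which is handy in a pure simulation loop.

```python
import torch

x = torch.randn(3)                      # requires_grad defaults to False
w = torch.randn(3, requires_grad=True)  # this one would record a graph...

with torch.no_grad():                   # ...but no_grad() suppresses recording
    y = x * w + 1
```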
love it, keep it up :)
thank you for this!
Just recently got familiar with multithreading so I guess this is the natural progression
If this is all about optimization, you should probably compare the squared distance between particles to eliminate the need to calculate the square root. :)
perfect intro to torch for someone who is familiar with numpy
That's so cool, we need this more in the field of quantum chemistry ❤❤
Great video :)
12:00 Why don't you simply use torch.cdist (if you have a batch of vectors; otherwise use torch.pdist), which calculates the p-norm (p=2 in your case) distance between each pair of the two collections of row vectors? This is supposed to be much faster than your code, though I didn't test it.
I frequently do so-called "model-fitting" using MCMC (or anything good enough), where each dataset consists of 1k-10k points. I wonder whether this could benefit from GPU acceleration or whether the overhead would be too much.
this is really cool!
Why was it 'bad' that some of the particles were colliding in the initial conditions?
Maybe I'll add that you can use AMD GPUs, but currently only on Linux (as Nvidia has CUDA, AMD has ROCm).
Can you suggest books for related problems, e.g. the Laplace transform via Python?
I think operations on PyTorch tensors are also faster than on NumPy arrays, even on CPU.
It's all great except one thing: I don't need your face sitting in front of 2 screens and covering them 😂
What are the libraries that must be imported?
Great video
Can You Make video on the PyOpenCl
PLEASE MAKE A TUTORIAL ON HOW TO HANDLE BIG INTEGERS (>64INT) ON THE GPU 🙏
What program are you using here that lets you put notes in the code like this?
@MrPSolver
A year ago
VSCode and Jupyter Notebook!
GTX 3070? Do you mean RTX?
@MrPSolver
A year ago
Haha ya 😂
Torch JIT and torch.compile are a lot faster than plain torch.
So my intuition of rewriting stuff in pytorch just for fun was not unreasonable after all!
2 minutes into the video: what about the performance of PyTorch compared to NumPy on CPU? Is it faster there also!? Have you tried Numba!?
Oh so good
Nice sales pitch for Microsoft Visual Studio. I had CUDA up and running nicely in a previous installation of Visual Studio. All of that had been with C++ in mind, so Python was not really considered at the time. I was hoping to do the same with PyCharm and a browser-based notebook using Python exclusively. That's when it got confusing to the point of dropping the idea. 😒
What about cupy (CUDA drop-in replacement for numpy)? Is the performance uplift comparable to pytorch?
@teslapower220
A year ago
Yes...
Very nice video
"GPU, wich most people have acess today" Looks like we've got some serious worldknowing issue going on here
@MrPSolver
A year ago
Lots of free nodes you can use here that have access to GPU resources: colab.research.google.com/
When I try to run your code, I get the error message: No module named 'torch' What am I doing wrong?
Would this work on an RX 6800 or Intel Arc A770?
@baldpolnareff7224
A year ago
If I remember correctly, PyTorch runs on AMD and Intel Arc as well.
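For reference, ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` interface, while Intel GPU support has gone through separate extensions; a device-agnostic sketch that also falls back to CPU:

```python
import torch

# Pick whatever accelerator the local build exposes; NVIDIA (CUDA) and
# AMD (ROCm) builds both report availability through torch.cuda.
device = "cuda" if torch.cuda.is_available() else "cpu"
r = torch.rand(2, 100, device=device)   # e.g. particle positions on the device
```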
I can't get accepted into your Discord. In the two lines with ani.save()..., I get a file-not-found exception. I am using Python 3.11.3. I really like the article and video. Thanks
@mikejohnston9113
A year ago
I found the problem: I had not installed python-ffmpeg. It is fixed now. Thanks
What!!!!!!!!!!!!!!!!!!!!!!!!!
Does Pytorch have numerical integration capabilities?
@user-xh9pu2wj6b
A year ago
There's torch.trapz, which does exactly that.
@infiniteflow7092
A year ago
@@user-xh9pu2wj6b That's cool. Any idea how its accuracy compares to, say, scipy's quad function?
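For a sense of the accuracy: `torch.trapz` is a fixed-grid trapezoidal rule (error shrinks like the square of the step size), while `scipy.integrate.quad` is adaptive and typically reaches near machine precision. A quick sketch integrating x² on [0, 1], where the exact answer is 1/3:

```python
import torch

x = torch.linspace(0, 1, 1001, dtype=torch.float64)  # uniform grid, h = 1e-3
approx = torch.trapz(x**2, x)       # trapezoidal-rule estimate of 1/3
error = abs(approx.item() - 1/3)    # ~h^2/6 for this integrand, so ~1.7e-7
```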
Does GPU mean Nvidia GPU specifically? Will we ever have libraries utilizing any general GPU?
@bryce07
A year ago
it works with any GPU
I mean, you start by saying most people have access to a GPU these days, and this is absolutely true. But plenty don't have an NVIDIA GPU, and as I understand it PyTorch doesn't support non-NVIDIA GPUs? Might be worth rewriting this with PyOpenCL.
Where's Billy?
wowowowoowwww
0:03 "If you coded in python before" while showing a screen full of braces
Hey Nvidia, did I miss the new GTX 3070?!
GTX 3070? Is this from China!? 🤣😂
i'll break your comment bar with C++
So instead of using the poor man's version of Fortran for such calculations, just use Fortran. It is not only perfect for arrays but also natively parallel. You can even write a Python wrapper if you want a GUI to please the eye. But, I get it, it would not be cool for the kids on YouTube... but if you really need efficiency, give it a try.
Damn, he used a Deep Learning Framework to replace Numpy, a Mathematics Framework :v
The entire PART 1 can be rewritten more efficiently in one line: d_pairs = torch.pdist(r.T)
#include <iostream>
int main() { std::cout
The RTX 3070 is already considered mid-range??!! 🥲
Use a transfer function, it's a billion times faster:
h = np.random.rand(10000)
idx = np.arange(10000)
X = X_train[idx].astype("float32")/255.0
yt = y_train[idx] + 1  # 1..10
x = X.mean(1)
ids = np.argsort(x)
i = 0
while True:
    err = yt[ids] - x[ids] * h
    h += 0.1*err
    print(np.mean(err**2))
    i += 1
    if np.mean(err**2)