Building a GPU cluster for AI
Science & Technology
Whitepaper: lambdalabs.com/gpu-cluster/ec...
Learn, from start to finish, how to build a GPU cluster for deep learning. We'll cover the entire process, including cluster level design, rack level design, node level design, CPU and GPU selection, power distribution, storage, and networking.
This talk is based on the Lambda Echelon GPU Cluster whitepaper. The whitepaper can be found above.
Slides for the talk can be found here:
files.lambdalabs.com/How%20to%...
Errata:
- Slide 46 contains an erroneous diagram showing a connection from the storage server to the compute fabric network; the storage server does not connect to the compute fabric network. The correct diagram is available in the whitepaper.
Comments: 48
Extraordinary presentation. Covered all the important topics in depth and with real teaching talent. Many thanks!!
Thank you. You got me started years ago with your lambda stack -- the only way I could get TensorFlow installed on Linux.
Most professional and holistic explanation I heard about this topic. Thank you so much!!
Thank you for highlighting an underrated topic/options that companies should reconsider within their compute infrastructure.
Very expert suggestions for HPC and compute sizing.
Really good analysis and presentation!
Thanks for the video.
It's nice to see a holistic explanation of designing / building / installing a complex multi-rack system... As someone who has spent years working on both sides of the "analog/digital divide" (the physical data center world / the digital world's various segments), the un-sexy physical aspects of available rack space / power / cooling / floor loading / network uplink bandwidth are often overlooked (often assumed)... A semi arrives with a pallet: "Hey Carl, you can have this online in a couple days, right?"
@lambdacloud
2 years ago
Hey Carl, thanks for the kind comment. Glad you like the video. It's always funny how difficult it can be to 'bridge the divide' between the physical world and virtual world. Many SWEs expect to be able to "spin up" 1000 servers with an API call and forget that there are actual physical objects and tons of people that actually make that happen when you're on-prem.
Thanks. I’m planning on building a “massive” 2 GPU system for home use.
@fundoo203
8 months ago
How did it go, man? I also want to build something like that, and then I stumbled on this video, which is excellent.
Very informative, thank you.
Really complete, thank you!
Lots and lots of A100 GPUs. Every single one of them is a monster, almost 2x faster memory than the next best GPU. An entire room full of A100 racks... holy cow.
Very informative, thanks!
Genius bait and switch. Props!
@metal_mo
1 year ago
Lambda needs an explanation on the difference between "building" and "designing".
Excellent.
Great insight!
This was amazing. Thank you.
Thanks for the inspiration.
Highly appreciated...KZread should have a separate category called Founder's video.
Hey Stephen, this is highly informative. I work on this clustering. Now I am able to connect the dots and get the bigger picture. Where can I read about the relationship between NUMA topology and GPU peering capability?
I have three computers, a NAS, and an external hub. I think that I don't need another server because of the NAS. As far as my architecture goes, is there anything else that you can advise?
What if I have a model that I just want to run as provided, it hasn't really been optimized to run around the cluster and has memory requirements greater than any individual system I have. I feel safe to assume that for that specific case a shared distributed memory model would be the solution to run that specific app, yes? Is there any distribution of Linux that has support for such a memory model? It doesn't have to be a full-blown single system image. Perhaps a patch to the memory management driver so storage can be treated as an extension of system memory and not swap memory? Does any such software exist?
It is more a lecture than a tutorial. Thx.
I just love this kind of thing. How can I start this kind of business? How can I find customers for, say, a small node and start building up?
I want to build a multi-node dual EPYC 7742 based system for goofing around and learning this stuff.
You are insane, thank you.
Do you guys have a GPU cluster optimized for 3D rendering?
Very Based
A "tell me how difficult it is so I can buy your solution" kind of talk.
Do Lambda products (GPU clusters) ship with a manual to help you set up the servers for use?
Our group ordered around 10 Lambda PCs a year ago. Right now more than 5 have problems. Some of them do not start up. Mine gets stuck randomly...
@yugr
3 years ago
Have you tried looking into the reasons?
@lambdacloud
3 years ago
Meng Xu, you can email support@lambdalabs.com 24/7 or call +1 (866) 711-2025 during business hours. Sorry to hear you're having issues, I'm sure we'll be able to resolve them quickly.
@danielleza908
1 year ago
Our team has 5 Lambda laptops; they have worked perfectly for over a year now. We also have a workstation with 3 GPUs, which works great too.
Looking for work, would love to help.
Hell yes Lambda Lambda Lambda.
Does it work in man????
Still most relevant today, 2 years later. Thanks.
Now if only I was a billionaire so I could make use of this great information...
Half Life man!
headeggs
Talk about what you're an expert in... don't talk useless stuff without knowing all the facts.
This dude's in full submission mode. Sad.
speak UP