Chasing Silicon: The Race for GPUs
Science & Technology
With the world constantly generating more data, unlocking the full potential of AI means a constant need for faster and more resilient hardware.
In this episode - the second in our three-part series - we explore the challenges for founders trying to build AI companies. We dive into the delta between supply and demand, whether to own or rent, where moats can be found, and even where open source comes into play.
Look out for the rest of our series, where we dive into the terminology and technology that form the backbone of AI, and what compute truly costs!
Topics Covered:
00:00 - Supply and demand
03:03 - Competition for AI hardware
04:46 - Who gets access to the supply available
06:28 - How to select which hardware to use
08:59 - Cloud versus bringing infrastructure in house
13:26 - What role does open source play?
16:48 - Cheaper and decentralized compute
20:16 - Rebuilding the stack
21:31 - Upcoming episodes on cost of compute
Resources:
Find Guido on LinkedIn: / appenz
Find Guido on Twitter: / appenz
Find a16z on Twitter: / a16z
Find a16z on LinkedIn: / a16z
Subscribe on your favorite podcast app: a16z.simplecast.com/
Follow our host: / stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Comments: 24
music in the background is silly because people watch these at 2x speed and it sounds terrible and distracting 😄
@Hypotemused
11 months ago
Agreed. We come to listen to the speakers. Why this crappy psytrance behind?
@fintech1378
10 months ago
its actually AI generated
@tschorsch
2 months ago
@@fintech1378 It's still annoying.
20:40 it's crazy that it's kinda what Alan Turing said in "Computing Machinery and Intelligence". What a boss
Good job so far on this series. Really looking forward to the next episode that explores the actual cost. Will likely help many founders with their 'ask' slides. Might not help much, though, since complex inferencing will be needed for next-gen AI platforms.
Catch Part 1 of the mini-series here: kzread.info/dash/bejne/X6eTt8tumpOtpdo.html
This is amazingly informative. Thanks for giving Guido this platform to share his experience!
Thanks Steph, this is really next level content!
amazing content! it is a fun race to watch and see how the supply chain evolves
Great job explaining the situation with GPUs
There is a finite amount of fab capacity, and it gets eaten up by big companies like Apple; new fabs can't be brought online quickly. Haven't seen any projections on how this might slow AI rollouts and work its way back to throttling the growth (and share prices) of companies like Nvidia. It also opens the door to competitors and gives them more time to catch up.
17:12 well, this is a measure of how quickly things are evolving; now we have Falcon 180B
Semis really seem like the new oil; they are crucial for anything related to new technologies.
@a16z - when is part 3 being released?
@a16z
10 months ago
This week!
@Hypotemused
10 months ago
@@a16z⏳
Oh noise! 😮
How about the race for DPUs and QPUs? GPUs have been getting old ever since last year and the Ethereum switchover to Proof of Stake
Tone down the absurdly annoying music.
fantastic podcast