How much energy AI really needs. And why that's not its main problem.

Science & Technology

Learn more about Neural Nets on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.
Artificial Intelligence consumes a lot of energy, both during training and during operation. We’ve heard a lot about this. Indeed, Sam Altman, the CEO of OpenAI, recently said that we’ll need small modular nuclear reactors just to power all those AIs. Well, hold that thought. Today I want to look at how much energy these AIs really need and explain why I think this isn’t the main problem.
The new paper with the energy estimates is here: arxiv.org/abs/2311.16863
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #technews #tech

Comments: 1,500

  • @mqb3gofjzkko7nzx38
    @mqb3gofjzkko7nzx382 ай бұрын

    OK super intelligent AI, solve our energy crisis: -AI shuts itself off

  • @jimmurphy6095

    @jimmurphy6095

    2 ай бұрын

    Alternatively.... -AI shuts US off. Before long, the two answers will be more or less the same.

  • @Aspencio

    @Aspencio

    2 ай бұрын

    @@jimmurphy6095 AI shuts US off; the global economy declines massively and requires decades to bounce back

  • @nayans710

    @nayans710

    2 ай бұрын

    This is gold!

  • @nicejungle

    @nicejungle

    2 ай бұрын

    It reminds me of when engineers tried to train an AI to play Tetris for as long as possible without losing. The AI paused the game 🤣

  • @gavinlew8273

    @gavinlew8273

    2 ай бұрын

    Super AI: Vote me into government first!

  • @MarksElectricLife
    @MarksElectricLife2 ай бұрын

    Makes my brain look incredibly efficient! It can compose sentences, draw pictures and drive cars and it only consumes a few sandwiches and a beer for energy.

  • @simongross3122

    @simongross3122

    2 ай бұрын

    But it took years and years of training and feeding before you reached those achievements. But yeah, brains are still better.

  • @bugsbunny8691

    @bugsbunny8691

    2 ай бұрын

    So did he, and on beers and sandwiches.

  • @davejones542

    @davejones542

    2 ай бұрын

    AI is better than you already

  • @sportbikeguy9875

    @sportbikeguy9875

    2 months ago

    @@simongross3122 Maybe if he'd started feeding it beer at a younger age, the training would have taken less time 😂

  • @simongross3122

    @simongross3122

    2 ай бұрын

    @@sportbikeguy9875 Or at least he'd think it did :)

  • @pirixyt
    @pirixyt2 ай бұрын

    Sam didn't mention that AI fully depends on a small nation called Taiwan, where all Nvidia chips are produced. So AI depends on energy, money, and actual political stability.

  • @yeroca

    @yeroca

    2 ай бұрын

    TSMC is building fabs in multiple countries now.

  • @alwaysyouramanda

    @alwaysyouramanda

    2 ай бұрын

    Catherine’s proof of life is going to fund Sam’s project

  • @Amadeus8484

    @Amadeus8484

    2 ай бұрын

    China has produced chips made out of Graphene, so the US attempts to turn Taiwan into Ukraine won't affect our chip supply, despite Brandon or Trump's best efforts...

  • @Redflowers9

    @Redflowers9

    2 ай бұрын

    So that's why the UK is "randomly" sanctioning China

  • @hermannabt8361

    @hermannabt8361

    2 ай бұрын

    He did. He asked for $7 trillion to correct this.

  • @logieman777
    @logieman7772 ай бұрын

    So brains are actually far cheaper, even if not perfect: maybe we should start training those

  • @Iv_john_vI

    @Iv_john_vI

    2 ай бұрын

    Training them is even more expensive... like paying 20 years of rent, food, electricity, books... and then they leave.

  • @oksanakaido8437

    @oksanakaido8437

    2 ай бұрын

    Well, there's already companies researching using brain organoids to do computations.

  • @nsshing

    @nsshing

    2 ай бұрын

    @@Iv_john_vI I think what he meant was raw human brains plugged into the servers 💀

  • @daydays12

    @daydays12

    2 ай бұрын

    Especially in the USA. Have you seen the interviews with MAGA cultists... they make even AI seem intelligent in comparison!!!! PS: AI is not intelligent... just artificial. It has zero common sense... like the MAGA cultists

  • @whatsupbudbud

    @whatsupbudbud

    2 ай бұрын

    😂

  • @badroad1000
    @badroad10002 ай бұрын

    3:21 a picture really is worth a thousand words

  • @icaleinns6233

    @icaleinns6233

    2 ай бұрын

    don't you mean a thousand dollars? 😂

  • @stevengill1736

    @stevengill1736

    2 months ago

    @@icaleinns6233 Depends on whether it's drawn by GPT-4 or GPT-5.... ;*[}

  • @badroad1000

    @badroad1000

    2 ай бұрын

    @@icaleinns6233 I said what I meant.

  • @daydays12

    @daydays12

    2 ай бұрын

    love it!! Yes...that's about it!

  • @boring7823

    @boring7823

    Ай бұрын

    Hmmmmm, yaesssss, not peer reviewed huh?

  • @rogercarlson2319
    @rogercarlson23192 ай бұрын

    1950s: Computers are so expensive that there will only be a few owned by big companies. 2024: AI is so expensive that there will only be a few owned by big companies.

  • @daydays12

    @daydays12

    2 ай бұрын

    Both are true. That's what Sabine says... We'll have to pay for AI as we do now for computing power. No change... big corporations make big money, except Trump's newly quoted corporation, which makes billions from thin air. No AI needed, just real unintelligence (MAGA cultists)

  • @danielmethner6847

    @danielmethner6847

    2 ай бұрын

    Well summarized

  • @naXzele

    @naXzele

    Ай бұрын

    Now imagine what kind of AI those big companies will own if everybody has a small portable AI (an aiPhone, if you will) at their disposal. I.e., the supercomputers of today = the super AI of tomorrow.

  • @elisaelisaross

    @elisaelisaross

    Ай бұрын

    The use of AI will be made accessible for the masses (as long as they can pay for it), but the providers of the AI services will be just a few companies. Exactly like today most people own a computer, but very few people own a company that produces computers.

  • @maniacslap1623

    @maniacslap1623

    Ай бұрын

    Yeah, Sabine came at this from a negative viewpoint. It’s not hard for good tech to catch on. I’m 32. We went from having desktops only in the library to laptops in most classes in 2 years. We all had smartphones by middle school lol. Our phones are very powerful AI in their own right. Take privacy laws out of the equation. Your phones and laptops could share data indiscriminately. Now imagine them having the ability to wirelessly split the processing load. Rudimentary, yes. Very smart and powerful, yes. Long way from being capable? Not at all. Her point about willpower was valid tho. We lack the discipline as a society. That’s mostly due to politics but still. Imagine if we’d allowed NASA to keep pushing the envelope after Apollo. We’d already be mining asteroids as opposed to just now starting 50 years later.

  • @TedSeeber
    @TedSeeber2 ай бұрын

    AI's real biggest problem: garbage in/garbage out

  • @deadeaded
    @deadeaded2 ай бұрын

    The biggest problem, I think, is that we're gradually coming to realize that having a fancy auto-complete isn't worth much if you care about things like truth and reliability. No one with any sense is going to put their trust in a machine that hallucinates.

  • @johnwollenbecker1500
    @johnwollenbecker15002 ай бұрын

    I’ll just continue to chuckle at the way the algorithms go off the rails because of programming bias.

  • @MyMy-tv7fd

    @MyMy-tv7fd

    2 ай бұрын

    Sometimes I can get exactly what I want - a straight answer to a factual problem (eg, estimate the population of India in 1612; or, minimum adequate temp for growing rosemary plants indoors); sometimes, on the same question later on, it will hedge and qualify to a degree amounting to plain obstruction. But it falls flat on its face every time it needs to do creative / philosophical thinking (eg, what is the underlying worldview of the film 'Transformers'; or, why does its discussion of vitamin B1 RDA blatantly contradict the recommended dose per supplement tabs by an order of magnitude...?). And it clearly easily gets lost in extra detail when you try to make a question more precise and nuanced; you have to feed it the basic question, then ask the refining supplementals as bluntly as possible. It also seems to get better and then get worse over time, very mystifyingly...

  • @DrinkyMcBeer

    @DrinkyMcBeer

    2 months ago

    @@MyMy-tv7fd That's because there is no actual intelligence. It's just a very sophisticated auto-complete that has such an expansive dataset to draw from that it creates the illusion of intelligence. The fact that the entire accumulated knowledge base of our entire species is needed to make something that can occasionally answer questions in a way that makes people think it's intelligent should be the telling part. No human requires us to be inundated with the sum total works of all of mankind to carry on a conversation.

  • @elinope4745

    @elinope4745

    2 ай бұрын

    bUT OUr AI wiLL HAve biAS If wE dON'T MAke iT biASEd!!!

  • @rodrigoserafim8834

    @rodrigoserafim8834

    2 ай бұрын

    To be pedantic, the programming is usually unbiased. Its all the guardrails and "safety" features that add the bias back in. The original networks weren't biased at all, they were a true reflection of a large scale sampling of the internet. But nowadays, even statistics can be racist.

  • @heitord5539

    @heitord5539

    2 ай бұрын

    Nice

  • @smartpowerelectronics8779
    @smartpowerelectronics87792 ай бұрын

    The match of the video clips with the text "training of the model" and "its regular use" is just brilliant 😂

  • @whatwherethere
    @whatwherethere2 ай бұрын

    The Concorde was really fast, and it really made you think air travel was going to take off. However, sooner or later the costs catch up with you in the end.

  • @garethrobinson2275

    @garethrobinson2275

    2 ай бұрын

    It was complicated, though. For example, the sonic boom was banned by the US over their land, which closed much of the potential market.

  • @daydays12

    @daydays12

    2 ай бұрын

    good one!

  • @adriang6424
    @adriang64242 ай бұрын

    Now that phone call ending gave me a big laugh, we both have the same sense of humour it seems.

  • @dp055

    @dp055

    2 ай бұрын

    6:38 😂

  • @Razmoudah

    @Razmoudah

    2 ай бұрын

    Yeah, that was the best phone call yet.

  • @tonysheerness2427

    @tonysheerness2427

    2 ай бұрын

    Yes they are tearing up the PSTN network so you can not opt out.

  • @Razmoudah

    @Razmoudah

    2 ай бұрын

    @@tonysheerness2427 ????????? Are you sure you replied to the correct comment?

  • @tonysheerness2427

    @tonysheerness2427

    2 ай бұрын

    @@Razmoudah Yes, she had an old-style telephone, you know, the old PSTN ones with a cable that went into the wall and ran on old copper wiring.

  • @alieninmybeverage
    @alieninmybeverage2 ай бұрын

    I love the rationalizations on offer by Altman and others whenever asked about the energy problem. The answer is: maybe it will push us to finally make fusion work!! Translation: ADAPT OR DIE, subtext: OR BOTH

  • @mikespangler98

    @mikespangler98

    2 ай бұрын

    If Altman thinks he can do better on fusion he's welcome to try.

  • @DR_1_1

    @DR_1_1

    2 ай бұрын

    Or just stop wasting energy.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    You will be assimilated.

  • @JO-ui9fl

    @JO-ui9fl

    2 ай бұрын

    Maybe the AI will come up with the answer 😂

  • @Syphirioth

    @Syphirioth

    2 ай бұрын

    I think his idea of having small nuclear reactors is nice in one way. But if we're going to think in apocalyptic Terminator ways, it means we cannot shut it down easily lol. But yes, it might boost us into better ways of energy generation instead of ruining the landscape with wind turbines, causing turbulence where there was a lot less turbulence in the past, creating problems for phytoplankton in the North Sea and such, and probably affecting weather patterns too. And fewer reflective surfaces, because fewer solar panels on ground that was grass or forest before, absorbing light; just keep them on roofs and walls, maybe even over roads etc. to reduce urban heat.

  • @danielmcwhirter
    @danielmcwhirter2 ай бұрын

    I read a premise that computing alone will require as much energy by 2050 as the whole world now uses and that the ideal location is in outer space...24 hour sunlight and other space base benefits...although, outer space can be highly energetic with particles that erode materials and penetrating radiation that could produce product defects.

  • @danielmcwhirter

    @danielmcwhirter

    2 ай бұрын

    Move over JWST, we want in that neutral gravity spot also!

  • @SabineHossenfelder

    @SabineHossenfelder

    2 ай бұрын

    Yes, well, minus the problem of how to get the data from there to us I guess

  • @scaffus

    @scaffus

    2 ай бұрын

    a 5 000% increase? Or am I off the rails, it seems ridiculous

  • @abj136

    @abj136

    2 ай бұрын

    @@scaffus Double every 18 months is the standard, so not ridiculous.

  • @ShonMardani

    @ShonMardani

    2 ай бұрын

    Just like everything else they slaved the students to build the AI from stolen private data. For profit companies had to pay for the labour and could not use stolen data.

  • @bastiangugu4083
    @bastiangugu40832 ай бұрын

    You are basically right. I read estimates that predict that datacenters are to consume about 30% of the energy worldwide in a few years. Some think that we can't build power plants fast enough to keep up. Any kind of power plant, mark you. On the other hand, there are many companies working on solutions for edge computing and much more efficient chips for inference workloads. Memory will be a constraining factor here, if the models are getting bigger like they did in recent times. But there's also work done on this. Increasing the sparsity of models is a very active field of research.

  • @heitord5539

    @heitord5539

    2 ай бұрын

    Fuck! 30%? Oh man, that's going to be quite a serious problem!

  • @asdfasdfasdf1218

    @asdfasdfasdf1218

    2 ай бұрын

    I don't believe AI will be even a small fraction of the datacenter energy use though. Most will just be powering websites like Google and YouTube. AI usage is not nearly high enough to make a dent.

  • @tomholroyd7519

    @tomholroyd7519

    2 ай бұрын

    By the time that happens we will have increased capacity by 30%. Just by replacing the electrical transmission cables with more efficient ones. Sabine can do a video on that sometime; and the computers will continue to become smaller and more efficient.

  • @asdfasdfasdf1218

    @asdfasdfasdf1218

    2 ай бұрын

    @@tomholroyd7519 Also, not to mention, somehow a mouse's brain is able to train a neural network more capable than current robots without using extreme amounts of energy. So energy-efficient AI is definitely possible.

  • @gelmir7322

    @gelmir7322

    2 ай бұрын

    Weren't there certain locations on Earth whose solar energy exposure would be enough to power the entire human civilization or something? Maybe they can build the facilities there instead.

  • @mikespangler98
    @mikespangler982 ай бұрын

    Don't forget they will have to retrain these models on a regular basis too. How regular? We don't know that yet.

  • @pirobot668beta

    @pirobot668beta

    2 ай бұрын

    Models that learn/adapt will take care of their own updates. There are at least two systems that can evaluate their own decisions and 're-write' parts of their models. The downside? Slow processing and very specialized models. 'Learn as you go' is how humans do things... AI is playing catch-up!

  • @mariovicente

    @mariovicente

    2 ай бұрын

    Well said. Easy to "forget" that part.

  • @Leonhart_93

    @Leonhart_93

    2 ай бұрын

    @@pirobot668beta As far as I know they have no LLMs that learn over time from user input. Right now the only thing they remember is the current conversation, aka the current context. Data can't really be added to a complex model the way you do additions. It has to be recomputed into the network.

  • @bogdyee

    @bogdyee

    2 months ago

    @@pirobot668beta They are unsupervised up to a level but still require manual fine-tuning, and they also do not learn as they go.

  • @BooleanDisorder

    @BooleanDisorder

    2 ай бұрын

    Not agents. That's the whole point of an agent - they can simply look up facts instead of relying on the parameters, and stay up to date the way we do. Stop looking at [current time] and look at the trends etc.

  • @utkua
    @utkua2 ай бұрын

    At least using humans as batteries doesn't seem to be worth it.

  • @AnnNunnally
    @AnnNunnally2 ай бұрын

    Perhaps we should invest more time and money in how to generate more energy.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    @@rjohnm666 It is quite the reverse, quite the reverse.

  • @nathanbanks2354

    @nathanbanks2354

    2 ай бұрын

    Plus we can ask the AI to design new power plants. Maybe it can make a better stellarator.

  • @maritaschweizer1117

    @maritaschweizer1117

    2 ай бұрын

    Totally agree, it is naive to think AI could do research. We need better computers to push energy research ahead.

  • @anonimous_user7318
    @anonimous_user73182 ай бұрын

    Increasing the efficiency of AI computations may only increase the demand for AI and lead to increased overall energy use. See Jevons' paradox.

  • @johelsen5776

    @johelsen5776

    2 ай бұрын

    This isn't even a question. OF COURSE that's how it will go. Like what building extra highway lanes does to traffic: you just attract MORE, and after a while the problem has become even more intractable.

  • @anonimous_user7318

    @anonimous_user7318

    2 ай бұрын

    @@johelsen5776 The exact effect on total energy use depends on the elasticity of demand for AI computations. But yes, Jevons' Paradox is important when considering the effects of efficiency gains.

  • @donaldhobson8873

    @donaldhobson8873

    2 ай бұрын

    At some point the AI kills all humans, and then builds a dyson sphere. At this point human demand for AI is 0, but it uses Loads of energy.
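
A minimal sketch of the rebound effect described in this thread, under an assumed constant-elasticity demand model; the elasticity values below are illustrative, not estimates of real demand for AI compute:

```python
# Rebound (Jevons) effect sketch: whether an efficiency gain lowers or raises total
# energy use depends on the price elasticity of demand for AI compute.
# All numbers below are illustrative assumptions, not measured values.

def total_energy(efficiency_gain: float, elasticity: float,
                 base_demand: float = 1.0, base_energy_per_unit: float = 1.0) -> float:
    """Relative total energy after an efficiency improvement.

    efficiency_gain: e.g. 2.0 means each query needs half the energy.
    elasticity: demand is assumed to scale as efficiency_gain ** elasticity.
    """
    energy_per_unit = base_energy_per_unit / efficiency_gain
    demand = base_demand * efficiency_gain ** elasticity  # cheaper compute -> more use
    return demand * energy_per_unit

for elasticity in (0.5, 1.0, 1.5):
    print(f"elasticity {elasticity}: total energy x{total_energy(2.0, elasticity):.2f}")
# elasticity < 1: total energy falls; == 1: unchanged; > 1: Jevons' paradox, it rises.
```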

  • @davidbonn8740
    @davidbonn87402 ай бұрын

    I think it is super-important to keep in mind that large neural networks are obviously an intermediate step on the trip to a more generalized AI. And extrapolating from that current technology to what the future might hold is probably pretty risky. Keep in mind that biological systems, not just mammal brains, seem to be orders of magnitude more efficient learners than neural networks with orders of magnitude less power consumption. We have so much to learn and discover.

  • @levilukeskytrekker

    @levilukeskytrekker

    2 ай бұрын

    It's interesting, because biological systems have a lot less in common with mechanical ones than is popularly thought-"neural networks" and actual neurology have very little in common in reality (e.g. there are multiple species of neurons, and their primary interaction is via chemical reaction, which makes them more akin to an ecosystem than a computer), but biologists can barely use Excel, and software engineers don't know where their own galbladders are, so both disciplines borrow liberally and obliviously from each other's terminology. We've already been able to create "artificial" life by reprogramming frog stem cells-they call them xenobots-but they promptly began to evolve in the petri dish, inventing a way to reproduce they were not engineered to have (they weren't supposed to be able to reproduce at all, they just creatively developed a means). Any "biological computation system" would likely deviate rapidly from any specified parameters, if it was not already conscious and self-determined to begin with (keep in mind something as small as a T cell is conscious enough to dynamically and non-deterministically pursue bacteria and make the necessary decisions to do so).

  • @caty863

    @caty863

    2 ай бұрын

    @@levilukeskytrekker Your use of the adjective "conscious" here is very generous. Also, a system can be very complex for us to decipher (today) but that doesn't make it non-deterministic.

  • @DEBO5

    @DEBO5

    2 ай бұрын

    Read “Principles of Synthetic Intelligence” by Joscha Bach

  • @user-jq6ro5rt6s

    @user-jq6ro5rt6s

    2 ай бұрын

    "we" do not have much more to learn and discover. as soon as a generalized ai comes (within 1 or 2 years now) mass amounts of unemployment will come. and once hardware catches up expect even more. within a decade probably 99% of people who aren't billionaires are going to be unemployed and either dependent on the government (if we can haggle for a ubi), or , much more likely, starving/dead. the billionaires definitely have a lot to learn though

  • @whome9842

    @whome9842

    2 ай бұрын

    @@user-jq6ro5rt6s If nobody has an income, money no longer has value. Nobody can buy products because they don't have money, and nobody can sell because nobody has money to buy.

  • @rorag111
    @rorag1112 ай бұрын

    Another aspect is water to cool the chips. I only had a quick search, but it appears to take 10-100 mL to cool the heat from a text prompt. That water can be reused eventually, but with how many prompts are handled continuously that is an enormous amount of base water needed, and staggering amounts of heat released into the environment.

  • @Sergeeeek

    @Sergeeeek

    2 ай бұрын

    Are they using water cooling in data centers? It's a maintenance nightmare.

  • @daydays12

    @daydays12

    2 ай бұрын

    My brain , which can multitask many more things than AI, needs no cooling!

  • @OgdenM

    @OgdenM

    2 ай бұрын

    Hrm, seems to me that we should harness the heat for energy production. Might not produce much, but it would at least produce some AND cool off the water faster.

  • @ferb1131

    @ferb1131

    2 ай бұрын

    ​@@daydays12 That's not really true. Your brain is liquid-cooled by your blood, and if you get hyperthermia you'll become delirious because your brain can't function properly.

  • @thulyblu5486

    @thulyblu5486

    2 ай бұрын

    Just combine cooling the chips with the heating system of the building (and heating of neighboring houses if there's heat left over), problem solved. Everybody can take warm showers then. Dual use!

  • @mrschneideriii
    @mrschneideriii2 ай бұрын

    Smaller LLMs have been outperforming larger ones. There are a variety of reasons for this. Small models are much cheaper to train and operate. It’s not an LLM, but I’m running Stable Diffusion on my laptop and spending about 40€ a month in electricity (in Germany) for a work load that would run me 10X as much with a Midjourney subscription. The 4090gpu paid for itself in half a year by that measure. Even if I were to run Stable Diffusion in the cloud, I’m cutting my cost in half running it locally. LLMs are starting to perform well at near Stable Diffusion sizes with proper fine-tuning, and a Macbook can run a medium sized LLM. The AI assistant is around the corner and its energy cost will be cheaper than current subscription models. The M2 Macbooks are very efficient.

  • @ronnetgrazer362

    @ronnetgrazer362

    2 ай бұрын

    I expect train-as-you-go to become a thing even (or especially?) for open source in the coming years, so a small business could expand the trainee's workload as it takes on more tasks, and simply buy more compute, whether cloud or locally run, as the deployment requires. Great strides have been made in terms of efficiency, as you point out. So I don't see the problems Sabine is talking about. This technology is open enough to allow for plenty of competition, and any company that becomes too big will lose out to the fast movers in this changing landscape, *and the rate of change will keep increasing, forever keeping total market domination out of reach.*

  • @vitmartobby5644

    @vitmartobby5644

    2 ай бұрын

    ​@@ronnetgrazer362 There is also another hope, Mamba architecture. With a linear complexity on training and constant on inference, it would vastly decrease energy costs while also increasing our models quality

  • @ronnetgrazer362

    @ronnetgrazer362

    2 ай бұрын

    @@vitmartobby5644 Very promising tech, let's hope it scales well. Also, there's that ternary thing to give the current setups a boost in the short term. So many papers, I can't keep up. As for longer term developments, things like thermodynamic computing, molecular computing, photonics, I mean at least one of 'em is bound to work with AI doing AI research. Future's lookin' bright!

  • @jeremiasteliasperez4507

    @jeremiasteliasperez4507

    2 ай бұрын

    ​@@ronnetgrazer362Hard to share your optimism when big companies usually wait for small ones to bear the risks of R&D, only to finally proceed to buy the few successful ones. Deep pockets yield an extraordinary amount of market power at the end of the day.

  • @chookbuffy

    @chookbuffy

    2 ай бұрын

    @@ronnetgrazer362 Too much hope. We are likely not to have enough net energy gain from the transition to renewables to power our civilisation (see Net Energy or EROI discussions etc.), and you will see that it's a pipe dream and will take us all to a cliff.
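
A rough cross-check of the ~40 EUR/month local-generation figure mentioned earlier in this thread; the average draw, duty cycle, and electricity price are all assumptions:

```python
# Rough sanity check of the "~40 EUR/month for local Stable Diffusion" figure above.
# All inputs are assumptions for illustration (average draw, hours, electricity price).

avg_draw_watts = 350          # assumed average draw of a heavily used RTX 4090 rig
hours_per_day = 10            # assumed generation workload per day
price_eur_per_kwh = 0.40      # assumed German household electricity price

kwh_per_month = avg_draw_watts / 1000 * hours_per_day * 30
cost_per_month = kwh_per_month * price_eur_per_kwh
print(f"{kwh_per_month:.0f} kWh/month  ->  {cost_per_month:.0f} EUR/month")
# ~105 kWh/month and ~42 EUR/month, in the same ballpark as the comment's estimate.
```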

  • @VFella
    @VFella2 ай бұрын

    @Sabine We are training an LLM here at our premises for our own use, and one of the most common uses of our supercomputer is training AI models. We are also involved in measuring the energy consumption of processes (software, not only hardware) and working on several optimization projects. So we may soon have some hard numbers for that. Actually, SLURM, the scheduler, can provide the amount of kWh / joules used per computing job run.
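
A minimal sketch of pulling that per-job figure out of SLURM accounting, assuming the cluster's energy-gathering plugin is enabled so that the ConsumedEnergyRaw field (reported in joules) is actually populated:

```python
# Minimal sketch of reading per-job energy from SLURM accounting, assuming the
# acct_gather_energy plugin is configured so ConsumedEnergyRaw (joules) is populated.
import subprocess

def job_energy_kwh(job_id: str) -> float:
    out = subprocess.run(
        ["sacct", "-j", job_id, "--noheader", "-P",
         "--format=JobID,Elapsed,ConsumedEnergyRaw"],
        capture_output=True, text=True, check=True,
    ).stdout
    joules = 0.0
    for line in out.splitlines():
        fields = line.split("|")
        if len(fields) == 3 and fields[2]:
            # job and step rows can repeat totals; take the max as a rough aggregate
            joules = max(joules, float(fields[2]))
    return joules / 3.6e6  # 1 kWh = 3.6 MJ

# Example with a hypothetical job id:
# print(f"{job_energy_kwh('123456'):.2f} kWh")
```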

  • @randy9664
    @randy96642 ай бұрын

    Always enjoy your videos and your humor. Thanks for the info and chuckles.

  • @BANKO007
    @BANKO0072 ай бұрын

    Slight error at 4:07. 400TWh is more like the electricity consumption of the UK, not the whole of the energy consumption, which is far larger.

  • @danielmcwhirter
    @danielmcwhirter2 ай бұрын

    Nvidia showed off their newest mega-chips and systems for AGI development and use. $10 billion cost from concept to commercial prototype. I think they hinted at millions of dollars as the chip's price. Oh! But the new megachip executes 4,000 times more and faster than their current best chip, while using the same energy.

  • @afterthesmash

    @afterthesmash

    2 ай бұрын

    "The new Blackwell chip, unveiled Monday, will cost anywhere between $30,000 and $40,000, Jensen Huang said on Tuesday. The chip has 208 billion transistors, 128 billion more than its Hopper predecessor, is capable of five times the AI performance, and can help reduce AI inference costs and energy consumption by up to 25 times." Based on my past experience with this kind of press release, most likely, a typical speed-up is in the range of 5x. Comparing what little was known at the time of the first announcement, it goes from 700 W to 1000 W TDP with roughly twice as many transistors and twice as much bandwidth. At first blush, that's a 2x performance gain for a 1.4x power increase. One of these to replace two of the older chips saves you about 400 W at the TDP margin. (400 W) * (1 year) * (10 cents per kWh)) = $350.63/year It'll take a long while to pay off the marginal cost at that savings rate. Bandwidth is the most robust substrate for final performance across the high-performance chip market, but there are special markets where other architectural parameters dominate. You can sometimes also see extra factors of 2 or 4 or 8.

  • @richdobbs6595
    @richdobbs65952 ай бұрын

    How much value will AI generate if the inputs are garbage, and if the outputs are turned to garbage in the name of "safety"?

  • @DR_1_1

    @DR_1_1

    2 ай бұрын

    It's all about the narrative.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    Already happened, it is worthless online.

  • @aaronjennings8385

    @aaronjennings8385

    2 ай бұрын

    So true

  • @TheBagOfHolding

    @TheBagOfHolding

    2 ай бұрын

    ChatGPT thinks I'm racist and doesn't answer my questions, just gives me excuses.

  • @christiandarkin
    @christiandarkin2 ай бұрын

    I'd have to say that 1,300 MWh to train something as world-changing as GPT-3 does not sound like a lot. Even 10x or 100x that sounds like a bargain energy-wise, considering the potential for saving energy across all the tasks that would otherwise be done in other ways.

  • @DR_1_1

    @DR_1_1

    2 ай бұрын

    How is that world changing. And I know what it can do, officially.

  • @argon7624

    @argon7624

    2 ай бұрын

    It's not world changing, it's just fancy word prediction. Not to mention that its output is just slight variations on the complete average of what a human would put in a situation, something true for all neural network learning models. It can't think, it can't make anything new, it can't come up with ideas or say anything that hasn't been said before. It's just a really, really, really expensive toy that people use instead of paying workers to make a good product. This isn't to mention that it actively gets worse when the training data is compromised with neural network generated content.

  • @daydays12

    @daydays12

    2 ай бұрын

    gpt3 world changing?? Explain!!!
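
To put the ~1,300 MWh training estimate from the top of this thread in perspective, here is the arithmetic against an assumed household electricity use of about 10,000 kWh per year (roughly a US average):

```python
# Putting the ~1,300 MWh GPT-3 training estimate from the top of this thread in
# perspective. The household figure is an assumption (roughly a US average).
training_mwh = 1300
household_kwh_per_year = 10_000   # assumed annual electricity use of one household

household_years = training_mwh * 1000 / household_kwh_per_year
print(f"~{household_years:.0f} household-years of electricity")   # ~130
```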

  • @kerbaman5125
    @kerbaman51252 ай бұрын

    The "issue" you stated will only encourage more research into smaller and easier to train models, which are already outcompeting larger models on a cost to performance basis, but when fine tuned (much more expensive on larger models), they are simply better.

  • @pirobot668beta

    @pirobot668beta

    2 ай бұрын

    So folks should use a #3 Philips screw-driver to drive #3 Philips screws, and not a big hammer? Large models try (and often fail) to do everything under the Sun. I am a huge fan of model-merging...the results are kind of dream-like, but then again I use image generation to escape from the real world...

  • @seeibe

    @seeibe

    2 ай бұрын

    [citation needed] I do hope for smaller useful models down the line, but currently I simply can't use small models for tasks like coding. GPT-4 has some capabilities I'd call "emergent" which the smaller models still lack.

  • @wiktorzdrojewski890

    @wiktorzdrojewski890

    2 ай бұрын

    hey, can you provide examples? thanks

  • @dw620

    @dw620

    2 ай бұрын

    @@seeibe 100%. Emergent properties of large models are where the payback is likely to be. (For whatever definition of "payback" humanity receives.)

  • @osamabinladen6070

    @osamabinladen6070

    2 ай бұрын

    KSP ALL LIFE MAN

  • @edwardlulofs444
    @edwardlulofs4442 ай бұрын

    Good video, thank you. Very few, if anyone, could understand and explain this important aspect of AI.

  • @Istandby666
    @Istandby66615 күн бұрын

    Love the phone call bit. They always crack me up.

  • @bernardfinucane2061
    @bernardfinucane20612 ай бұрын

    This is connected to some remarks you made about quantum computing: You said QC was simulations, not general computing -- that is, not Turing complete. There is some truth to that. Interestingly, something similar might be happening with neural networks. A new generation of chips can only do neural networks. They could be seen as simulators, not Turing complete computers, but would be much less energy hungry. There is even talk of building hardware specifically for individual models, which would be even more efficient and even more like a simulation. About all that energy, it gets exported as heat and could be sold for district heating etc. We waste so much energy that it is a bit silly to worry about where to get more.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    Yes, we need less heat.

  • @nexussays

    @nexussays

    2 ай бұрын

    Neural networks are Turing complete. You can make an AND gate with three perceptrons. That they are Turing-complete is a core reason *why* neural networks have been so promising for so long.

  • @absalomdraconis

    @absalomdraconis

    2 ай бұрын

    ​@@nexussays : Can they compute the NOT operation? Without that, they aren't Turing Complete.

  • @yarno8086

    @yarno8086

    2 months ago

    @@absalomdraconis Well, isn't a simple weight of -1 a NOT, then?

  • @user-uf4rx5ih3v

    @user-uf4rx5ih3v

    2 ай бұрын

    @@nexussays Neural networks are Turing complete (IE you can find a network to simulate any other Turing machine). In practice that's basically irrelevant because there is no obvious way to model most computations in this way. The game minesweeper is Turing complete in a sense but using a minesweeper model for computing is basically pointless.
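
A minimal sketch of the gate question discussed in this thread: a single threshold unit with hand-picked weights covers AND, NOT and NAND, and NAND alone is functionally complete for Boolean logic, which is the usual core of the completeness claim:

```python
# The thread above asks whether simple perceptrons cover NOT as well as AND.
# A threshold unit with hand-picked weights handles AND, NOT and NAND; NAND alone
# is functionally complete, which is the usual basis of the completeness claim.

def perceptron(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

AND  = lambda a, b: perceptron([1, 1], -1.5, [a, b])
NOT  = lambda a:    perceptron([-1],  0.5,  [a])      # a negative weight implements NOT
NAND = lambda a, b: perceptron([-1, -1], 1.5, [a, b])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "NAND:", NAND(a, b))
    print("NOT", a, "=", NOT(a))
```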

  • @itzhexen0
    @itzhexen02 ай бұрын

    The issue is that OpenAI isn't the only AI company. That's energy for every one of these companies' AIs, not just ChatGPT.

  • @garrytuohy9267
    @garrytuohy92672 ай бұрын

    I wonder how much energy will be consumed by TinyML running on every microcontroller. Individually it cannot be much, but combined across the majority of microcontrollers it could add up.

  • @PiyushGupta-vx6qi

    @PiyushGupta-vx6qi

    2 ай бұрын

    Basically you mean IoT

  • @davidpetersonharvey
    @davidpetersonharvey2 ай бұрын

    That's a good discussion of the real problems. I never looked at power consumption but the whole reasoning model and size of the linked lists in the neural net seemed really inefficient to me. Thanks for the good video.

  • @icaleinns6233
    @icaleinns62332 ай бұрын

    Ok, THAT Red Phone call was absolutely hilarious!

  • @fhdhcjshf1288
    @fhdhcjshf12882 ай бұрын

    Thanks to Brilliant for sponsoring a lot of scientific videos on YouTube (ElectroBOOM fan here) 😄

  • @lloydster2000
    @lloydster20002 ай бұрын

    I am confused. I thought that we agreed on this channel that LLMs like ChatGPT etc., were NOT really AI at all. My personal experience with ChatGPT is that it can barely stop contradicting itself within a chat session, and spends most of its time "apologizing for the misunderstanding". It certainly doesn't seem like something that is worth powering with SMRs.😅 At least not yet...🤔

  • @mctrivia

    @mctrivia

    2 ай бұрын

    What are you using it for, and are you using ChatGPT 4? ChatGPT 4 is far from perfect, but it is smart enough at programming that I can now get a week's work done in a day. The important thing is to give it relatively small tasks and have a way to check it is correct.

  • @lloydster2000

    @lloydster2000

    2 ай бұрын

    Hi there@@mctrivia! 🙂 It is ChatGPT4, and I agree entirely that it is widely reported to be good at certain tasks (like writing computer code). I don't doubt this. I personally, am not really using it for anything at all. I am just trying it out for fun. I have been experimenting with the question of whether I think it can really be considered to be AI. And by that I mean "Is it plausibly a step along the path to AGI?". I have been asking it questions that I believe a human would be able to answer, and seeing what results it produces. I push it to clarify its replies. I ask it to provide references, etc. In my experience it performs remarkably badly at these relatively simple tasks.

  • @jrjrjrjrjrjrjr

    @jrjrjrjrjrjrjr

    2 ай бұрын

    @@lloydster2000 That's an understatement. It's a disaster. Most answers are simply wrong, often even "lies", i.e. ad hoc inventions of nonsense. The disaster is that people do not notice that and use those wrong answers in everyday life and even publish them in other texts. This will result in a self-reinforcing circle of misinformation.

  • @whatsupbudbud

    @whatsupbudbud

    2 ай бұрын

    Indeed, LLM's are glorified word generators. Just because "it makes sense" that some word is going to fit next after the previous one(s) does not mean it is the right fit. Anything resembling intelligence could instantly grasp this, alas LLM cannot because they are not intelligent in the slightest. As always, FOMO is leading the pack.

  • @ferb1131

    @ferb1131

    2 ай бұрын

    @@lloydster2000 Then I think you're just misunderstanding the term AI. AI is a term that has been used for decades to apply to all sorts of things like pathfinding algorithms and equation solvers - basically any time a computer created the false appearance of doing something more intelligent than processing numbers. Everybody always knew that those things couldn't do most things a human could do, weren't remotely close to AGI, but nobody ever for that reason claimed that they couldn't "really be considered to be AI'.

  • @cheesium238
    @cheesium2382 ай бұрын

    Also there is the cooling of all those centers guzzling electricity like there's no tomorrow. The amount of water that has to be diverted to them is astonishing.

  • @jamesmac357
    @jamesmac3572 ай бұрын

    Intel, IBM, and NVIDIA are working to integrate more AI computations on chips, and clusters of chips, in order to cut energy costs. One advantage of putting everything on one chip is the lack of extra wiring and its resistance, thus the CPU works faster: Meaning, one strategy is to place everything closer together, including for super computers, where there are racks of CPU chips working closely to each other. I have a new generation PC, and the Intel CPU has AI for speech on the chip. The next step is AI graphics, and NVIDIA is working on that.

  • @ericlehman839
    @ericlehman8392 ай бұрын

    Reasoning about the long term (say, > 2 years :-) ) based on current ML practices might be suspect... In particular, models may stop being trained from scratch at extremely high cost; rather, we may move to a continuous-update approach. Also, today's sharp distinction between training and inference may dissolve, at least for some applications. Notably, biological brains continuously adapt, so this is certainly possible in principle. An offbeat way to think about AI power consumption is to assume that human brains are very, very power efficient. Humanity runs about 25 watts per person x 8 billion people = 200 gigawatts worth of thinking power. If we want that amount of artificial thinking power, we might need a multiple of that power. Not saying this is an especially precise method, but fun to think about!
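
The arithmetic behind that estimate, spelled out:

```python
# The arithmetic behind the estimate above: ~25 W per human brain, ~8 billion people.
watts_per_brain = 25
people = 8e9
total_gw = watts_per_brain * people / 1e9
print(f"{total_gw:.0f} GW of biological 'thinking power'")   # 200 GW
```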

  • @AndroidFerret
    @AndroidFerret2 ай бұрын

    Isn't that awesome? For this little of our non rechargeable resources we can create awesome 7 fingers furry corn and have a dementia riddled, hallucinating Data clone who likes to play being someone else when you ask the right things.

  • @Leonhart_93

    @Leonhart_93

    2 ай бұрын

    That's why it's unsustainable. Right now it technically generates money because people throw money at it to see it do cool random stuff. They will get bored eventually.

  • @sunbeam9222

    @sunbeam9222

    2 ай бұрын

    Everything gets its origins from randomness first. Human then assemble the pieces and claim ownership. That's nice that it finally caught up with their brain but discovery always starts from random and weird and impossible.

  • @daydays12

    @daydays12

    2 ай бұрын

    nice one 🙂

  • @derekgarvin6449
    @derekgarvin64492 ай бұрын

    Great reporting! Where science meets the business case, your reporting does the public a huge service. Hopefully the business community picks up on this and broadens your viewership

  • @saisuapalli
    @saisuapalli2 ай бұрын

    There were some promising technologies on non-digital Tensor GPUs. I only heard about them from a lab researching them so take my words with caution, but the idea was to, instead of using digital representations of floats, use voltage as the *approximated* floats and operate on that. What I heard (and it made sense to me) was that it reduced the power consumption by a lot, because now you don't need a minimum voltage per bit, and the smaller the values computed with, the more energy efficient it was (I expect that LLMs actually have a lot of small values due to their versatility)
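
A toy illustration of the trade-off described above: representing values as analog quantities avoids the per-bit overhead but makes every multiply approximate. The 2% noise level here is an arbitrary assumption, not a property of any real analog tensor hardware:

```python
# Toy illustration of the trade-off described above: analog (voltage-based)
# multiply-accumulate saves energy but is only approximate. The 2% relative noise
# is an arbitrary assumption, not a measurement of real analog hardware.
import random

def analog_dot(weights, activations, rel_noise=0.02):
    total = 0.0
    for w, x in zip(weights, activations):
        ideal = w * x
        total += ideal * (1 + random.gauss(0, rel_noise))  # each analog product is noisy
    return total

w = [0.1, -0.4, 0.25, 0.05]
x = [1.0, 0.5, -2.0, 3.0]
exact = sum(wi * xi for wi, xi in zip(w, x))
print("exact:", exact, " analog-ish:", round(analog_dot(w, x), 4))
```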

  • @rolty1
    @rolty12 ай бұрын

    Brilliant as usual! Love it ❤

  • @user-eb1zv6sr9e
    @user-eb1zv6sr9e2 ай бұрын

    The phone call at the end 💗

  • @stuartspence9921
    @stuartspence99212 ай бұрын

    We do have concrete numbers. The model card for Llama2 talks about energy consumption and carbon emissions.

  • @yqisq6966
    @yqisq69662 ай бұрын

    6:04 I like how you hesitated when trying to spell the word "compute" lol.

  • @liquidKi
    @liquidKi2 ай бұрын

    3:30 Sabine, I think it's important to test these things for yourself. An iPhone 12 has a battery capacity of about 14 watt hours. A Macbook Air M2 has a capacity of 52 watt hours. I can create images on this laptop using Stable Diffusion, and I can create about 4 images per 1% of battery, thus each image consumes only 0.125 watt hours, so I could create over 100 images in the charge of a smartphone, 1/100th of what you claimed.

  • @sudpud

    @sudpud

    2 ай бұрын

    What resolution? Doesn’t the time and energy scale quite a bit with larger resolutions? Even so, i think your macbook is one of the most efficient generators. My 3090ti is a lot faster but burns far more energy.

  • @robo5013

    @robo5013

    2 ай бұрын

    Do you have Stable Diffusion on the computer or are you using it online?

  • @v1kt0u5

    @v1kt0u5

    2 ай бұрын

    @@robo5013 He meant locally.

  • @GizzyDillespee

    @GizzyDillespee

    2 ай бұрын

    Each of the opposing sides seems to cherry pick their statistics from the opposite extremes, and they each can justify their decisions to do so. So, the rest of us have to take the statistics as boundaries between what happens if the best or the worst of our instincts take over... and yet, it's also possible that the actual amount of energy demanded will fall outside of those boundaries at some point, for whatever reason. So, we don't really have any idea... but it's safe to assume there aren't intelligent plans in place to supply that energy in the cleanest way possible.

  • @Leonhart_93

    @Leonhart_93

    2 ай бұрын

    The energy efficiency of a GPU per compute varies a lot depending on the model. Usually the most powerful GPUs aren't necessarily the most energy efficient per compute.
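
The battery-based arithmetic from the first comment in this thread, spelled out with the commenter's own figures:

```python
# The arithmetic from the first comment in this thread, spelled out.
# The inputs are the commenter's own figures, not independently measured.
laptop_battery_wh = 52       # MacBook Air M2, approx.
images_per_percent = 4
phone_battery_wh = 14        # iPhone 12, approx. (per the comment)

wh_per_image = laptop_battery_wh / 100 / images_per_percent
images_per_phone_charge = phone_battery_wh / wh_per_image
print(f"{wh_per_image:.3f} Wh/image, ~{images_per_phone_charge:.0f} images per phone charge")
# ~0.13 Wh/image and ~108 images per one phone charge's worth of energy.
```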

  • @UNr34
    @UNr342 ай бұрын

    I love how Sabine went from tossing AI as a gimmick, to warning people about the impending apocalypse😨

  • @archimedesbird3439

    @archimedesbird3439

    2 ай бұрын

    It's a deadly gimmick

  • @andrewhooper7603

    @andrewhooper7603

    2 ай бұрын

    I'm not worried about AI as much as I'm worried about the clueless executives who think it can do anything. Wasn't there a company who liquidated their entire customer service department and replaced it with essentially an interactive FAQ?

  • @daydays12

    @daydays12

    2 ай бұрын

    She didn't warn about an apocalypse .That must have been in your imagination..or may be you are a bot.

  • @ArcanePath360
    @ArcanePath3602 ай бұрын

    The highlighted limits of AI actually give me hope. Thanks

  • @Thomas-gk42
    @Thomas-gk422 ай бұрын

    Thank you, have a nice day.

  • @wpelfeta
    @wpelfeta2 ай бұрын

    There's definitely a lot of room for improvement in the energy efficiency of AI. The brain is orders of magnitude more efficient, so there is precedent that it is possible to improve efficiency.

  • @williamforsyth6667

    @williamforsyth6667

    2 ай бұрын

    "The brain is orders of magnitude more efficient" Really? May brain requires about 20mWh/LLM prompt result, while the video said LLMs need a few mWh/prompt. (I was generous and assumed that I can produce 1000 outputs per hour.)

  • @tedmoss

    @tedmoss

    2 ай бұрын

    The brain is not orders of magnitude more efficient, they are doing different things, you are comparing apples to oranges. The brain is a biological computer and the AI is run on a silicon computer, there are many ways to get AI more efficient and few ways to do that with our brains.

  • @zhoulingyu

    @zhoulingyu

    2 ай бұрын

    Yup, you just found an entrance to the next generation job market.

  • @absalomdraconis

    @absalomdraconis

    2 ай бұрын

    ​@@williamforsyth6667 : mW/prompt is a different unit than mWh/prompt. That "h" is actually important.

  • @williamforsyth6667

    @williamforsyth6667

    2 ай бұрын

    @@absalomdraconis Thanks, fixed.

  • @user-uj9cc5ch5p
    @user-uj9cc5ch5p2 ай бұрын

    Xcellent video Sabine. I want to learn all I can about AI. Mr. X

  • @Charvak-Atheist
    @Charvak-Atheist2 ай бұрын

    If we could control the resistivity of a wire digitally, then the energy use would drop drastically (neuromorphic chips). It's basically doing matrix multiplication: controlling the resistivity acts as controlling the weights, and at the nodes we can add charge to act as biases.

  • @fontende

    @fontende

    2 ай бұрын

    or you can sanction russia, so they'll supply oil almost free to china, genius (that scheme can be only by Ai because russians not so dumb to decline profit)
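
A minimal digital simulation of the crossbar idea described above: weights stored as conductances, inputs applied as voltages, and each output read as a summed current, so Ohm's and Kirchhoff's laws do the multiply-accumulate. Purely illustrative, with no real device model:

```python
# Minimal digital simulation of the analog idea above: store weights as conductances G,
# apply inputs as voltages V, and read each output line as a summed current
# I = sum(G * V). Biases are folded in as an extra column driven by a constant voltage.

def crossbar_matvec(conductances, voltages):
    # conductances: rows = output lines, cols = input lines (the "weights")
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

weights = [[0.2, -0.5, 0.1],
           [0.7,  0.3, -0.2]]
bias = [0.05, -0.1]
inputs = [1.0, 0.5, -1.0]

# Fold the bias in as a weight on a constant "1 volt" input line.
G = [row + [b] for row, b in zip(weights, bias)]
V = inputs + [1.0]
print(crossbar_matvec(G, V))   # same result as W @ x + b
```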

  • @halulife35
    @halulife352 ай бұрын

    i have missed that telephone so much

  • @douglaswilkinson5700

    @douglaswilkinson5700

    2 ай бұрын

    She's trying to follow the TikTok model of shorter videos, which are being favored by YouTube.

  • @ronnetgrazer362

    @ronnetgrazer362

    2 ай бұрын

    @@douglaswilkinson5700 And me! I've got another 12 videos queued up, who's got time for 20 minute weeklies?

  • @daydays12

    @daydays12

    2 ай бұрын

    lovely piece of design wasn't it?

  • @JamanWerSonst
    @JamanWerSonst2 ай бұрын

    AI industry should team up with the renewable energy and grid industry. Energy is only really a problem as long as it is a limited good and sourced through fossil fuels.

  • @tnekkc
    @tnekkc2 ай бұрын

    I can make up stories for grandkids. My 3-year-old granddaughter comments on Wile E. Coyote's plan: "That won't work."

  • @akagordon
    @akagordon2 ай бұрын

    OpenAI is technically still a nonprofit, though it is a nonprofit that owns and operates a for-profit company. This novel setup is what enabled them to gain so much traction with investors while, in theory, prioritizing ethics in the development of AI. This didn't pan out as expected and in fact drove a lot of the drama last year around Sam Altman's firing. Basically, the for-profit arm develops the models and can profit from their use, while the nonprofit arm owns and dictates the license of those models. In this way, the OpenAI board has no legal liability to shareholders. Of course, we saw how that played out in real life.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    A non-profit cannot own a for profit company and be called a non-profit.

  • @tw8464

    @tw8464

    2 ай бұрын

    ​@@tedmossexactly

  • @tw8464

    @tw8464

    2 ай бұрын

    ​@@tedmoss"OpenAI" is NOT "nonprofit" and never was.

  • @Hayreddin

    @Hayreddin

    2 ай бұрын

    ​@@tw8464 Use the wayback machine: Introducing OpenAI by Greg Brockman, Ilya Sutskever, and the OpenAI team December 11, 2015 OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

  • @Leonhart_93

    @Leonhart_93

    2 ай бұрын

    Law technicalities don't change a reality that everyone knows.

  • @robmacl7
    @robmacl72 ай бұрын

    Those billions-per-year costs are going to be mainly building and depreciating datacenters, not power. There is a shortage of compute, which is especially driving up costs at the moment. The energy cost of using AI is dropping rapidly due to algorithm improvements, not just hardware; for image generation this is 10x in just a year. My impression is that less effort has been put into optimizing the compute costs of using the leading-edge AIs because of the emphasis on rushing out the new best model. Inference cost has not been limiting, and has not gotten the same level of attention.

  • @PatrickCurl

    @PatrickCurl

    2 ай бұрын

    I have a feeling mixture of experts becomes more like a family tree of experts... or a network graph... Think of all the topics in Wikipedia and imagine you had one LLM per major topic, but you'd need glue LLMs to decide which expert to ask and to maybe merge responses from multiple experts for different parts of the question... but a more targeted LLM costs a ton less than training a super-duper all-knowing one, plus inference time drops by a factor of... a lot, I'd imagine... That, and I think we're going to figure out better RAG-like systems to give AI a memory that's more comparable to a human's.

  • @cajampa

    @cajampa

    2 ай бұрын

    @@PatrickCurl Yes, that plus when we have them work more as agents than pure generative zero-shot models. They can be allowed to reason over their own output and refine it by taking in more data as needed to get things right. It makes smaller, faster but more limited models way stronger. That is the next big step up close in time, I believe, that comes from this advancement. Because it does not need better models at the base, only better handling of their output and input (like being allowed to browse the Web to gather data for us) before the output is presented to us.

  • @greenftechn

    @greenftechn

    2 ай бұрын

    you wrote, "Those billions per year costs are going to be mainly building and depreciating datacenters, not power." Because they work in the dark (without power) Not! This is why Bill Gates wants SMRs. This is why Amazon is building a datacenter near Berwick, PA. They'll be training and retraining for decades, chasing their tails.

  • @cajampa

    @cajampa

    2 ай бұрын

    @@greenftechn No dude, it is not about training and retraining. It is about doing work with the finished models for hire that will take over more and more jobs. The training is just a part of it. You will watch what happens when they let loose AI agents on their own, doing stuff for us and for other agents. I have lots of plans for what I want to use them for. And just a model like Sora will take a crazy amount of compute to use, and that will change how we do movies and other video art. So much fun stuff, I cannot wait.
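
A toy sketch of the mixture-of-experts routing idea discussed in this thread: a small gate scores the experts for a given input and only the top-scoring expert is evaluated, so inference cost stays close to that of one small model. All names and numbers here are illustrative:

```python
# Toy sketch of mixture-of-experts routing: a gate scores the experts for an input
# and only the top-scoring expert runs, keeping inference cost near one small model.
# Expert names and gate scores are purely illustrative.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return [e / sum(exps) for e in exps]

experts = {
    "code":    lambda x: f"[code expert handles: {x}]",
    "physics": lambda x: f"[physics expert handles: {x}]",
    "general": lambda x: f"[general expert handles: {x}]",
}

def route(x, gate_scores):
    probs = softmax(gate_scores)                     # a real gate would be a small net
    best = max(range(len(probs)), key=probs.__getitem__)
    name = list(experts)[best]
    return experts[name](x)                          # only one expert is evaluated

print(route("why is the sky blue?", gate_scores=[0.1, 2.3, 0.8]))   # -> physics expert
```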

  • @gavinlew8273
    @gavinlew82732 ай бұрын

    Thank you for the information, I never knew generating one image would consume so much electricity!

  • @bobmester3475
    @bobmester34752 ай бұрын

    Best humor yet Sabine..

  • @eonasjohn
    @eonasjohn2 ай бұрын

    Thank you for the video

  • @seeibe
    @seeibe2 ай бұрын

    3:50 Charging your smartphone seems like a weird comparison, though? Smartphones are insanely energy efficient. On a regular workday my computer consumes around 1-3kWh. I feel like that's a much better baseline of comparison for a task that is similar to other things you may do on your computer.

  • @tonylikesphysics2534
    @tonylikesphysics25342 ай бұрын

    Wow. A picture really is worth a thousand words.

  • @klutterkicker

    @klutterkicker

    2 ай бұрын

    * Shia LaBeouf doing a slow clap *

  • @MathSMR42
    @MathSMR422 ай бұрын

    So in conclusion, AI wont change anything.

  • @harikumarv4658
    @harikumarv46582 ай бұрын

    0:32 This is the funniest satirical visual analogy I've ever seen. Can't stop tripping over it!

  • @pabloivan81
    @pabloivan81Ай бұрын

    Thanks for the video!!

  • @mattslaboratory5996
    @mattslaboratory59962 ай бұрын

    Even an average joe like me, watching Sabine and other excellent YouTubers, uses a lot of compute every day!

  • @weibrot6683
    @weibrot66832 ай бұрын

    I can easily disprove the image generation argument. I have an RTX 3070, and usually image generation takes 10 seconds. Let's say I crank up the resolution and quality and we're at 2 minutes per image; with a 220 W TDP that's roughly 7 watt-hours. Now let's take a smartphone charge: a common charge speed is around 0-50% at 20 watts in 30 minutes, so charging your phone halfway consumes twice the electricity of generating one image (with realistic image-gen settings, closer to 16x). But even then it doesn't matter, since we're talking minuscule numbers here. 7 watt-hours? Keeping my PC and monitor in standby consumes a lot more than that.

  • @GeoMeridium

    @GeoMeridium

    2 ай бұрын

    I think the tasks of the future will require more demanding models in order to replace us, but I agree with the premise. I also think renewable energy will have an inherent advantage over polluting forms from an economic standpoint. Since computing power is a lot more important than latency time, you'll only see servers built in the areas where electricity is extremely cheap (much like with Bitcoin mining). While some forms of polluting energy are cheaper than renewable energy on average, polluting energy sources tend to have a similar price everywhere around the world, as regulated by OPEC. Meanwhile, with AI servers, you can just pick from whatever place on Earth has the cheapest electricity. When it comes down to price, Quebec hydro, Icelandic geothermal, and Saharan solar are probably going to beat out Saudi oil.

  • @TheBagOfHolding

    @TheBagOfHolding

    2 ай бұрын

    She is confusing training models with generating images with the trained models.

  • @sitnamkrad

    @sitnamkrad

    2 ай бұрын

    I'm glad to see I wasn't the only one questioning these numbers. I've installed and played around with both text and image generation on my RTX 4070 TI. I can create a single 1024 * 1024 image in less than 8 seconds. This will use pretty much 100% of my GPU's CUDA cores for that entire time. Not only that, but when doing text generation, I found that it takes about a second per sentence. Still taking up 100% of the CUDA cores during that time. (note that this was with a very small text model because the larger ones don't even fit on the memory of your average gaming GPU). So at least for what's available to the average user installing things locally, text generation seems to be far more demanding. So the image generation numbers definitely seem to be wrong.

  • @daydays12

    @daydays12

    2 ай бұрын

    And so? What's the use if it generates a rubbish ( and stolen) image? My brain ( and hands) can generate an interesting, even original, image with an energy consumption of less than a quarter of a slice of bread and a sip of water. AI not v competitive eh?

  • @daydays12

    @daydays12

    2 ай бұрын

    was this post generated by AI? @@TheBagOfHolding
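
The TDP-based estimate from the first comment in this thread, written out with the commenter's figures:

```python
# The TDP-based estimate from the first comment in this thread, written out.
# Power draw and generation times are the commenter's figures, not measurements.
tdp_watts = 220                 # RTX 3070 board power (approx.)

def wh_per_image(seconds, watts=tdp_watts):
    return watts * seconds / 3600

print(f"typical 10 s image: {wh_per_image(10):.2f} Wh")       # ~0.6 Wh
print(f"cranked-up 120 s image: {wh_per_image(120):.2f} Wh")  # ~7.3 Wh
# Half a phone charge at 20 W for 30 min is ~10 Wh, so even the slow case is comparable,
# and the typical case is far below one phone charge.
```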

  • @honahwikeepa2115
    @honahwikeepa21152 ай бұрын

    Thoroughly enjoyed this.

  • @denisekh
    @denisekh2 ай бұрын

    The rendering for a 3D animatic and live-action visual effects requires extremely high speed to generate a 5-second animation or animatic video. And it has to learn, to train from completed movies, animations, any finished content, in order to deliver the layout, the environment, the rigging, and much more. Thank you for this explanation. It's great to keep the AI VCs quiet. These VCs just want to wrap their pregnancy.

  • @w00dyblack
    @w00dyblack2 ай бұрын

    It's fine. ChatGPT will figure out how to do nuclear fusion. Then cure cancer and then walk your dog.

  • @johelsen5776

    @johelsen5776

    2 ай бұрын

    We're kicking more and more "problem-cans" down the road, confabulating fantastic future technologies to solve them "later". There's a non-zero chance that we're in for a serious collective hangover and hard awakening...

  • @cactusman07pim
    @cactusman07pim2 ай бұрын

    Can we compare the energy consumption of a human performing the same task as an AI?

  • @fontende

    @fontende

    2 ай бұрын

    There are precise calculations of the exact calories in WW2 camps; you can sum all the calories for the work.

  • @TheBagOfHolding

    @TheBagOfHolding

    2 ай бұрын

    How much milk does a baby need to drink before it can paint a black George Washington?

  • @ChairmanKiel
    @ChairmanKiel2 ай бұрын

    Thank you for the video, Sabine. You just assuaged my fears of a Skynet-like uprising. Basically, the only nefarious thing to worry about now is AI learning to troll, lie, and censor real people. Everything else is too expensive :)

  • @justice4all719
    @justice4all7192 ай бұрын

    The solution seems to be centralized AGI that controls many robots at once. It can receive data from video or sensors from every individual robot, and give tasks specific to any of the robots. The robots only need enough energy to receive orders from AGI and move, activate motors and pistons, all the power needed to feed the AGI is centralized. It will probably become a business soon enough, companies training centralized AGI, to which a consumer can connect their robots in order to make them intelligent, for a monthly cost. Part of regulations to come will probably limit the rights to mount AGI directly on robots without permits or something.

  • @warrenfrisina5651
    @warrenfrisina56512 ай бұрын

    Perhaps the solution to AI cost is for humans to write, draw and think for themselves again. Less energy consumption and pollution as well.

  • @Mrluk245

    @Mrluk245

    2 ай бұрын

    I don't think that this is an option.

  • @gruffelo6945

    @gruffelo6945

    2 ай бұрын

    yes please

  • @warrenfrisina5651

    @warrenfrisina5651

    2 ай бұрын

    @@Mrluk245 Why?

  • @Victor76661

    @Victor76661

    2 ай бұрын

    Even if AGI comes up with all the silver bullets for mankind's problems and ecological problems, applying them will still be a human choice. And we all know that if it's not going to bring profit to some fat rats in suits, it's probably not going to be made (i.e., the solutions won't be applied).

  • @kimkenny3300

    @kimkenny3300

    2 ай бұрын

    Better quality of output, too.

  • @josephvanname3377
    @josephvanname33772 ай бұрын

    Reversible computing is what we need for low energy AI (if people can actually engineer it).

  • @marckiezeender

    @marckiezeender

    2 ай бұрын

    Reversible computing is a very long way from being a viable way to reduce energy consumption. At this point, it's more of a thought experiment about thermodynamics. What we really need is room-temperature superconductors.

  • @josephvanname3377

    @josephvanname3377

    2 ай бұрын

    ​@@marckiezeender We do not need to get anywhere near Landauer's limit in order for reversible computation to be useful. Landauer's limit is an absolute infimum energy expenditure limit, but in order to overcome thermal noise, one should spend thousands of times Landauer's limit. And we do not need general purpose reversible computation in order to make reversible computation useful and profitable. The only thing we need is for reversible computation to be better at a specific task (such as running a linear feedback shift register), and profit may come as a result.

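    For illustration, here is a minimal Python sketch of the kind of task named above (a linear feedback shift register). The point is that its state update erases no information - each state has exactly one predecessor - which is why it is a natural candidate for reversible logic. The 16-bit taps and seed are the standard textbook example; this is only a software sketch, not reversible hardware.

    # 16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11 (textbook example).
    MASK = 0xFFFF

    def lfsr_step(state: int) -> int:
        # Feedback bit is the XOR of the tapped bits of the current state.
        fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        return ((state >> 1) | (fb << 15)) & MASK

    def lfsr_unstep(state: int) -> int:
        # The update is invertible: the shifted-out low bit can be recovered
        # from the feedback bit, so no information is ever destroyed.
        old_hi = (state << 1) & MASK                 # old bits 15..1
        fb = (state >> 15) & 1                       # = old0 ^ old2 ^ old3 ^ old5
        old0 = fb ^ (((old_hi >> 2) ^ (old_hi >> 3) ^ (old_hi >> 5)) & 1)
        return old_hi | old0

    s = 0xACE1
    for _ in range(10):
        s = lfsr_step(s)
    for _ in range(10):
        s = lfsr_unstep(s)
    assert s == 0xACE1                               # stepping back recovers the seed
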
  • @jameslynch8738

    @jameslynch8738

    2 ай бұрын

    Agreed; compression and decompression are a good example of this, and AI could help solve the mathematical and materials-science problems involved. Developing abstract vectors to represent the knowledge domain would greatly accelerate these technologies, as the logic could be embedded into smaller analog processors, perhaps even a photonic lattice that combines both logic and information, resulting in extremely stable storage of information available at high speed and low power. That's still somewhere further down the development chain, but elements of it should be in development now.

  • @nunomaroco583

    @nunomaroco583

    2 ай бұрын

    Great advances, big costs....

  • @Ken00001010
    @Ken000010102 ай бұрын

    One of the key abilities among humans is the "thought experiment" made famous by Albert Einstein, in which imagination is used to construct mental models of world situations to explore ideas. For the most part, current LLMs only process information when a question is put to them; they do not run through thought experiments on their own, exploring what they can imagine. This is sometimes raised as an indication of the limits of their capabilities. However, when asked questions, the models are using compute cycles (thus energy) that are being paid for by some human organization for that purpose. If the models ran their own thought experiments, the question of who pays for the cycles becomes important, along with the negative possibility that such activity would swamp the servers of the world. Some have suggested that AIs should be allowed to set up enterprises, or "get jobs", to earn the money to pay for their own compute cycles. What could go wrong? ;-)

  • @not_milk
    @not_milk2 ай бұрын

    Image generation uses 1000x the energy of text generation. A picture really is worth a thousand words.

  • @TrimutiusToo
    @TrimutiusToo2 ай бұрын

    3:14 That clip of someone writing "Holotropic breathwork" in Russian was very distracting for me because I know the language...

  • @ispamforfood
    @ispamforfood2 ай бұрын

    First! 😛

  • @b43xoit
    @b43xoit2 ай бұрын

    An hour is 3600 seconds. So let's evaluate "1300 MWh". It's 4680000 MJ, which is 4680 GJ, i.e. 4.68 TJ.
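
    The same conversion as a two-line sketch (1 MWh = 3.6e9 J):

    mwh = 1300
    joules = mwh * 3.6e9                                # 1 MWh = 10**6 Wh * 3600 J/Wh
    print(f"{joules:.2e} J = {joules / 1e12:.2f} TJ")   # 4.68e+12 J = 4.68 TJ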

  • @davidespinosa1910
    @davidespinosa1910Ай бұрын

    One ChatGPT query: 5 milliwatt-hours (3:20).
    Driving 1 mile in an electric car: 346,000 milliwatt-hours.
    (Common sense is _really_ expensive!)
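
    Taking both figures at face value (they are the commenter's numbers, not independently checked), the ratio works out to roughly:

    query_mwh = 5                # milliwatt-hours per ChatGPT query, as quoted
    ev_mile_mwh = 346_000        # milliwatt-hours per EV mile, as quoted
    print(f"one EV mile ~ {ev_mile_mwh / query_mwh:,.0f} queries")   # ~69,200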

  • @philiphumphrey1548
    @philiphumphrey15482 ай бұрын

    And yet a human being runs about 100 watts average (if that). Clearly 500 million years of evolution has come up with a far more energy efficient solution.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    This is not a correct assumption. Humans evolved to survive in this environment, not to be efficient. We are very slow.

  • @FacePlantPROs117

    @FacePlantPROs117

    2 ай бұрын

    @@tedmoss Yes, it is. Per watt the human brain is far more efficient than transistors.

  • @Catalyst375

    @Catalyst375

    2 ай бұрын

    ​@@tedmoss Better to be "slow", smart and efficient than fast, dumb and inefficient.

  • @gianpa

    @gianpa

    2 ай бұрын

    Yeah but this is assuming that AI will not evolve into something more efficient. With current technology you need a lot of power yes, with future technology who knows...

  • @MediaCreators
    @MediaCreators2 ай бұрын

    My dear, we can run extremely powerful generative AI applications like SDXL or Mixtral 8x7b locally on a 4090 GPU with 600 W peak power. The image generates in about 10 to 15 seconds, and the answer from the LLM takes about 4 to 10 seconds. We are talking about 1/1000 to 10/1000 of a kilowatt-hour here. And the applications get more efficient by the day. Let's not hype things up.
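
    A minimal sanity check of those numbers in kilowatt-hours, assuming the card really sits at its full 600 W for the whole generation (actual draw is usually lower):

    POWER_W = 600                              # assumed flat peak draw

    def kwh(seconds: float) -> float:
        return POWER_W * seconds / 3600 / 1000

    print(f"image (10-15 s):     {kwh(10):.4f} to {kwh(15):.4f} kWh")
    print(f"LLM answer (4-10 s): {kwh(4):.4f} to {kwh(10):.4f} kWh")
    # Both land at roughly 0.7 to 2.5 thousandths of a kWh per generation.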

  • @tedmoss

    @tedmoss

    2 ай бұрын

    Too bad you are not talking the same language. This is not hype, although there is plenty of that to go around.

  • @traumflug

    @traumflug

    2 ай бұрын

    Multiply this by a billion queries per hour (it's a worldwide service!) and you'll see where the problem lies.

  • @tw8464

    @tw8464

    2 ай бұрын

    "My dear" could you get more condescending? Not a great way to try to start making a "point"

  • @nathanbanks2354

    @nathanbanks2354

    2 ай бұрын

    Mixtral 8x7b isn't as good as GPT-4, and SDXL is only better than Dall-E 3 because it can accept an image as input. I suspect the largest available models, like Lora, will always take tons of energy. Even Mixtral took a lot of energy to train, since people don't normally use 3-bit quantization until after training. (Not sure how you managed to squeeze it onto a consumer GPU... my machine can only run it on the CPU, making it unbearably slow.)

  • @lagrangianomodeloestandar2724
    @lagrangianomodeloestandar27242 ай бұрын

    I would agree with Nvidia and the other companies if I weren't a supporter of non-digital hardware: photonic, neuromorphic, or hardware based on Gibbs free energy, i.e., the real physical processes that these digital circuits emulate, which could use much less energy. But monopolizing training/learning and selling inference is more interesting for hardware engineers... Instead of designing models alone, it could be possible to manufacture programmable metamaterials within circuits that embody these statistical properties and hybridize them...

  • @57thorns
    @57thorns2 ай бұрын

    Efficiency is basically just a way to get more for the same, not the same for less.

  • @michaelpalmer4387
    @michaelpalmer43872 ай бұрын

    Surely we can just ask AI to sort out all the problems with AI?

  • @astrovation3281

    @astrovation3281

    2 ай бұрын

    maybe in a decade

  • @neighbor9672

    @neighbor9672

    2 ай бұрын

    That's the end goal: an AGI that is effectively self-aware and able to make self-improvements. However, the implications of such an entity staying under our control... that's a whole other topic.

  • @Kai-du2ub

    @Kai-du2ub

    2 ай бұрын

    Certainly, just as Baron von Münchhausen got out of the swamp by pulling himself up by his own pigtail.

  • @michaelpalmer4387

    @michaelpalmer4387

    2 ай бұрын

    @@neighbor9672 Just need another AGI to keep that AGI in check, and another AGI... an infinite regress of AGIs...

  • @tedmoss

    @tedmoss

    2 ай бұрын

    @@neighbor9672 Just pull the plug.

  • @madshorn5826
    @madshorn58262 ай бұрын

    The exponential energy requirements can explain the Fermi Paradox. AIs are supremely suited for space travel by not being squishy goo like us, but we see none. An explanation can be that the price of compute needed to run a general AI will exceed what a planetary civilization can afford. Given that the current toy models use 1% of our energy and we need that energy for other purposes this may be the reason.

  • @camelCased

    @camelCased

    2 ай бұрын

    Considering how biology can run so many AIs (brains) with so little power, wouldn't we be able to reach that one day? To compare AI vs humans, we should also consider how much energy a human consumes until it has learned everything it needs for a successful work career. But then humans cannot be copied, AIs can. So, we'll see how it goes. I'm sure AIs eventually will help us find solutions to make AIs themselves more energy-efficient.

  • @mqb3gofjzkko7nzx38

    @mqb3gofjzkko7nzx38

    2 ай бұрын

    A robotic probe doesn't have to be that smart.

  • @absalomdraconis

    @absalomdraconis

    2 ай бұрын

    ​@@mqb3gofjzkko7nzx38 : You actually _do_ want it that smart unless you're throwing a swarm at the target. What you _don't_ care much about is _speed._ Interstellar probes are decently suited to a "slow philosopher" type of AI.

  • @madshorn5826

    @madshorn5826

    2 ай бұрын

    @@camelCased Yes, you have a theoretically correct point, but there is a difference between _theoretically_ possible and _practically_ possible. We are a very long way from making efficient hardware - and we are running out of time. IF we had infinite resources here on Earth, there is little doubt in my mind that we would be able to make AIs capable of making AIs, but we have already overstepped 4 out of 7 planetary boundaries (or is it 5?). We don't have unlimited time and resources on our current trajectory: if we don't make substantial changes to our societies, the human race will be finished in a few generations. If not completely, then at least as a technological species. IF the theoretical outcome were practically possible, it should statistically have happened somewhere else already, and we should see alien AIs all over the place, according to Fermi. The fact that we don't, and that we are close to our ecological limits yet far from efficient AI, suggests to me that this is the great filter. The reason we haven't considered this before is that our culture is based on conflating 'theoretical' with 'possible'. That was good for a growth mindset, but fairy tales being nice and useful for a time doesn't make them true :-( I believe we can turn the tide and live better lives than today, just not anything like Star Trek.

  • @donaldhobson8873

    @donaldhobson8873

    2 ай бұрын

    Our current computers are nowhere near the limits of what is possible. Our current algorithms aren't efficient. Our current energy production is way lower than what is possible. Human brains exist, so clearly 20 watts is enough in theory. And humans in space are harder, but still possible.

  • @TallinuTV
    @TallinuTV2 ай бұрын

    I saw a research article not long ago talking about being able to cut the energy consumption (or maybe it was time required, which would have the same result) for training AI models by some pretty large percentage of their current value, something like 60-80%. I don't remember the details unfortunately. (I also don't remember if it said anything about post-training time/energy usage.)

  • @cookymonstr7918
    @cookymonstr79182 ай бұрын

    3:10 Wetware OCR & Translation: Holotropic breathing ; Energy demand: a slice of wholegrain bread.

  • @martingoldfire
    @martingoldfire2 ай бұрын

    The West needs to understand that we have enough; our resources should be used to help the less fortunate. If we don't need it, we shouldn't have it. I'm tired of people's greed and ego. It's destroying our planet.

  • @tedmoss

    @tedmoss

    2 ай бұрын

    You are entitled to your opinion, others disagree, sometimes violently. The love of money is the root of all evil.

  • @carlbrenninkmeijer8925
    @carlbrenninkmeijer89252 ай бұрын

    It would be an insult to use AI waste heat to keep our cottage warm !

  • @user-hg7zv9pw9m

    @user-hg7zv9pw9m

    2 ай бұрын

    I don't honestly think that AI would mind.

  • @nathanbanks2354

    @nathanbanks2354

    2 ай бұрын

    Data centers should always be built next to public pools.

  • @j6zAcdrhDF
    @j6zAcdrhDF2 ай бұрын

    at 3:08 "Холотропное дыхание" (Russian for "holotropic breathing") - what's that???

  • @mannydib
    @mannydibАй бұрын

    "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." Dune (1965)

  • @carlbrenninkmeijer8925
    @carlbrenninkmeijer89252 ай бұрын

    Will AI move to the deserts and use photovoltaics? With batteries, nighttime can be bridged. Photovoltaics are so incredibly cheap. Nuclear reactors are a bit outdated; they merely produce heat, and they have pumps, seals, many valves, etc. Ask AI 😂

  • @yeroca

    @yeroca

    2 ай бұрын

    There is a company working on a fusion reactor that produces electricity directly without the need for the heat -> electricity conversion.

  • @carlbrenninkmeijer8925

    @carlbrenninkmeijer8925

    2 ай бұрын

    @@yeroca I hope that it will work, thanks!!

  • @Thomas-gk42

    @Thomas-gk42

    2 ай бұрын

    PV in deserts has problems, one is dust, so bot-guys, "go and dust your solar panels" (SH)

  • @carlbrenninkmeijer8925

    @carlbrenninkmeijer8925

    2 ай бұрын

    @@Thomas-gk42 Thank you, I see. My wording was sloppy. I meant low latitudes with low cloud cover.

  • @msromike123

    @msromike123

    2 ай бұрын

    "With batteries nighttime can be bridged" is a blanket statement without much thought given to reality. For now, and for the foreseeable future, batteries cannot bridge nighttime use. In fact, there may not be enough rare-earth metal on the planet to ever be able to use batteries to "bridge" 100% renewable energy production. As for nuclear reactors being outdated, that depends on how you look at it; this is mostly due to political barriers to the development and deployment of modern nuclear reactor types. As for nuclear power being outdated, well, FM radio is outdated, and it still works fine.

  • @burtonmiller
    @burtonmiller2 ай бұрын

    We are at the dawn of the AI age. If we can get 10x efficiency from software and another 10x from hardware, then this problem is 100x smaller. And this will happen.

  • @williamforsyth6667

    @williamforsyth6667

    2 ай бұрын

    "10x efficiency" Just look at the latest 1.58 bit LLMs paper.

  • @niccreznic8259

    @niccreznic8259

    2 ай бұрын

    Just hope we don’t 1000x world destruction efficiency!

  • @DR_1_1

    @DR_1_1

    2 ай бұрын

    For now, the estimate is that computing energy needs will DOUBLE in 2 years. It's like demographics: birth rates are lower, but the total population is rising so fast that growth is just as fast as it was in the 1970s, when the birth rate was at its highest.
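
    If that pace were sustained (doubling every two years), the compounding would look like this; a purely illustrative sketch starting from an arbitrary baseline of 1:

    # Hypothetical: demand that doubles every 2 years grows by 2**(years / 2).
    for years in (2, 4, 6, 10, 20):
        print(f"after {years:2d} years: {2 ** (years / 2):,.0f}x the current demand")
    # -> 2x, 4x, 8x, 32x, 1,024x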

  • @burtonmiller

    @burtonmiller

    2 ай бұрын

    @@niccreznic8259 That will be for the robots to figure out:)

  • @nathanbanks2354

    @nathanbanks2354

    2 ай бұрын

    Unless people prefer spending 100% as much for something 100x as smart...

  • @mwolfe3219
    @mwolfe32192 ай бұрын

    I loved the German Monopoly game cameo.

  • @GrumpDog
    @GrumpDog2 ай бұрын

    I find it hilarious that all these anti-AI reports about it using a lot of energy came out right after NVIDIA announced their latest chips, which are an order of magnitude more energy-efficient than the previous generation. Seems the trend isn't as bad as the naysayers claimed.
