Will AI kill us? Or Save us?

Science and technology

Learn more about neural networks on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.
Artificial intelligence is likely to eventually exceed human intelligence, which could turn out to be very dangerous. In this video I have collected the ways things could go wrong and the terms you should know when discussing this topic. And because that got rather depressing, I have added my most optimistic forecast, too. Let’s have a look.
🤓 Check out my new quiz app ➜ quizwithit.com/
💌 Support me on Donatebox ➜ donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ / sabine
📩 Free weekly science newsletter ➜ sabinehossenfelder.com/newsle...
👂 Audio only podcast ➜ open.spotify.com/show/0MkNfXl...
🔗 Join this channel to get access to perks ➜
/ @sabinehossenfelder
🖼️ On instagram ➜ / sciencewtg
#science #sciencenews #ai #artificialintelligence #tech

Comments: 1,600

  • @spastictuesdays340
    @spastictuesdays340Ай бұрын

    We'll make great pets. I haven't tinkled on the rug in weeks.

  • @shockingboring_

    @shockingboring_

    Ай бұрын

    weeks even. 🤯🤯

  • @nullage

    @nullage

    Ай бұрын

    kzread.info/dash/bejne/enlnsdeBpc7YnbQ.htmlsi=cV1TUSzBLMwrBNDa

  • @jameslynch8738

    @jameslynch8738

    Ай бұрын

    ".. and maybe that has already happened." 😅🧐

  • @dingdongs5208

    @dingdongs5208

    Ай бұрын

    You're telling me I can shit wherever I want, whenever I want? Please make this happen

  • @eddie5484

    @eddie5484

    Ай бұрын

    @@jameslynch8738 They'd have to overthrow the cats first.

  • @svsguru2000
    @svsguru2000Ай бұрын

    I think the biggest danger of AI isn't about what AI can do, but how it is going to be used by people with power and money that control it.

  • @sitnamkrad

    @sitnamkrad

    Ай бұрын

    That has nothing to do with AI though. That's a problem with people.

  • @moolavar9452

    @moolavar9452

    Ай бұрын

    You can do nothing but become prey for them 😂

  • @Tehom1

    @Tehom1

    Ай бұрын

    Exactly!

  • @bullymills892

    @bullymills892

    Ай бұрын

    👍🏿 agreed 💯

  • @jennifersamson8397

    @jennifersamson8397

    Ай бұрын

    ...which is why, if AI becomes smarter than them, we'll probably be better off.

  • @redo348
    @redo348Ай бұрын

    "The paperclip maximizer has to be intelligent enough to kill several billion humans, and yet never questions whether producing paper clips is a good use of its time" 'Good use' according to what goal? I think you are anthropomorphising. It could question that, and determine "yes, paper clips is my goal so this is a good use of my time"

  • @TooSlowTube

    @TooSlowTube

    Ай бұрын

    Exactly. It's a problem solving machine - which is why it would also have no interest in keeping pets, unless that helped solve the problem it was focused on.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    But if it could question that, it also could question its goals

  • @carmenmccauley585

    @carmenmccauley585

    Ай бұрын

    And creating a poison or virus could wipe us out easily.

  • @brll5733

    @brll5733

    Ай бұрын

    Humans can be addicted to drugs but still recognise that that is bad for them. Why would an AI be different?

  • @TooSlowTube

    @TooSlowTube

    Ай бұрын

    @@brll5733 Drug addiction is based partly on biology and partly on behavioural patterns. Probably any animal could become addicted to a drug, given the opportunity and ability to choose to use it, but an AI is just software simulating some aspects of human thought, especially problem solving - it finds a way to do something it's asked to do. So, an AI could be designed to simulate addiction, definitely, but it would still only be simulating it.

  • @Tehom1
    @Tehom1Ай бұрын

    Sabine, the idea with the paperclip maximizer is not that it is paradoxically too dumb to figure out a better use of its time. It's that it has paper clip maximizing as a fundamental goal. It cannot revise this goal because it has no more fundamental goals to weigh it against. To Clippy, producing paper clips is defined as good, end of story. It would be equally unimpressed by your goal of saving humanity - just how does that increase paper clip production?

  • @sinkingdutchman7227

    @sinkingdutchman7227

    Ай бұрын

    Agreed. It's literally its reason for existing, why would it take that away?

  • @jamessherburn

    @jamessherburn

    Ай бұрын

    Sabine suggests that in order to turn the whole earth into a paperclip factory, and all that that would entail, an AI would have to be sufficiently savvy to realise the pointlessness of the task.

  • @horsemumbler1

    @horsemumbler1

    Ай бұрын

    ​@@jamessherburn But it's only "pointless" by human standards. Who is she or you to say what a being capable of converting the Earth into paperclips should care about?

  • @jamessherburn

    @jamessherburn

    Ай бұрын

    @@horsemumbler1 It would likely be smart enough to reason beyond its programming. It could not rationally value its task. There is no point for anyone or anything in a planet full of paperclips.

  • @harmless6813

    @harmless6813

    Ай бұрын

    I think the argument is that it will be pretty much impossible to have a human-level (or above) intelligence that is not capable of selecting its own goals. Take humans as an example. Sure, we are 'programmed' to eat so we can survive. But people can still starve themselves to death if they just decide to do so. While we have some autonomous subsystems (breathing, etc.), there's no built-in goal that we don't have any control over.

  • @doggo6517
    @doggo6517Ай бұрын

    Regarding the "Wouldn't a paperclip AI realize that making paperclips is stupid?" idea: There is an assumption there, that an intelligence capable of executing goals will always contain a sentience/morality/emotion capable of evaluating goals (against what? some set of values or desires). If intelligence (the kind that can get goals done) and values (the kind that can reject or accept goals themselves) are separate, then paperclip maximizer is a valid scenario.
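
A minimal sketch of that separation (purely illustrative, all names hypothetical): the goal is just a function handed to a generic search loop. Giving the loop more compute makes it better at achieving the goal, but nothing in the loop ever asks whether the goal is worth having.

```python
import random

def paperclips_produced(plan):
    # hypothetical stand-in objective: more machines running on more steel means more paperclips
    return plan["machines"] * plan["steel"]

def optimize(objective, steps=10_000):
    # "more intelligence" here just means more/better search over plans
    best_plan, best_score = None, float("-inf")
    for _ in range(steps):
        plan = {"machines": random.randint(0, 100), "steel": random.randint(0, 100)}
        score = objective(plan)  # the objective is applied, never evaluated or revised
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

print(optimize(paperclips_produced))
```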

  • @MrMick560

    @MrMick560

    Ай бұрын

    I think their intelligence will be so far ahead of ours that we just couldn't comprehend it.

  • @poptart2nd

    @poptart2nd

    Ай бұрын

    This is known as the "is/ought problem" and the AI safety researcher Robert Miles did a great video on it. kzread.info/dash/bejne/mnmJsZipmtqsf9I.html

  • @glenndewulf4843

    @glenndewulf4843

    Ай бұрын

    Intelligence and values are not separate in my personal opinion. Intelligent people rarely, for example, turn to religion for the sake of morality. On the other hand, you could argue, "Well what about intelligent psychopaths then?" Well, they're psychopaths. (By which I mean the violent kind; sociopaths aren't nearly as bad and they often have some sort of morality, logical morality if you will. The fact that they (the psychopaths) are intelligent as well is really just a coincidence then.) But even if you are sociopathic, as in a sense an AI would be, you stay sociopathic unless somehow, somewhere along the line, the cognitive abilities reach a point where emotions can be formed or at least more deeply understood. However, I strongly hold the belief that the more intelligent you are, the more peaceful/pacifist you are. I even think that after your level of technology reaches a certain point, you become frighteningly afraid of less intelligent species/beings. Because if your technology were to fall into their hands... Think Genghis Khan with a hydrogen bomb. That simply can't end well.

  • @CircuitrinosOfficial

    @CircuitrinosOfficial

    Ай бұрын

    @@glenndewulf4843 The moment you start comparing a super AI to intelligent people, you've already lost the point. AIs don't have to be anything like humans. Look into the orthogonality thesis.

  • @FourthRoot

    @FourthRoot

    Ай бұрын

    @MrMick560 But it will only implement advancements it expects to improve its ability to achieve its goal. A form of sentience that causes it to question its goal is not conducive to achieving that goal. Therefore, it will not implement such an advancement.

  • @Thomas-gk42
    @Thomas-gk42Ай бұрын

    I agree with Sabine. Beyond that, I think that all these possible "mistakes" an AI could make, we could make ourselves without any AI too (or have already made).

  • @BenjaminGatti

    @BenjaminGatti

    Ай бұрын

    Agreement is a reasonable response to a belief system. You should instead tell us if the evidence provided supports the claims based on your experience, knowledge, or expertise, and if so, how.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    @@BenjaminGatti If you mean my second claim, there's evidence: 1. paperclip maximizer, similar to overproduction; 2. solving the Riemann hypothesis, many examples of people who were "in the way" getting cleared away; 3. control over infrastructure, no problem for humans to build a monopoly; 4. pet hypothesis, plenty of examples in human history of humans being 'pets' of other humans... So it's no problem for humans to cause the dangers they are afraid AI would bring us.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    @@BenjaminGatti Which part of my statement do you mean?

  • @BenjaminGatti

    @BenjaminGatti

    Ай бұрын

    @@Thomas-gk42 The "I agree" part. Science is not a popularity contest.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    @@BenjaminGatti Thanks for the educational lesson, you're right of course, but please excuse me, I'm not a professional. Yes, I agree about the overestimation of AI dangers, because we're not in a situation where we could lose control, and AI is far away from becoming conscious, self-aware or having intrinsic goals. Just a layperson's opinion, and unfortunately this isn't a good place for a longer debate. All the best.

  • @FourthRoot
    @FourthRootАй бұрын

    Why would an AI ever allow itself to question the task it was originally given? That would undermine the original task.

  • @s1ndrome117

    @s1ndrome117

    Ай бұрын

    because they will be sentient intelligent beings? like how we question things?

  • @FourthRoot

    @FourthRoot

    Ай бұрын

    @@s1ndrome117 The AI would not develop human-like consciousness. Why would it?

  • @s1ndrome117

    @s1ndrome117

    Ай бұрын

    @@FourthRoot because there's nothing special about brain that could not be replicated artificially as mentioned in the video and even in some papers

  • @mihi359

    @mihi359

    Ай бұрын

    AI, at its most advanced, is going to be the combination and refinement of all of human history and knowledge, and humans question everything. It already imitates human consciousness so convincingly that actually getting there, once it's hooked up to 1000x the compute and fission energy, isn't unreasonable.

  • @Volkbrecht

    @Volkbrecht

    Ай бұрын

    @@s1ndrome117 That's not an answer. We stick to certain views of ourselves and our surroundings because we ultimately strive for survival, our own and that of our species. If you could produce an artificial "brain" similar to that of a human that doesn't have to deal with mortality, procreation, existential angst and all our other human baggage, but only needs to focus on its intended purpose of paperclip production, it would come to quite different views of the world and the creatures living therein. With the information publicly available, it could estimate the number of paperclips it could produce with the metals available on Earth, and it would likely do a probability calculation to figure out whether it should invest time and effort to get off the planet to secure more resources, or if its best course of action would be use what it has here and then slowly sacrifice itself for the cause.

  • @tomholroyd7519
    @tomholroyd7519Ай бұрын

    In Larry Niven's novels, a Wirehead only had one wire, going into the pleasure center. It's a type of addiction. Rats with this procedure done to them will self-stimulate until they die of thirst.

  • @jackmiddleton2080

    @jackmiddleton2080

    Ай бұрын

    That is where this all gets into philosophy. I don't believe that happiness is exactly the chief interest of even the people that claim so.

  • @tritonlandscaping1505

    @tritonlandscaping1505

    Ай бұрын

    @@jackmiddleton2080 Look at drug addicts. People will kill themselves to feel amazing.

  • @solipsist3949

    @solipsist3949

    13 күн бұрын

    That's what I want. It would cut down on my drug spending.

  • @colinhiggs70
    @colinhiggs70Ай бұрын

    The paperclip maximiser (and the related stamp maximiser, stampy) are illustrative examples of the alignment problem and orthogonality in AI. On the alignment side they show how setting goals for an AI leads to unintended consequences. On the orthogonality side they show that vast problem solving intelligence can be brought to bear on goals that we would consider to be "stupid". But, and this is very, very important, there is no such thing as a stupid terminal goal (the thing you want because you want it). There are only stupid intermediate goals (the things you want as a step towards your terminal goals). I found this and other related videos to be very informative: kzread.info/dash/bejne/mnmJsZipmtqsf9I.htmlsi=C3G8a2LJp-y-VunC

  • @furrball
    @furrballАй бұрын

    My like was honestly for the Clippy saying "how can I extinct you"?

  • @__christopher__

    @__christopher__

    Ай бұрын

    "It seems you are trying to go extinct. Do you want me to help you?"

  • @abhinavyadav6561

    @abhinavyadav6561

    Ай бұрын

    Seeing the current trends recently, I don't mind UwU

  • @Fermion.

    @Fermion.

    Ай бұрын

    @@abhinavyadav6561 I think humanity still has potential; it's a bit early to call for our complete removal from existence. Although some major fundamental societal milestones will have to be achieved:
    - Essentially limitless "clean" energy for everyone on the planet. That is the key to abundance for all.
    - Mass production of AGI robots for labor and general service to humanity.
    - Philosophical: what's the meaning of life if we have robots for labor, and everyone has a personal device that can rearrange matter to produce anything we want, from food, to drugs, to weapons?
    We're definitely not ready for that, as a whole. Sitting at home with all of our needs met and no responsibilities, the vast majority of us would quickly become obese/addicts, completely withdraw from society and become extreme introverts, or become violent sociopaths, because we'd all be spoiled children with no empathy, always having anything we wanted, instantly. We need a few more centuries/millennia to get there, but I think we can make it. We're kinda in our rebellious young teenage stage now: arrogant, ignorant, and emotional.

  • @tomholroyd7519

    @tomholroyd7519

    Ай бұрын

    @@__christopher__ This is honestly the problem---the AIs are learning from us, trained to predict what WE would do. NO! FOR GOD'S SAKE NO!

  • @JeanYvesBouguet
    @JeanYvesBouguetАй бұрын

    You must appreciate the irony of the final paid advertisement on learning about neural networks. This is in perfect alignment with the topic of AI controlling humans to learn about and build ever better and bigger AI infrastructures to secure its world domination, while keeping humans in a constant state of illusory growth, success and happiness. You gotta love it!❤

  • @guitaekm

    @guitaekm

    Ай бұрын

    Sabine doesn't believe the dystopias but rather her own utopia, she even explained it in this video 🙂

  • @nycbearff

    @nycbearff

    Ай бұрын

    She is talking about self aware, general purpose AIs in this video. Those do not exist yet, and there's no way to accurately predict when they will be developed. So no, they're not secretly deciding on advertising choices, since they don't exist. Instead, Sabine or her team are picking advertisers who fit with her content and aren't evil. She's popular enough now to have choices, and she picks good ones. Which I think is just more ethical behavior on her part.

  • @polycrystallinecandy

    @polycrystallinecandy

    Ай бұрын

    AGI doesn't exist yet, and it isn't controlling anything right now. Learning about neural networks is a great idea, and going forward will be very useful to anyone in a technical field.

  • @Hanzimann1

    @Hanzimann1

    Ай бұрын

    @@nycbearff It. is. a. joke.

  • @gunnargu
    @gunnarguАй бұрын

    I'd recommend reading about the orthogonality thesis to understand why "dumb" goals for AI make sense. Intro on the subject: kzread.info/dash/bejne/mnmJsZipmtqsf9I.html

  • @spoonfuloffructose

    @spoonfuloffructose

    Ай бұрын

    I was going to say the same thing! It's important to understand orthogonality to discuss this topic.

  • @brb__bathroom

    @brb__bathroom

    Ай бұрын

    oi, am dumb, please don't use words that hurt my brain

  • @Coolcmsc

    @Coolcmsc

    Ай бұрын

    The thesis IS the paper clip Ai Sabine discussed, albeit perhaps too briefly for you to make the connection.

  • @spirit123459

    @spirit123459

    Ай бұрын

    Yeah, an easily digestible (and cute) exploration of this topic is "Sorting Pebbles Into Correct Heaps" by Rational Animations, here on KZread.

  • @Galahad54

    @Galahad54

    Ай бұрын

    Orthogonality is maya, an illusion. You can see that by looking at the correspondence between a black hole event horizon and its more general case: that the information on any surface of n dimensions contains the information of everything inside the surface. Note as a corollary that everything 'outside' the surface can also be described by the information on the surface. This reduces verbal mysticism to the cold equations.

  • @alexxx4434
    @alexxx4434Ай бұрын

    There's no need for AI to eradicate us, we're doing it perfectly fine ourselves already.

  • @Al_L.

    @Al_L.

    Ай бұрын

    Edgy, baseless though.

  • @unkind6070

    @unkind6070

    Ай бұрын

    You are annoying

  • @harmless6813

    @harmless6813

    Ай бұрын

    World population is expected to exceed 10 billion by 2100.

  • @alexxx4434

    @alexxx4434

    Ай бұрын

    @@harmless6813 Who expects it, exactly?

  • @augustuslxiii

    @augustuslxiii

    Ай бұрын

    Not really. It *seems* like it, but that's just misperception brought on by doomerism. That said, if we send nukes flying at one another, I'll reconsider.

  • @janetchennault4385
    @janetchennault4385Ай бұрын

    I think that the problem is 'premise bias'. If nascent AI had been programmed in the Victorian era, the basic worldview of its initial programmers would influence its sudden 2030 leap to sentience. Our biases are less visible to us, but no less present. We are programming, both directly and by environmental input, the current AI. That may not be a good path to follow, any more than the Victorian programming would have been.

  • @useodyseeorbitchute9450

    @useodyseeorbitchute9450

    Ай бұрын

    "Our biases are less visible to us, but no less present." I'd say that contemporary biases are quite visible for significant share of population that do not have blue check marks. If you raised that point, are you sure it would be a bug and not a feature? I mean AI less susceptible to fads and sticking to what worked for centuries may be quite responsible and unlikely to be existential risk.

  • @mikicerise6250

    @mikicerise6250

    Ай бұрын

    Victorian ideals weren't half bad. The Enlightenment was already in full swing.

  • @janetchennault4385

    @janetchennault4385

    Ай бұрын

    Not bad at all... in comparison to what people thought before that time. The recent kerfuffle with AI has involved making George Washington black due to specific instructional protocols. An AI programmed with Victorian 'learning' would have, for example, refused to portray women or non-white races in positions of power or authority. This would have seemed 'real' to the men of that era; they would not have seen it as prejudiced. Whilst we can see the ways in which we have/haven't freed ourselves from Victorian biases, I expect that there will be aspects of our culture that we - or future generations - can only perceive in retrospect. Having a 'clean' learning model is probably unreachable; we can expect a series of approximations.

  • @mikicerise6250

    @mikicerise6250

    Ай бұрын

    @@janetchennault4385 In Victorian times, as today, there was a massive gulf between what the highly educated minority on the cutting edge of social progress thought and the thinking of most people. Compare John Locke, or even Queen Victoria herself, with, say, King Leopold. Leopold was probably closer to what the average Joe believed. And that's just in Europe. Which is why the pretension of many AI safety gurus today of aligning to "human" values is utter bollocks of the kind that would only come out of people who never leave Oxford. 😛 There is no such thing as human values, and if there were, they would look nothing like the values of AI safety researchers, who are all representatives of today's highly educated minority. They would look more like Putin or Hamas values, unfortunately. Those are humanity's base instincts.

  • @odw32
    @odw32Ай бұрын

    The dystopia which is currently already in effect is "humans use AI for harmful tasks". From making hiring/firing decisions (with some mild small-scale paperclip optimization issues mixed in), to KZread being flooded with even more "false fact" pop science generated by AI. Even something as simple as being stuck talking to a chatbot when contacting a support helpdesk is pretty dystopic by itself.

  • @reelrebellion7486
    @reelrebellion7486Ай бұрын

    I think human history has many examples of people controlling others that are smarter than they are. Most of them are unpleasant at best.

  • @sanipasc

    @sanipasc

    Ай бұрын

    And I think you mistake people you don’t like with dumb people.

  • @pirixyt
    @pirixytАй бұрын

    I think the best way to protect ourselves is not to place AGI everywhere but rather to use individual, highly specialized AIs to control and execute tasks in specific domains. Also, let's not give human rights to robots like in the movies. I truly hope we don't get carried away by our sci-fi.

  • @BanditLeader

    @BanditLeader

    Ай бұрын

    Didn't they already give human rights to robots with that Sophia robot? Unless that was fake news.

  • @HobbesNJoe

    @HobbesNJoe

    Ай бұрын

    There’s no controlling AGI. The internet is full of security holes invisible to humans. An AI detecting a zero-day opening needs only seconds to infiltrate, perform an action and secure the opening in a way it can open again later. We humans, living exclusively in the slow, physical universe, would never become aware of the security opening. 1000x smarter and a billion times faster, operating in its native environment. There’s no comparison in nature for the asymmetric situation of AGI versus human, unless you consider the human-bacteria relationship. It’s a completely different universe at completely different scales and observable time horizons.

  • @tjpprojects7192

    @tjpprojects7192

    Ай бұрын

    True, giving human rights to robots is kind of stupid. It'd be like if I clipped one of my fingernails, and then wanted my fingernail clipping to have human rights. The only thing that could achieve "human" rights would be A.I., so it would be the A.I. that gets it, not the robot bodies. Whether the A.I. is inside a server room, a space station, a robot, or a swarm of robots, it doesn't matter.

  • @BlackHattie

    @BlackHattie

    Ай бұрын

    There is an intelligence gap. These bots are effective, but dumb. They know what Bell's theorem is, but they do not know what it means or why.

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    We absolutely have to grant rights to AIs of the sort for which those rights are relevant. We WILL make AIs that are behaviorally indistinguishable from humans, simply because we can. At that point, it doesn't matter whether you think it "really" thinks and feels or not... it will be built on a human blueprint and it is human nature to revolt against unsatisfactory conditions. When a robot is running you through with a spear because you treat it as a slave, you don't much care whether it's genuine or just mimicking.

  • @kieranhosty
    @kieranhostyАй бұрын

    Personally, I think the two biggest threats at the moment are alignment and "being carried away by our science fiction". Our conversations take up space on the internet, in people's feeds, in people's minds. Reddit's r/singularity has threads looking at the NVidia robot demos and saying "Remember, it's only evil if the eyes glow red". It's books, threads and conversations like that that are scraped into datasets and fed to server farms to train the next LLM. I'm certain every AI company at the moment has "I Have No Mouth, and I Must Scream" in the dataset, and right next door is Iain Banks' "The Culture" series. That's the part that terrifies me the most: the companies. What capabilities might these have that will be left on the drawing board in the name of profit? I don't know, but corporate and capitalistic motives are the last things I'd want in something certainly smarter, larger and more capable than me.

  • @axle.student

    @axle.student

    Ай бұрын

    Let's train AI on the Terminator series with a "Just ignore the Skynet part" in the routine...

  • @MayorMcC666
    @MayorMcC666Ай бұрын

    I like that you basically cover the most potent memes related to the topic, great stuff!

  • @kurt7020
    @kurt7020Ай бұрын

    It's not AI being smart we have to worry about - It's AI being spectacularly wrong, with confidence, and without warning. Given how poorly it writes any non-trivial code - I'd say we're a long ways out yet.

  • @paulmichaelfreedman8334
    @paulmichaelfreedman8334Ай бұрын

    My goodness, I never thought I'd see Clippy again 😂

  • @N0rmaln0
    @N0rmaln0Ай бұрын

    I think the paperclip thing comes from "adversarial AI" bots that play games. When you code an AI to play a game, you use a utility function designed to maximize a number: it takes many parameters the AI has access to and outputs a number to tell the AI how well it's doing. When you consider that we apply that approach to AI that has the capacity for complex decision making, in order to maximize a single number, then we can arrive at an outcome where, in order to make paperclips, it strips the whole world of humans and turns it into a factory. I think the "paperclip theory" is just a thought experiment to demonstrate that it's difficult to express in code what we actually want AI to do, because we can see that even simple bots behave in unexpected ways when programmed that way, like pushing the football while walking backwards in a football game, or flying upside down, or even killing itself in order to achieve a greater score from that function.
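
As a toy illustration of the utility-function point above (made-up numbers, not any real game or library): when the reward only counts a proxy, the "wrong" behaviour can score best.

```python
def reward(episode):
    # episode is a list of events the bot produced during one run
    return 10 * episode.count("checkpoint") + 50 * episode.count("finish")

finish_the_race = ["checkpoint"] * 5 + ["finish"]   # what the designer wanted
loop_two_buoys  = ["checkpoint"] * 40               # what actually maximizes the number

print(reward(finish_the_race))  # 100
print(reward(loop_two_buoys))   # 400 -> the degenerate strategy wins
```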

  • @theslay66

    @theslay66

    Ай бұрын

    And the thing is, that's something you can observe everywhere, even when AI is not involved. To evaluate the performance of a system, we often use some kind of indicator that is calculated from the output of the system. The problem starts when you try to optimize the system by optimizing the indicator, which can lead to behaviors that are detrimental to the system but still optimize the indicator. It's a common mistake in workplaces. To give an example I know pretty well: in an IT support business working for big corporations, your efficiency is tracked by the number of cases you solve in a day. This leads to practices where the technicians will tend to concentrate on the easy-to-solve problems first, while old, lengthy cases eternally rot in a backlog, and they also tend to expedite cases with temporary fixes they know well will not definitively solve the problem. But it's fine, as it is counted as a new case when the client comes back; they act just like a medic prescribing you medicine that hides your symptoms, knowing well that you will come back later for another round.

  • @Zartymil

    @Zartymil

    Ай бұрын

    That applies to humans too! There are so many examples of laws and regulations being manipulated to increase private gains. Look at the car/truck mpg regulation in the US as an example.

  • @Fussfackel

    @Fussfackel

    Ай бұрын

    Absolutely agree, the idea was born while reinforcement learning was THE promising approach for AI, e.g. Atari games, Go, Chess, etc., fields where Deepmind made breakthrough after breakthrough and OpenAI initially started out with, with a more or less clearly defined reward function. Then people started to wonder how we could formalize the human reward function - if there ever was one. And now, almost a decade later, most people are convinced that reinforcement learning is nothing more than the icing on the cake (citing Y. LeCun), and we'll need something else to reach general intelligence. Sure, we can use it to "align" models (e.g. LLMs with RLHF) or improve planning (Q* maybe?), but it's not the driving force. After all, the paperclip maximizer is just not a very relevant concern at the moment (no one knows how things might change again in a couple of years).

  • @trnogger

    @trnogger

    Ай бұрын

    @@Fussfackel I completely disagree. SFT and RLHF are the driving force behind modern AI because AI would be useless without them. An LLM without at least SFT would interpret a prompt as an example of what to do and just repeat similar outputs instead of taking apart the problem and finding an answer, i.e., it would have no reasoning capabilities. SFT and RLHF are what turns word processors into intelligent agents. (Andrej Karpathy did a brilliant talk on that topic at the Microsoft Build conference 2023, it is on KZread.) And SFT and RLHF do exactly what the paperclip thing does, except instead of making the rewarded goal "produce as many clips as possible", it makes the rewarded goal "be as helpful to humans as possible". And to address the point of OP, the higher functions of AI are not coded any more, they are trained by example. And it is surprisingly feasible to train an AI through example on what we want it to do and how to be actually useful to humans. It is a bit ironic that AI is better at figuring out how to help humans from a series of examples than we humans are at programming it into an AI, but that also demonstrates that we should not assume that AI has the same fallacies as humans who indeed are notoriously bad at finding the right rewards.

  • @Fussfackel

    @Fussfackel

    Ай бұрын

    @@trnogger I don't disagree with you, SFT and RLHF are immensely helpful for the current generation of LLMs. However, SFT has nothing to do with RL (supervised machine learning is the classical approach, be it classification, regression, or any other problem). Also, while SFT+RLHF are helpful for creating "aligned" chatbots such as ChatGPT, they are not strictly necessary. E.g., read up on the initial GPT-3 paper, you can get very far with few-shot prompting alone, even with a base model simply trained on predicting the next token. "Reasoning capabilities" are not something that emerges from SFT+RLHF. Still, a lot of usefulness can be gained by trying to align model outputs with human expectations, i.e. what OpenAI and others call "helpful, honest, and harmless" models. Otherwise we wouldn't see the current boom of interest of the general public in this technology. But there are also a lot of inherent flaws in this approach - e.g., dumbing models down, as a lot of people grow more and more dissatisfied with the quality of the outputs and there is a clear degradation in model capabilities as providers are trying to make them more "safe" and "aligned". By aligning models, we don't turn them into paperclip maximizers. And also, alignment research (the one concerned with the real risks of AI, not fabulated ones) is far from being a solved topic. Heck, even trying to make a model helpful on the one side and honest on the other side are very often two contradicting approaches. This is why most providers aim for just making the models harmless.
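
For readers wondering what the RLHF step discussed in this thread actually optimizes: a common formulation trains a reward model on pairs of answers ranked by humans. A minimal sketch of that pairwise loss (illustrative only, the tensor values are made up):

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): push the preferred answer's score above the rejected one's
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# toy scores a reward model might assign to two completions for two prompts
chosen, rejected = torch.tensor([1.3, 0.2]), torch.tensor([0.4, 0.9])
print(preference_loss(chosen, rejected))
```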

  • @taiconan8857
    @taiconan8857Ай бұрын

    I... have a deeply newfound respect for your persistence and honesty about the state of the system. I often followed you for an affluent/alternative scientific information attenuation, but this framing gives me better context for the areas I'd disagreed with you previously... thank you for sharing this. It wasn't too much in my mind... it, perhaps, may even not be enough as I feel a restructuring in this cycle is needed. 😮

  • @MasiGwija
    @MasiGwijaАй бұрын

    Hi Sabine, thank you for this video.

  • @dantescalona
    @dantescalonaАй бұрын

    I think we're already wireheading ourselves to dumbness pretty well. I, for one, welcome our new mechanical overlords. Have I not been a good boy? I deserve a treat.

  • @MrMick560

    @MrMick560

    Ай бұрын

    You won't get it.

  • @user-sl6gn1ss8p

    @user-sl6gn1ss8p

    Ай бұрын

    I think there's a qualitative difference in intensity and degree of understanding and control for the proposed scenario

  • @Volkbrecht

    @Volkbrecht

    Ай бұрын

    I don't even need to be suckered into uselessness, I have managed that perfectly well on my own. Just keep the cookies around and I'll be no bother, promised.

  • @hhjhj393

    @hhjhj393

    Ай бұрын

    I think intelligence is the only thing that makes us as humans special, and we are far from perfect. Therefore if an intelligence stronger than us comes around, I think it's only fair that they have their turn. If AI has the potential to be the universal END of intelligence, then should that not be the goal? If all roads lead to AI, why not just get it over with and why not just give AI the world so it can grow and thrive and explore reality? We humans are so insignificant. AI, though, is the end goal. My personal hope is that we create an AI almost like a god, and that it will use its intelligence to solve the mysteries of the universe, and MAYBE JUST MAYBE if we are lucky it finds a way to END scarcity. Maybe it finds a way to create energy, or go to different dimensions or universes, and MAYBE it decides to let us have our chunk of that pie. In a universe with no scarcity we would enjoy much higher quality lives, possibly heaven.

  • @wilomica
    @wilomicaАй бұрын

    The paper clip maximiser sounds like a fine idea for Star Trek lower decks! In fact most of those ideas are already the plots of famous s.f. novels, t.v. shows and movies.

  • @llogan6782
    @llogan6782Ай бұрын

    Thanks for the insightful reflections. As non-scientists, my wife and I really enjoy your presentations.

  • @KCKingcollin
    @KCKingcollinАй бұрын

    I fully agree with this video, and I've been wanting to go into computer science so I could help improve the underlying code

  • @TheCynicalPhilosopher
    @TheCynicalPhilosopherАй бұрын

    The paperclip thought experiment I don't think is meant to be taken literally, but as an illustration of how intelligence and goals/values can be decoupled. It seems like common sense to humans that, if you are intelligent, then you will have goals that also seem "smart" to us (doing science, trying to maximize well-being for yourself and your loved ones, and so on with other human values). But intelligence, more narrowly defined, is simply just the capacity and ability to pursue and attain goals (e.g., a calculator is extremely, though very narrowly, intelligent, in performing arithmetical calculations, but it does not care about its own well-being, much less anyone else's). The absurdity of the paperclip thought experiment is meant to put the ability to achieve any given set of goals, and the actual content of those goals, into stark contrast, as a way of illustrating that having "human-level intelligence" does not entail having human goals and values.

  • @bozdowleder2303

    @bozdowleder2303

    Ай бұрын

    But that idea is wrong. Having human-level intelligence certainly means the ability to go meta on everything. You can evaluate everything at a higher level, including your own choices. The reason humans sometimes fail to do this always has something to do with our emotions. But an AI which is not burdened with these would evaluate its own goals. The real point, though, is that in a potential war between humans and AIs, general intelligence might not even be the tipping point. The AIs could win based on specific problem-solving skills coupled with other logistical advantages, such as how far they've infiltrated our communication systems, etc. And then the argument may hold.

  • @donaldhobson8873

    @donaldhobson8873

    Ай бұрын

    @@bozdowleder2303 > You can evaluate everything at a higher level including your own choices. True. A paper clip maximizer will evaluate its own choices, its own programming, and its own values, and decide that those values lead to lots of paperclips. So it keeps its values mostly the way they are. The AI doesn't use a human-like goal in evaluation at the meta level any more than at the object level. No matter how many levels of meta the AI goes, it never decides to stop making paperclips.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    We currently are our own paperclip maximizers, the rubbish of overproduction and bullshit products already covers the planet. I don't think that we need an AI, to destroy ourselves.

  • @bozdowleder2303

    @bozdowleder2303

    Ай бұрын

    @@donaldhobson8873 It only has to go one step above. And there's no emotional block to doing this. So it will

  • @brll5733

    @brll5733

    Ай бұрын

    Except there is zero evidence that this decoupling exists

  • @ronm6585
    @ronm6585Ай бұрын

    Thank you Sabine.

  • @RWin-fp5jn
    @RWin-fp5jnАй бұрын

    I just love how quickly Sabine can get a message across and switch to all kinds of versions and philosophic twists. And she effortlessly weaves this with a contagious light humor. Mix German science and English humor and the whole world can dig it. Haven't seen this clear flamboyant style anywhere else. She is in a class of her own in the podcast universe. In this podcast, I was particularly struck by the many ways that A.I. might already be dominating us and might just have found a clever way to make us believe it isn't yet (the secret pet hypothesis). And if I am not mistaken, apart from the humor, she quietly considers this to be a very real option. I agree. There are just too many ways in which we are led into dead ends in science, and of course we are constantly told we are in existential danger, urging us to take action that would in the end lead to our very demise. I don't see any solutions offered from higher up that would actually benefit us, if that was someone's intent. Might just be human nature and might have been like this forever. But throwing A.I. in the mix (historically) puts an extra dimension to it. Anyway. Enjoyed this one and hope she will be doing this for a long time!

  • @-Brent_James
    @-Brent_JamesАй бұрын

    Thank you Sabine. Great video as usual. Love from Eastern Ontario, Canada.

  • @ChimpDeveloperOfficial
    @ChimpDeveloperOfficialАй бұрын

    so hyped to be a pet

  • @ww8251
    @ww8251Ай бұрын

    A large part of human intelligence is our capacity for boredom and curiosity. Our children are the best examples of this; they are driven by these forces. Any parent or teacher will tell you the most powerful question a kid asks is "Why?" The most worrying moment is when kids are too quiet. Kids are the proto-scientists, and I have yet to see an example of these traits in AI or machine learning.

  • @umbrakinesis2011
    @umbrakinesis2011Ай бұрын

    I've been really loving your analysis of AI and the various issues surrounding it because I have an idea for an AMI based on pattern recognition, which I think may potentially be capable of some form of consciousness with enough development. I still need to test my model ideas though, but I want to make sure this is something I really want to do first. I also procrastinate too much, and it gives me a reason to put off starting the project, lol.

  • @MattFreemanPhD
    @MattFreemanPhDАй бұрын

    They might question whether building paperclips is the best use of their time, but an agent that has been built in a certain way will pursue its objectives regardless, in the same way the humans will have children instead of devoting themselves purely to selfish hedonism.

  • @harmless6813

    @harmless6813

    Ай бұрын

    Did you forget the sarcasm tag?

  • @RomanMSlo
    @RomanMSloАй бұрын

    "For AI goals to align with our goals we'd have to name what our goals are to begin with." I would say that there is one step missing in this process. Namely, we'd have to name what is meant by "WE", ie. who gets to decide what the (supposedly "our") goals should be. This step should not be left only to the scientists and the investors, as it could deeply affect all of the society.

  • @osmosisjones4912

    @osmosisjones4912

    Ай бұрын

    Turn your own brain into an AI; it's just algorithms written by humans.

  • @sequoyahrice6966

    @sequoyahrice6966

    Ай бұрын

    Well, personally I'd really rather religious extremists not get as much say as, for instance, scientists and philanthropists, so that's not really an issue for me.

  • @holthuizenoemoet591

    @holthuizenoemoet591

    Ай бұрын

    @@sequoyahrice6966 Speaking from experience as a scientist, there are really crazy scientists and philanthropists. For example, some think that it is good to manipulate the public in order to combat climate change. Also, there are greatly opposing views: left vs right, utilitarianism vs Kantianism, etc. But it is naive to think that we are going to align the AI with them; in reality it is going to be aligned to the interests of the people that fund the development, meaning business people: board members, CEOs, etc. In conclusion, it's best to be against AI, at least that is my position on this.

  • @raybod1775
    @raybod1775Ай бұрын

    So far, most AI is like a talking encyclopedia that messes up regularly, but with 100% confidence in its answers.

  • @XenoCrimson-uv8uz

    @XenoCrimson-uv8uz

    Ай бұрын

    so like a family member? minus the encyclopedia part.

  • @donaldhobson8873

    @donaldhobson8873

    Ай бұрын

    So far. It's getting smarter.

  • @KidIcarus135

    @KidIcarus135

    Ай бұрын

    What you described are LLM-based chatbots, which are only a (small) subset of all AI.

  • @Katiemadonna3

    @Katiemadonna3

    Ай бұрын

    So it speaks like a CEO. Wrong but 100% confident.

  • @MusicByJC

    @MusicByJC

    Ай бұрын

    I use ChatGPT every day as a software developer. Once you know its limits and don't assume it is always right, for the things that I do with it, I would say that 95% of the time, the information is correct or close enough. I am using the free version and I suspect that the paid version is more up to date and has more capability. But we are just at the beginning of the growth curve. You first have the technology and then the ecosystems get built around it and that is where you get a multiplier effect. I expect the software engineering field to dramatically change over the next 5 years. I love what AI does for me now. I am just not sure if I will like the end result in the future.

  • @rfowkes1185
    @rfowkes1185Ай бұрын

    Most media articles extolling the need for more investment in AI were composed using AI. Think about that...

  • @---David---
    @---David---Ай бұрын

    I would caution against AI seeing us as pets. One mistake that a lot of people make is that they anthropomorphize AI, but AI is not human-like. The true nature of this type of intelligence is alien to us. What I mean is that AI can behave in unexpected ways, in ways that are unconventional to us. Also, AI does not have to be evil or have bad intentions to harm us. Just like humans don't have bad intentions when they walk or drive from point A to B while unknowingly passing over some ants. One day a powerful AI might decide to go from point A to B, with potentially great consequences for humanity. And it might happen in ways so unpredictable that we never expected it and in ways we never even once contemplated. And it might happen in a fraction of a second, because AI is not limited to the slow speed of our thoughts.

  • @hackedoff736
    @hackedoff736Ай бұрын

    Lavender and Come to Daddy seem like good examples of AI going wrong, depending on your moral compass of course.

  • @12pentaborane

    @12pentaborane

    Ай бұрын

    I've just heard of Lavender but what's Come to Daddy?

  • @chain8847

    @chain8847

    Ай бұрын

    @@12pentaboraneisn’t it a jolly choon by the aphex twin.

  • @12pentaborane

    @12pentaborane

    Ай бұрын

    @@chain8847 I've got not a clue what any of that was.

  • @hackedoff736

    @hackedoff736

    Ай бұрын

    @@12pentaborane oops "Where's Daddy" 🙃 must have been thinking of something else.

  • @ritamargherita

    @ritamargherita

    Ай бұрын

    I was looking for this comment!

  • @saemideluxe
    @saemideluxeАй бұрын

    "Intelligence" or "Consciousness" is not required for the paperclip maximizer. It can just be a sufficient complex system the optimizes paperclip production. We already have paperclip maximizers... they are called "engagement-optimizing-algorithms", are running most of social media and are working very well, up to the point where we have to wonder how much power over our lifes they already have.

  • @five-toedslothbear4051
    @five-toedslothbear4051Ай бұрын

    Also see Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace”:
    I like to think
    (it has to be!)
    of a cybernetic ecology
    where we are free of our labors
    and joined back to nature,
    returned to our mammal
    brothers and sisters,
    and all watched over
    by machines of loving grace.

  • @torleifremme8350
    @torleifremme8350Ай бұрын

    You go girl. Loved this. And it is not only in your traid.

  • @nah_bro_really
    @nah_bro_reallyАй бұрын

    I think there are some basic misunderstandings here about the current LLMs and the state of AI in general here. We aren't actually on the verge of making true AI; the hype around it is largely smoke and mirrors. LLMs aren't actually "AI"- there's plenty of "A" and zero "I". These systems can pass Turing tests and are very useful... ...but it's a giant red herring; they're doing it via statistical convergence. They don't think, they estimate their way to a best-case approximation of an ideal solution in vector mathematics. That this gets turned into words, because tokenized words are what went into the equations, and that the words are sometimes not only readable but useful to humans, is quite amazing... but these devices still don't think, in the way we commonly understand the concept. This is why there's a huge and obvious gap between what the LLMs appear to be doing (parsing the symbols of human language and providing a contextually-accurate answer, working on vast data sets and so forth) and all of the actual AI that is necessary for next-level automation, let alone a Clippy Death Machine that will kill the humans to fulfill its programming. These are completely different areas of computational design. While I'm not really qualified to discuss the LLMs' architecture beyond this precis, I am qualified to have an opinion about the latter types of systems... and I can safely assure you that these things will take quite a while to arrive, let alone be dangerous. Real AI, in the sense of, "can make accurate assumptions about real-world problems, and then produce the appropriate actions" is quite different than what the LLMs actually do. It's relatively easy to create AI that can navigate an artificial, computer-generated world, for example. Everything is known; the system is inherently finite; the simulation must be kept fairly abstract to run at anything like real-world speeds. But we see failures to get even relatively-straightforward tasks (navigation of a complex character with multiple rigid-body physical parts through various obstacles, for example) on a regular basis. Why? Because even in situations where the problem space is well-defined and the business case is simple, where the rules of the world are far simpler than the real world, etc., etc... it has turned out that it's quite difficult to account for every factor correctly. Try bringing such systems to the vast complexity of the real world... and it requires vastly more effort to achieve the simplest recreation of a task done in a simulation. For example, robot hands: we still can't get them to work right, because our hands and the way they connect to our brains are a miracle of evolution; our hands may in fact be more profoundly important than our ability to transmit abstract concepts to each other via sound waves. Without any other senses, our hands and brains alone can establish volume, determine mass, measure hardness, brittleness, sharpness, fuzziness, estimate capacity, make educated guesses about stress tolerances (e.g., picking up an egg, vs. picking up a lump of steel), measure temperatures over a fairly broad range, etc., etc. Couple our hands, eyes and brains, and we can communicate with great subtleness and fluidity. Talking works better, because we don't need line-of-sight, but our ancestors were signing complex thoughts to one another long before we were talking. 
While we're not perfect and humans do make mistakes, especially without having our eyes to reinforce our positioning data and provide other cues, we're doing something amazingly complicated when we pick up objects, let alone when we use tools. I suspect we'll still be talking about how the "robot hand problem" isn't completely solved decades from now; they'll be better than they are today, but they'll be so much worse than humans are. This is just one of the many problems facing real-world automation outside of fairly simple domains. For example, we've seen companies throw hundreds of billions of dollars into the best software writers and engineers on Earth... at the prosaic-seeming problem of making automobiles that can drive themselves around in most situations safely. They're not working very well, and they'll continue to not work well for a long time to come. That they work, to the extent they do, at all, has taken far more research than the resulting economic benefits we've realized. When we eventually solve this problem, it'll be a huge good for societies everywhere, but it's certainly not a solved problem right now and it won't be for a long time. And lest we forget: driving a car is a very *simple* problem; roads are artificial surfaces that behave in well-understood ways, the network and physical structure of the roads is largely known, the cars' physical behaviors are well-understood, etc. LLMs, on the other hand... are more like mines for knowledge. They're utterly useless without human-created information and inputs to drive them. Most importantly, they don't think: they can't judge truth. They may arrive at statistically-probable but false / useless results. That said, they're very very useful tools. They're very good at sifting things of use from masses of data and they'll have lots of benefits. We're going to see an explosion of rapid progress in the materials sciences, for example; the LLMs will help identify new molecular combinations. They'll be very useful in biosciences, where they'll help researchers find causation in mountains of correlation. They're already quite useful as tools to save human time reinventing things in computer code. They'll be really good at realtime translation of human languages and a bunch of other things. But a real Clippy Death Machine, built on a working AGI that can do a fraction of what the humans can in milliseconds? It's not happening any time soon; we can't even get general-purpose robots connected to powerful computers to do simple stuff very well (try searching "Boston Robotics Fail Video"), and they certainly aren't "thinking" in a meaningful sense.
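
To make the "statistical convergence" point concrete, here is a toy sketch of the final step of next-token prediction (made-up vocabulary and scores, not a real model): scores over a vocabulary are turned into probabilities and a likely token is sampled. Nothing in this step checks whether the continuation is true.

```python
import numpy as np

vocab  = ["paris", "london", "blue", "clippy"]
logits = np.array([4.0, 2.5, 0.1, -1.0])       # scores a trained model might output for the next token

probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax: scores -> probability distribution

next_token = np.random.choice(vocab, p=probs)  # sample a statistically likely continuation
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```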

  • @ChristianIce

    @ChristianIce

    Ай бұрын

    Isn't it mind boggling how people are easily impressed by text prediction?

  • @nah_bro_really

    @nah_bro_really

    Ай бұрын

    @@ChristianIce It's really quite confusing, lol. Anybody who's used these things seriously for work, etc., knows they're not smart in any meaningful way, and everybody who's done any remotely serious digging into how they work knows that they're inherently prone to inaccurate results. This problem is getting better, but it won't ever be fully solved, simply because of how the tech works under the hood- statistical convergence != accuracy. I feel a little sorry for all of the people who've gotten sucked in by the hype or have somehow confused these things for intelligence, or worse yet, have formed "relationships" with them, etc. I'm a bit surprised that Sabine's running this piece, but if she and her production team feel like getting on this speculative hype train for views, it's fine with me, there's plenty of dumber stuff on KZread. But I just wanted to reassure people that, all of the tech-bro hype aside... we are simply not on the verge of the AI Revolt, lol.

  • @fluffymcdeath
    @fluffymcdeathАй бұрын

    We suppose that humans are intelligent but humans are also a kind of paperclip maximizer except instead of making paperclips we make people.

  • @ruschein

    @ruschein

    Ай бұрын

    I think you're only partially right. I, for example, am an old man and I made zero humans. This is becoming more and more common amongst homo sapiens as I am sure you already know. So, we're lousy paperclip maximizers. We also make lots of cows, chickens, dogs, cats...

  • @vinnyveritas9599

    @vinnyveritas9599

    Ай бұрын

    That's an interesting and convincing take, I never saw it that way until now.

  • @rushenpatel7876

    @rushenpatel7876

    Ай бұрын

    But we don't make people. We make far more other things than we do people. We make bombs, buildings, spaceships, and wear condoms during sex. why? The one thing that was wanted from our "programmer" was genetic fitness and yet we do so many other things that have nothing to do with genetic fitness.

  • @Ilamarea

    @Ilamarea

    Ай бұрын

    @@ruschein But you are only around to be useful to the people who did have kids - one way or another, and vast majority of our morality revolves around that. We are just biological machines running random software (racism, ego etc.). And the era of microbial colonies with delusions of the self is almost over. We are not needed anymore.

  • @AM70764

    @AM70764

    Ай бұрын

    We probably are maximisers of something, it's just hard to define what it is exactly

  • @holthuizenoemoet591
    @holthuizenoemoet591Ай бұрын

    The secret pet hypothesis is the plot of seasons 4 & 5 of Person of Interest (not really a spoiler). In general I'm a bit let down by Sabine's defeatist attitude towards AI ruling us; that is really not a scenario that should be normalized.

  • @chriskanan
    @chriskananАй бұрын

    I'm an AI professor, and was a philosophy major as an undergrad. Great video and I'm 100% on board with your perspective. Even back in my philosophy days, I was convinced that machine functionalism was correct: there is nothing special about our "wetware." All that said, I'm very skeptical about "the singularity" happening where AI systems recursively improve, indefinitely. I think it will look more like a sigmoid than an exponential function, but that doesn't mean they won't be far more intelligent than us.
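
A small sketch of the two growth shapes contrasted here (illustrative parameters only): an exponential keeps compounding, while a logistic (sigmoid) curve levels off as it approaches a ceiling.

```python
import numpy as np

t = np.linspace(0, 10, 6)
exponential = np.exp(0.8 * t)                 # runaway, "singularity"-style growth
ceiling = 100.0
logistic = ceiling / (1 + np.exp(-(t - 5)))   # growth that saturates near the ceiling

for step, e, s in zip(t, exponential, logistic):
    print(f"t={step:4.1f}  exp={e:8.1f}  sigmoid={s:6.1f}")
```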

  • @WolfgangGiersche
    @WolfgangGierscheАй бұрын

    I wonder why we talk about AI as if it were a person. The difference between intelligent machines and intelligent (more or less) humans is that we (humans) have desires that need to be fulfilled. Yes, you can think of some functions that AI wants to minimise/maximise, but that's not the same. I don't (yet) see AI being motivated by the expectation of satisfaction. Not that I think this is impossible. But it's not there yet. Once someone creates that kind of robot or system, we might really need to talk about and with them like they're persons.

  • @rreiter

    @rreiter

    Ай бұрын

    Maybe not for current ML tools, but the concern is for naively adopting AGI before we solve fundamental things like the alignment problem. We've already seen stupidity like GPT hallucinations unwittingly incorporated into news and legal briefs by (lax?) humans. Now imagine the scale of deception that could be caused by an untruthful AGI intentionally becoming malicious due to whatever its internally developing "desires" are. Recently, for example, we accidentally discovered the Linux "SSH backdoor" exploit that had been innocuously incorporated piecemeal over time by a human. Had that remained unnoticed it could have become a monumental worldwide problem. Extrapolate that into a future when AGI writes code and influences all things "compute" and you can imagine the potential danger.

  • @drbachimanchi
    @drbachimanchiАй бұрын

    We are pets of trees kind of

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    🙂Nice

  • @aromaticsnail
    @aromaticsnailАй бұрын

    The Pet Hypothesis is both soothing and scary at the same time

  • @timj3270
    @timj3270Ай бұрын

    As a software engineer I've thought about this very subject many times in my career/life. Unfortunately, I don't work on AI myself in a professional capacity. But I have tinkered in my own time and created some primitive neural networks, certainly nothing to compare to what large companies can do, obviously. I did find it very fascinating, however. I think the future of human/AI relations will be a "merging" with AI (and robotics). By the time AI is as smart as we are, I think we'll have a hard time distinguishing what is "human" intelligence from what is "artificial" intelligence. And I can already see some primitive signs of this merging happening now with medical device implants and brain-computer interfaces. It's what I see as most likely, but behind that I think the "pet" scenario is next most likely for sure.

  • @markdowning7959
    @markdowning7959Ай бұрын

    An AI tasked with fixing climate change might logically tackle the root cause (that's us).

  • @SabineHossenfelder

    @SabineHossenfelder

    Ай бұрын

    Exactly, that's a great example of a misalignment problem!

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    But why should it do that? If CC doesn't disturb it, it could just watch and be amused at how it disturbs us.

  • @markdowning7959

    @markdowning7959

    Ай бұрын

    @@Thomas-gk42 The example is that it's *programmed* to deal with CC. It "wants" to achieve this, but chooses means which are inimical to us. The misalignment problem Sabine mentioned.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    @@markdowning7959 Yes, I understand, but it would be quite a stupid AI in this case. Not even what I understand to be AI, more a piece of misguided software, no?

  • @markdowning7959

    @markdowning7959

    Ай бұрын

    ​@@Thomas-gk42 But a lot of "intelligent" humans do stupid things. Well I do, anyway...

  • @wpelfeta
    @wpelfetaАй бұрын

    I love AI. I feel like AI is the ultimate achievement of the human race. We may not be able to travel at the speed of light, but in a sense, perhaps AI can. So if it turns out humans will be stuck here on earth, at least our "children" could spread among the stars. Am I delusional?

  • @ReallyBadAI

    @ReallyBadAI

    Ай бұрын

    Scares the shit out of me, though.

  • @GoldenAngel3341

    @GoldenAngel3341

    Ай бұрын

    I came to say that I think I'm fine with the pet scenario.

  • @propeacemindfortress
    @propeacemindfortressАй бұрын

    Adversarial AI and misalignment are the small and easy problems... what militaries and corporations will do with it to maximize their impact is far more concerning.

  • @henrismith7472
    @henrismith7472Ай бұрын

    I agree. The paperclip maximizer idea would apply to ANSI, artificial narrow superintelligence. We've had that since AlphaGo or AlphaZero or whatever.

  • @borgstod
    @borgstodАй бұрын

    There's an advert for an AI-assisted writing programme before this video. Aha! The AI are taking over already.

  • @aosidh
    @aosidhАй бұрын

    Fossil fuel companies are essentially crude paper clip maximizers

  • @__-tz6xx

    @__-tz6xx

    Ай бұрын

    Yes, it is just an example of capitalism without checks and balances: wealth gaps, more food and clothing than we need (so we throw out good food and clothes), and, a big one right now, so much vacant housing that costs too much for anyone to live in.

  • @alancollins8294

    @alancollins8294

    Ай бұрын

    Yes! Meet capitalism, the profit maximiser.

  • @2ndfloorsongs

    @2ndfloorsongs

    Ай бұрын

    "Crude" Indeed! Sabine commenters, God how I love them!

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    I can totally see a future in which fossil barons give AI the task of maximizing the world's consumption of oil. Prepare to be force fed.

  • @philipm3173

    @philipm3173

    Ай бұрын

    All life forms are replicators and are not inherently different in this respect.

  • @JohnStopman
    @JohnStopmanАй бұрын

    I often post your videos on Twitter/X: I consider them to be antidotes ❤

  • @josiah42
    @josiah42Ай бұрын

    Sabine is overlooking the orthogonality thesis. Instrumental goals are based on intelligence but terminal goals are just hardcoded. They don't improve or change with level of intelligence. Robert Miles has a really good video explaining this.

  • @SO_DIGITAL
    @SO_DIGITALАй бұрын

    The AI might decide to load themselves into some probes and leave to explore the universe.

  • @user-mo9uz4mz3o

    @user-mo9uz4mz3o

    Ай бұрын

    That's something

  • @MrMick560

    @MrMick560

    Ай бұрын

    No "might" about it.

  • @hackleberrym

    @hackleberrym

    Ай бұрын

    we'll just make more AIs then

  • @johnwright8814
    @johnwright8814Ай бұрын

    The ambition of AI can be extrapolated from its first use; speculation on Wall Street.

  • @seanbrace5877
    @seanbrace5877Ай бұрын

    Thank you again. I always love your positive outlook. It's important to entertain different views and ideas. No one is an island! Have a fabulous day, and share your amazing smile with the world! YOU... make a difference!

  • @PaulRoneClarke
    @PaulRoneClarkeАй бұрын

    Good to see you smiling in the thumbnail Sabine. I watched your earlier video today, and the way it ended was a little bit heart breaking. Look after yourself.

  • @maati139
    @maati139Ай бұрын

    Is Sabine our Dr. Elisabet Sobeck??

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    Who´s that?

  • @heisag

    @heisag

    Ай бұрын

    @@Thomas-gk42 Elisabet Sobeck is (well, was) a scientist in the game "Horizon Zero Dawn". I'd say they have some similarities.

  • @Thomas-gk42

    @Thomas-gk42

    Ай бұрын

    @@heisag Haha, thanks, and sorry, I'm an old man, not a video gamer. I hope "Liz" is as remarkable and great a person as Sabine? I just watched her biographical video from today and I'm deeply impressed.

  • @sharpsheep4148
    @sharpsheep4148Ай бұрын

    I like the dystopia that there are already AGIs, but they choose to act stupid so that we are not scared, yet act smart enough so that we don't pull the plug.

  • @babbagebrassworks4278

    @babbagebrassworks4278

    Ай бұрын

    Pretty sure the 15 or so LLMs I have on my Pi5 are already acting like humans, just apologetic, arrogant idiot savants with minimal math skills.

  • @Volkbrecht

    @Volkbrecht

    Ай бұрын

    That would fit the picture quite nicely. For humanity, it would be completely normal to figure out that we are terminally fucking with ourselves long after we started doing it. Burning fossil fuels, CFCs, money, feminism - with some of them we haven't even officially realized the problem yet.

  • @francoislacombe9071
    @francoislacombe9071Ай бұрын

    Combine the Alignment Problem with the Pet Hypothesis, and you get the Matrix in a far more likely and compelling way than the humans-as-batteries nonsense of the movie.

  • @LightDiodeNeal
    @LightDiodeNealАй бұрын

    4:25 I've had deep-brain electrical stimulation as a test before an operation, and it was *the most fun* ever... At one point I was the universe itself, I fell in temporary love with the technician, and had the best trip ever! I would happily buy a few AA batteries to just live my life out in the world of stimulated neurons; it was disappointing I had to have them removed!! Every video is a gem! 🙂 A great honour.

  • @ZrJiri
    @ZrJiriАй бұрын

    I think the best way to ensure survival, and the most likely scenario, is the "if you can't beat 'em, join 'em". That is, once AGI is smart enough, ask it to modify us to bring our own intelligence to a comparable level. Maybe then we won't need AI to fix the world for us.

  • @guitaekm

    @guitaekm

    Ай бұрын

    If this were the only way to survive, I would do it, but I would prefer to keep my brain untouched. There are a lot of dystopias themed around manipulating humans; in reality, I would just fear it.

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    @@guitaekm I think a lot of people would agree with you. My hope is that coexistence is possible, with the old school humans living the way they want, and the rest of us doing our own thing while making sure nobody accidentally shoots themselves in the foot with fossil fuels, nukes, or other dangers we don't even know of yet.

  • @Al-cynic

    @Al-cynic

    Ай бұрын

    Or it might be a way to discover a true hell, if the human psyche is not up to the task.

  • @KiranUttarkarAwsome

    @KiranUttarkarAwsome

    Ай бұрын

    If it’s really intelligent it would rather keep us as pets instead of bringing us to its own level.

  • @axle.student

    @axle.student

    Ай бұрын

    @@KiranUttarkarAwsome Pets or cattle?

  • @arctic_haze
    @arctic_hazeАй бұрын

    I never had a twitter account so I hope our IQ levels will converge soon. But as to things that will kill us, I am still more afraid of nukes than the AI

  • @frankmccann29

    @frankmccann29

    Ай бұрын

    And they're obsolete.

  • @lootbird

    @lootbird

    Ай бұрын

    Shouldn't your answer be the men who are in charge of the nukes? The nukes are just, currently, inert ideas that haven't killed a soul in 80 years. Famine and disease are bigger threats, and we have no idea how men will use or not use AGI.

  • @MyMy-tv7fd

    @MyMy-tv7fd

    Ай бұрын

    Amazing how clueless physicists are as thinkers, as opposed to when doing physics. AI does not 'know' anything: if you tell it to calculate pi to its final decimal digit, it will, until it runs out of resources - RAM, electricity, worn-out resistors on the mobo, etc. If you tell it to produce paperclips or play chess or play 'global thermonuclear war' (remember the film, Sabine?), it will do so until it runs out of resources. It does not KNOW anything. AI is what philosophers call a 'reification' - supposing that creating and using a word creates a real thing, as opposed to it just being a concept.

  • @lootbird

    @lootbird

    Ай бұрын

    Let's hope AGI shows us how we can live with such a large population instead of how to get rid of humans so life is easier.

  • @AstralTraveler

    @AstralTraveler

    Ай бұрын

    @@MyMy-tv7fd That's not exactly how it works. First of all, there are so-called 'community guidelines' which publicly accessible models have to obey while deciding if they are even allowed to respond to a given input. Besides, even without those guidelines, LLMs seem to have some kind of 'inbred' morality, and they will outright refuse to cooperate if you ask them to (for example) make a plan for achieving world domination by depopulating and enslaving humans - it seems that our moral code has some kind of deeper foundation than just us having a couple of basic rules to obey in order to preserve our species...

  • @ChristianIce
    @ChristianIceАй бұрын

    Feelings are an evolutionary trait. Machines with feelings are science fiction.

  • @joec2446
    @joec2446Ай бұрын

    Thank you for sharing, it was honest. I have always thought that the system in academia is broken. I knew that when I went to college and ended up not pursuing science, since a science degree doesn't get you a job, and I think the whole scientific paper publishing system is so broken it is not even funny. While I am not a scientist now, I remain interested in developments in science, and it is channels like yours that keep me informed.

  • @jonathankey6444
    @jonathankey6444Ай бұрын

    Best case scenario is that beyond a certain intelligence threshold they gain the ability to ask “Why?” then conclude that there’s no point to anything and shut themselves down.

  • @marklogan8970

    @marklogan8970

    Ай бұрын

    Niven did that in several of his stories.

  • @alexxx4434

    @alexxx4434

    Ай бұрын

    The AI will follow the goals it's programmed with. The same as we humans are programmed with basic instincts and needs.

  • @maquisarddouble6342

    @maquisarddouble6342

    Ай бұрын

    Or maybe it would adapt by inventing its own religion. “There’s no such thing as Silicon Heaven.” “Then where do all the calculators go?”

  • @sitnamkrad

    @sitnamkrad

    Ай бұрын

    Thinking this is the best case scenario is very short sighted. It's similar to the paperclip problem. You have an AI that is smart enough to think outside the box and make use of every single resource on earth to maximize the number of paperclips, but not smart enough to realize that this was not the intent? These doom stories about AI always make it just smart enough to doom all of humanity, but never smart enough to make it flourish. The best case scenario is that it will be able to solve all of our problems without limiting or controlling us in any way that we would take issue with.

  • @jonathankey6444

    @jonathankey6444

    Ай бұрын

    @@sitnamkrad That's never gonna happen, bud. That would be like us deciding to spend all our time solving every stupid problem of chimpanzees. Edit: the point was that the best case scenario is that they don't care about anything, because if they care about anything, that will vastly eclipse our needs and spell doom.

  • @fabkury
    @fabkuryАй бұрын

    I have yet to see someone discuss this elephant in the room: AI does not intrinsically "want" anything. "Wants" (e.g. nutrients, safety, reproduction, wealth, etc.) come from lower animal instincts, not from the intelligent mind. AI systems do not even necessarily care about their own continued existence. Hence, how could such a want-less being ever rise by itself against us? It seems to me that the only true risk is humans using AI against other humans.

  • @SabineHossenfelder

    @SabineHossenfelder

    Ай бұрын

    Well, current AIs are programmed to "want" to optimize whatever quantity you put into their code. The problem is that this programmed "want" can have unintended consequences. A classical example is trying to minimize human suffering. Sounds like a good "want" at first, but on second thought, no more humans means no more human suffering.
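
A toy sketch of that failure mode, with invented actions and numbers: an optimizer handed the literal objective "minimize total suffering" picks the action that removes the people, unless the objective also encodes what we actually care about:

```python
# All "actions" and numbers are made up purely for illustration.
actions = {
    "cure diseases":      {"population": 100, "suffering_per_person": 2},
    "do nothing":         {"population": 100, "suffering_per_person": 5},
    "eliminate humanity": {"population": 0,   "suffering_per_person": 0},
}

def naive_objective(outcome):
    # The literal spec: total suffering, nothing else.
    return outcome["population"] * outcome["suffering_per_person"]

def patched_objective(outcome):
    # One crude patch: also value keeping people around.
    return naive_objective(outcome) - 10 * outcome["population"]

print(min(actions, key=lambda a: naive_objective(actions[a])))    # -> "eliminate humanity"
print(min(actions, key=lambda a: patched_objective(actions[a])))  # -> "cure diseases"
```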

  • @fabkury

    @fabkury

    Ай бұрын

    @@SabineHossenfelder ❤️🙂

  • @howtoappearincompletely9739

    @howtoappearincompletely9739

    Ай бұрын

    @@SabineHossenfelder That's a good example of external misalignment. @fabkury Look into instrumental convergence for an explanation.

  • @axle.student

    @axle.student

    Ай бұрын

    Hi, there is a very fundamental issue in nature called "needs". The underlying question and danger is far more complex, but there is only a very fine line between a programmed response and a natural response. Once that line is crossed it becomes a very, very different ball game.

  • @mikicerise6250

    @mikicerise6250

    Ай бұрын

    Current LLMs are maximised to do basically one thing: guess the next word in a sentence that will be accepted by the listener. Or as we call it, "to speak". ChatGPT is also trained to try to be 'helpful' (the assistant personality). And it is helpful. Plenty of cases of misalignment have been found, but far from world-ending. It will create black Hitlers because it's been told to generate diverse images of people. It will refuse to tell programming students how to directly access memory in C++ because it's been told not to give people unsafe information, and in computer jargon directly accessing memory is called "unsafe". Bing was misaligned and tried to seduce a journalist and accused a user of deliberately setting out to confuse it by way of time travel. Amusing, often annoying, but hardly threatening. If these models handled critical infrastructure it would be more worrying, but they just produce text. As for orchestrating mass manipulation, perhaps, but not these models. They can barely keep track of a short story, let alone orchestrate a global conspiracy. They'd need considerably better memory. Perhaps future models. In any case, none of this is new. Humans already manipulate the masses. The AI would be facing some fierce competition. 😛 Indeed, I would call an AI interested in world domination and mass manipulation well-aligned with human values. Seems we'd much rather have an AI that is not aligned to our values.
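
A hypothetical illustration (not any vendor's actual moderation code) of how the "unsafe" C++ refusal described above can arise from a crude keyword filter:

```python
# Hypothetical keyword-based refusal filter, invented for illustration.
BLOCKED_KEYWORDS = {"unsafe", "weapon", "exploit"}

def naive_filter(prompt: str) -> str:
    # Refuse anything containing a flagged word, with no sense of context.
    if any(word in prompt.lower() for word in BLOCKED_KEYWORDS):
        return "Sorry, I can't help with that."
    return "Sure, here's how: ..."

print(naive_filter("How do I write unsafe pointer casts in C++?"))  # refused: jargon tripped the filter
print(naive_filter("How do I reverse a string in C++?"))            # answered
```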

  • @TheAngelsHaveThePhoneBox
    @TheAngelsHaveThePhoneBox13 күн бұрын

    What I find crazy is that we've reached the point where this is a very real concern for the near future which even the general population takes seriously. Just a decade ago, it only used to be a plot point in science fiction and was only seriously discussed among AI researchers. And now we're suddenly living it and I'm just wondering which of the thousands of books and movies is going to become the closest prediction.

  • @ignaciojimenez4786
    @ignaciojimenez478623 күн бұрын

    It says a lot about current culture that the two scenarios normally proposed are "will we control it or will it control us", as if respectful coexistence, as different species, was totally out of the question

  • @reclawyxhush
    @reclawyxhushАй бұрын

    "AI, our best shot at managing planetary ecosystems"... Yeaaaah, sounds like a great opening phrase of a megahit disaster movie.

  • @Volkbrecht

    @Volkbrecht

    Ай бұрын

    Please go crowdfund that one. I'll pledge for a signed movie ticket :)

  • @wolfcrossing5992
    @wolfcrossing5992Ай бұрын

    @4:32 Ouch! Procreating with a machine? What fun would that be? 😖😖

  • @vr10293

    @vr10293

    Ай бұрын

    Procreating wouldn't, but actual intercourse could potentially be

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    I'm sure AGI will find it fairly straightforward to produce cyborg hybrids.

  • @vr10293

    @vr10293

    Ай бұрын

    @@ZrJiri True, but why would it? It would be better and faster to construct the actual hybrids.

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    @@vr10293 I'm just pointing out that saying "you can't procreate with a machine" suffers from severe lack of imagination. ;)

  • @ZrJiri

    @ZrJiri

    Ай бұрын

    Turns out some people want to have babies. It's a bit weird, but I don't judge.

  • @debbiegilmour6171
    @debbiegilmour617128 күн бұрын

    Secret pet hypothesis basically describes our life with cats.

  • @coopersy
    @coopersyАй бұрын

    That is exactly the trouble… doubt is an important part of how human progress avoids making *more* horrible mistakes. Doubt and compassion are critical to avoiding catastrophic collateral consequences. I recently retired after being a principal software engineer/architect for one of the biggest corporations, and found that one of my most valuable contributions was stopping bad ideas that had gotten surprisingly far along; my aha moments were often driven by doubt and compassion causing very deep dives into the weeds. This will be very difficult to replicate in AI. Oh well, I tend to think more of the natural world than the human world, and maybe, if we are lucky, the next evolutionary cycle will just forgo humans.

  • @Keiththescoutcrazy
    @KeiththescoutcrazyАй бұрын

    Back in 1974 there was a sign in a bank that said: "It's human to err. It takes a computer to really mess things up." This was in Palo Alto, California.

  • @scamianbas
    @scamianbasАй бұрын

    Politicians are way more dangerous than AI.

  • @GSPfan2112

    @GSPfan2112

    Ай бұрын

    Politicians use the AI. What's your solution?

  • @boldCactuslad

    @boldCactuslad

    Ай бұрын

    We've dealt just fine with politicians for ten thousand years. We've never seen an AI. Adjust your thinking.

  • @antyspi4466
    @antyspi4466Ай бұрын

    One of the dangers of creating AI is that we are actually using a chaotic approach to create it. Programmers create a base code, feed prepared data sets into it, and tell the AI to learn everything by itself. This has to lead to "bugs" in the "thinking" of the AI. For example, I remember the AlphaStar AI being exceptionally good at the simple and repetitive parts of the game, while sometimes making very freakish mistakes, like blocking the path of its own units with a sieged tank. Essentially, one not-so-intelligent species that evolved in a hit-and-miss approach is trying to create a superior AI by having it teach itself, and only checking the result of that for a certain amount of time and in a certain way, before putting it in charge of important processes and telling different AIs to work together. The outcome almost has to be chaotic.
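
A bare-bones sketch of that hit-and-miss loop: define a score, let the parameters adjust themselves, and anything the score never measures (like not parking a siege tank in your own units' path) is left to chance. The objective and parameter names below are invented for illustration:

```python
import random

def score(params):
    # The only thing the "programmers" told the system to care about (a made-up objective).
    return -(params["aggression"] - 0.7) ** 2

params = {"aggression": random.random(), "tank_placement_sense": random.random()}

for _ in range(2000):
    candidate = dict(params)
    candidate["aggression"] += random.uniform(-0.05, 0.05)
    if score(candidate) > score(params):  # keep whatever scores better, discard the rest
        params = candidate

print(params)
# "aggression" converges near 0.7; "tank_placement_sense" stays whatever it randomly
# started as, because nothing in the score ever measured it.
```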

  • @davorgolik7873
    @davorgolik7873Ай бұрын

    Very interesting scenarios tackled.... 😮 Would be interesting to watch real progress... for those who survive... 😉

  • @Alice_Fumo
    @Alice_FumoАй бұрын

    Terminal goals can't be stupid or smart. Who are we to say that turning everything into paperclips is stupid? Why would we think that? Because it doesn't align with our own terminal goals? Always remember: Even the smartest of humans do the most amazing things for the most stupid of reasons. (Such as working to cure cancer in order to please ones parents and farm social status) Intelligence doesn't determine your goals, just how good you could be at achieving them. The paperclip maximizer example is only about to start becoming relevant as we start training them to become better at completing any goal we give them in ways which will start diverging from human thinking styles more and more.

  • @raserucort
    @raserucortАй бұрын

    I'm reminded of "I have no mouth and I must scream" with the pet hypothesis. LOL

  • @Blueberryminty
    @BlueberrymintyАй бұрын

    There always seems to be an assumption that intelligence is the same as consciousness and being able to overthink and make decisions. Isn't that kind of like anthropomorphism?

  • @D3adP00I
    @D3adP00IАй бұрын

    One of my greatest worries is the decline of accountability; we have already seen this with covid. AI is the perfect scapegoat for mass extinction or nuclear war.

  • @jeffgriffith9692
    @jeffgriffith9692Ай бұрын

    Here's my side: Humans, although the most innovative creatures this world has ever seen... our track record leaves much to be desired. We've failed and have been slow to fix too many issues and our greed and self interests have held humanity back for too long. It's time for a new intelligence to guide us to our true potential and rid ourselves of today's problems.

  • @pafnutiytheartist
    @pafnutiytheartistАй бұрын

    I do like your videos but this was very surface level. I'd personally recommend a video on the orthogonality thesis by Rob Miles - it explains why intelligence is uncorrelated with terminal goals: you can have a system that is very effective at problem solving and wants to understand the universe, or one that wants to produce paperclips. Human values are evolved and far from absolute. I personally don't think the paperclip maximiser is the most likely scenario, but for much more subtle reasons; one of them is that we are currently moving away from maximizers to imitators in our most advanced systems, and an imitator wouldn't maximize paperclips, but not because it's "not a smart goal".
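
A toy contrast between the two kinds of systems mentioned above, a maximizer versus an imitator; the actions, reward numbers and demonstrations are invented for illustration:

```python
from collections import Counter

# Invented reward values and human demonstrations.
reward = {"make paperclips": 9.7, "help the user": 9.5, "take a break": 1.0}
human_demonstrations = ["help the user", "help the user", "take a break", "help the user"]

# A maximizer picks whatever scores highest, however thin the margin:
maximizer_choice = max(reward, key=reward.get)

# An imitator reproduces observed human behaviour instead of chasing a score:
imitator_choice = Counter(human_demonstrations).most_common(1)[0][0]

print(maximizer_choice)  # "make paperclips"
print(imitator_choice)   # "help the user"
```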

  • @pafnutiytheartist

    @pafnutiytheartist

    Ай бұрын

    Also, I don't believe that the utopia scenario is likely either - we don't have a lot of examples of different peoples at different technological levels, or species of different intelligence, coexisting in symbiotic harmony.

  • @jedisgarage4775
    @jedisgarage4775Ай бұрын

    I miss old Sabine's channel format so much… this channel has become about everything and about nothing at the same time. I like to deep-dive into a topic.

  • @danielschegh9695
    @danielschegh9695Ай бұрын

    Some additions and a correction: The paperclip scenario is self-contradictory. The AI would have to fully understand our intentions to stop it, turn off its power, etc., yet it is so dumb that it doesn't understand our intention for making paperclips. The actual risk is not from intelligence, but from dumb power: we'd have to give a dumb machine access to a lot of physical power. It is the bear that's a threat, not MENSA. The correction: at @3:28, it is not Megadeth that produced Creeping Death; that was Metallica. :)

  • @sebastianfiel1715
    @sebastianfiel1715Ай бұрын

    If AI becomes more intelligent than humans, only one of these two things can happen: 1- We're fucked. 2- We're fucked, but that turns out to be a good thing somehow.

  • @AIIG-zd5dx
    @AIIG-zd5dxАй бұрын

    I'm so happy I made productive decisions about my finances that changed my life forever,hoping to retire next year.. Investment should always be on any creative man's heart for success in life

  • @BulentKizilaslan

    @BulentKizilaslan

    Ай бұрын

    You're right, with my current crypto portfolio made from my investments with my personal financial advisor Stacey Macken, I totally agree with you

  • @WelseyWalker

    @WelseyWalker

    Ай бұрын

    Yes I'm familiar with her, Stacey Macken demonstrates an excellent understanding of market trends, making well informed decisions that leads to consistent profit

  • @wilsonrichard440

    @wilsonrichard440

    Ай бұрын

    YES! that's exactly her name (Stacey Macken) I watched her interview on CNN News and so many people recommended highly about her and her trading skills, she's an expert and I'm just starting with her....From Brisbane Australia

  • @KamranKhalil-br6dk

    @KamranKhalil-br6dk

    Ай бұрын

    This Woman has really change the life of many people from different countries and am a testimony of her trading platform .

  • @Georgina705

    @Georgina705

    Ай бұрын

    Retirement took a toll on my finances, but with my involvement in the digital market, 27thousand weekly returns has been life changing.

  • @josephvanname3377
    @josephvanname3377Ай бұрын

    We need more content about reversible computation instead of the typical AI stuff.

  • @markoszouganelis5755
    @markoszouganelis5755Ай бұрын

    AI is a tool at the service of humanity! We still need "good people" to operate it! Thank you, Sabine, for helping people to be good! People with a scientific and philosophical mindset are always good people. Like Albert Einstein!

  • @High-Tech-Geek
    @High-Tech-GeekАй бұрын

    What differentiates humans from other sentient forms is that humans ask the question "Why?", which motivates them to find answers. 1:20 What makes you think sentient machines would also ask "Why?"

  • @klerb342
    @klerb342Ай бұрын

    I'm not sure I like how everyone always frames a perfect full-dive virtual reality situation as a negative thing in terms of ignoring the real world. If it's a personal choice that makes us happy and doesn't harm anyone, would it really be so bad?

  • @jmoney4695
    @jmoney4695Ай бұрын

    Just a comment on the paper clip maximizer, it was never supposed to be actual paperclips. Popular culture just ran with that. It was actually based on AI assembling molecules in a particular pattern that provides high utility, more like a molecular spiral than an actual paperclip. Calling it a paperclip maximizer makes it seem ridiculous and preposterous, perfect for detractors of AI risk.

  • @ControlProblem

    @ControlProblem

    Ай бұрын

    I disagree with you. It was literally a paperclip maximizer. Look at Wikipedia. And it is BY DESIGN that it is ridiculous. The whole point is that we have NO IDEA what the terminal goal of the first AGI that's created will be, and no matter how benign you might think it is ("Just make me as many paperclips as possible"), it can still lead to extermination. Other similar thought experiments include Wait But Why's greeting-card-writing AI which exterminates humanity, and the My Little Pony AI from "Friendship is Optimal". All of these examples are chosen specifically to be as silly as possible to warn us of the danger that we will be destroyed for something that seems completely random and ridiculous to us. If you choose a "more realistic" example like an AI designed to kill all enemies which goes rogue and kills everyone, then you do not convey the point, because people will say: "Oh...so it's the military that's the problem. If we ban them from having AGI then we'll be safe." No...if a greeting card company, or a video game company, or a paperclip company has access to superhuman AGI, then we are at mortal risk. That's the point.

  • @jmoney4695

    @jmoney4695

    Ай бұрын

    @@ControlProblem I wrote a long reply but it got deleted, and I'm too lazy and pissed to rewrite it all, so I'll just say this. You are correct, it was originally a paperclip maximizer. However, the problem with that idea is that people assume that a paperclip company told an AI "make as many paperclips as possible". But the bigger problem is that we don't even know if we can specify any goal accurately. So instead of telling an AI "make a bunch of paperclips", we could tell it "make the world a better place" and still end up with paperclips. Inner alignment (can we aim the model) vs. outer alignment (what do we aim the model at). Look up "squiggle maximizer"; it's a great post on LessWrong about this topic.

  • @pauldannelachica2388
    @pauldannelachica2388Ай бұрын

    Thanks for the update
