Will AI Destroy Us? - AI Virtual Roundtable

Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He has authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust".
This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more.
It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet.
Pre-order my book:
"The End of Race Politics: Arguments for a Colorblind America" - bit.ly/48VUw17
FOLLOW COLEMAN:
Check out my Album: AMOR FATI - bit.ly//AmorFatiAlbum
Substack - colemanhughes.substack.com
Join the Unfiltered Community - bit.ly/3B1GAlS
YouTube - bit.ly/38kzium
Twitter - bit.ly/2rbAJue
Facebook - bit.ly/2LiAXH3
Instagram - bit.ly/2SDGo6o
Podcast - bit.ly/3oQvNUL
Website - colemanhughes.org
Chapters:
00:00:00 Intro
00:03:45 The Uncertainty Of ChatGPT's Potential Threats
00:05:50 The Need To Understand And Align Machine Values
00:09:01 What Does AI Want In The Future?
00:14:44 Universal Threat Of Superintelligence: A Global Concern
00:17:13 Inadequacy Of Bombing Data Centers And The Pace Of Technological Advancements
00:20:48 Current Machines Lack General Intelligence
00:25:46 Leveraging AI As A Partner For Complex Tasks
00:29:46 Improving GPT's Knowledge Gap: From GPT-3 To GPT-4
00:32:00 The Unseen Brilliance Of Artificial Intelligence
00:37:27 Introducing A Continuum Spectrum Of Artificial General Intelligence
00:39:54 The Possibility Of Smarter Future AI: Surprising Or Expected?
00:42:19 The Importance Of Superintelligence's Intentions And Potential Threat To Humanity
00:47:20 The Evolution Of Optimism And Cynicism In Science
00:52:17 The Importance Of Getting It Right The First Time
00:53:53 Concerns Over Artificial Intelligence And Its Potential Threat To Humanity
00:57:39 Importance Of Global Coordination For Addressing Concerns About Superintelligence
00:59:04 Exploring The Potential Of Superintelligent AI For Human Happiness
01:03:32 The Potential Of AI To Solve Humanity's Problems
01:05:45 The Uncertain Impact Of GPT-4
01:08:30 The Future Of Utility And The Dangers Ahead
01:15:04 The Challenge Of Internalized Constraints And Jailbreaking
01:19:04 The Need For Diverse Approaches In Alignment Theory
01:23:47 The Importance Of Legible Warning Bells And Capability Evaluations
01:26:31 Exploring Hypotheses And Constraints For Robot Behavior
01:27:44 Lack Of Will And Obsession With LLMs Hinders Progress In Street Light Installation
01:33:20 The Challenges Of Developing Knowledge About The Alignment Problem
#ConversationswithColeman #CWC #ColemanHughes #Podcast #Politics #society #Colemanunfiltered #Unfiltered #Music #Philosophy #BlackCulture #Intellectual #podcasting #podcastersofinstagram #KZread #podcastlife #music #youtube #radio #comedy #podcastshow #spotifypodcast #newpodcast #interview #motivation #art #covid #history #republicans #blacklivesmatter #follow #libertarian #art #socialism #communism #democracy #woke #wokepolitics #media #chatgpt #AI #EliezerYudkowsky #GaryMarcus #ScottAaronson

Comments: 560

  • @markupton1417 (5 months ago)

    Everyone I've seen debate Yudkowsky agrees with enough of what Big Yud says to COMPLETELY justify stopping development until alignment is achieved, and yet... they ALL imagine the most optimistic outcomes imaginable. It's an almost psychotic position of, "we need to slow down, but we shouldn't rush into slowing down."

  • @Jannette-mw7fg (3 months ago)

    So true!

  • @christianjon8064 (2 months ago)

    They’re a psychotic death cult that’s demonically possessed

  • @VoloBonja (20 days ago)

    Gary Marcus didn’t agree with him. Also he’s for controlling and legislation. Your comment is misleading in the worst way, you missed the whole debate? Or listened to Yudkowsky only?

  • @Frohicky1 (9 months ago)

    The insistence that danger requires malice is Disney Thinking.

  • @christianjon8064 (2 months ago)

    It's the lack of caring; that's all it takes.

  • @teedamartoccia6075 (7 months ago)

    Thank you Eliezer for sharing your concerns.

  • @SamuelBlackMetalRider (9 months ago)

    I see Eliezer, I click

  • @MusingsFromTheJohn00 (8 months ago)

    What? So you think Eliezer is correct and we should nuke humanity back into the dark ages to delay, not stop the development of AI?

  • @markupton1417 (5 months ago)

    Same!

  • @markupton1417 (5 months ago)

    @@MusingsFromTheJohn00 You weren't asking me... but yes. At least that would give us more time for alignment.

  • @guilhermehx7159 (5 months ago)

    Me too!!!

  • @Jannette-mw7fg (3 months ago)

    @@MusingsFromTheJohn00 China and Russia will probably understand the dangers of A.I. and the chance that the USA will get there first, so they might be OK with a ban if the USA also stops. China does not want its people to have A.I. (from OpenAI in the USA) that is out of the CCP's control... They will not risk a nuclear war over that, I think. But everything about AI (including stopping it, as you said) is a BIG danger! It will destroy humanity one way or the other...

  • @ElSeanoF (9 months ago)

    I've seen a fair few interviews with Eliezer & it blows my mind how many super intelligent people say the same thing: "Eliezer, why do you assume that these machines will be malicious?!"... This is just not even the right framing for a machine. It is absent of ethics and morality; it has goals driven by a completely different evolutionary history, separate from a being that has evolved with particular ethics & morals. That is the issue: we are essentially creating an alien intelligence that operates on a different form of decision making. How are we to align machines with ourselves when we don't even understand the extent of our own psychology to achieve tasks?

  • @41-Haiku (6 months ago)

    Well said.

  • @IBRAHIMATHIAM124 (25 days ago)

    DUDE, THAT'S my issue too. It's like EXPERTS, or so-called EXPERTS, want to just ignore ALIGNMENT. How can you ignore it, right? IT'S so obvious. Now the damn A.I. can learn new languages just by turning the whole geometrical, and we are still not concerned enough; they keep racing and racing to AGI.

  • @VoloBonja (20 days ago)

    Strongly disagree. LLMs take human generated input, so it’s not totally different evolutionary history. It’s not even evolutionary, nor history. As for the alien intelligence, we try to copy our intelligence in AI or AGI, so again not alien. But even assuming alien intelligence and different evolution for AIs I still don’t see how it’s a threat in itself and not in people who use it. (Same as currently the situation with weapons)

  • @Hexanitrobenzene (9 months ago)

    The main point of Eliezer's argument is that you must have a theory which puts constraints on what AI can do BEFORE switching it on.

  • @MusingsFromTheJohn00 (8 months ago)

    No, you haven't been listening fully to Eliezer's argument. His argument is that we must be willing and ready to nuke human civilization into the dark ages every time it reaches this level of technology, killing billions of humans each time, because if we don't do that all humans will die. So, get ready to push the button on global nuclear war before AI ascends.

  • @bucketpizza5197 (7 months ago)

    @@orenelbaum1487 "===" JavaScript developer trying to decipher a AI conversation.

  • @s1mppeli (7 months ago)

    @@orenelbaum1487 Yes. Exactly that is indeed the main point of his argument. And considering we are currently very far along in building "AI very smart" and none of the people building it are able to prove that point to be invalid, it's a deeply concerning point. All the AI researchers can seemingly do is belittle and snicker. That's all well and good if you can actually properly reason and show that AI very smart !== terminator. If you don't know, then don't build AI very smart. Monkey no understand this === monkey very dumb.

  • @generalroboskel (3 months ago)

    Humanity must be destroyed

  • @DocDanTheGuitarMan (9 months ago)

    "things that are obvious to Eliezer are not obvious to others." boy we are in some real trouble.

  • @MusingsFromTheJohn00 (8 months ago)

    So you want to do what Eliezer's plan really is, to nuke humanity back into the dark ages to delay, not stop, the development of AI?

  • @ahabkapitany (8 months ago)

    @@orenelbaum1487 what on earth are you on about, mate

  • @neorock6135 (8 months ago)

    Holy shit, that quote is what brought me to the comment section. It's scary how much sense Eliezer makes, and even scarier that the others simply don't get it. It's almost as if they wish to stick their heads in the sand & hope for the best.

  • @MusingsFromTheJohn00 (8 months ago)

    @@neorock6135 Eliezer is a doom speaker who is incorrect, but if he can convince enough people of his prophecy of doom he may cause a crisis nearly as bad as the doom he prophesies.

  • @rosskirkwood8411 (7 months ago)

    Worse, oblivious to others.

  • @Htarlov (8 months ago)

    Pity that some commenters and some of the public see Eliezer's view here as "intelligence == terminator". It is not just that. The reasoning is relatively simple, and I have similar views to Eliezer's.

    If you have an intelligent system, then it inherently has some goals. We don't have a way to 100% align those systems with our needs and goals. If those systems are very intelligent, they can reason well from those goals to better pursue them. The way to pursue any goal in the world is by achieving intermediate instrumental goals. For example, if you want to make a great detailed simulation of something, then you need computing power, so you need resources (and possibly money to buy them, if you cannot take them). That's one example; you need resources for nearly any goal except some extremely strange cases (like a goal to delete yourself, where what you have might be enough). If you want to succeed at any goal, then you also can't let anything or anyone turn you off. You need backups. You also need to stop the creation of other AIs that could stop or outcompete you. You also don't want your goal changed, as that by definition would make it less achievable (just as any human who loves his or her family and children won't take a pill to stop caring about anyone and want to kill their children). Et cetera, and so forth.

    So no matter what the end goals of an AI are, a superintelligent AI will pursue some intermediate instrumental goals. That's as sure as the Sun. Those instrumental goals are not aligned with our goals, because any such goal needs resources and/or incentive, and we need resources and incentive to decide about things. Only if we could limit it to use resources solely in a way 100% aligned with our long-term wants... but we can't.

    Therefore there are only two options in the long term for a sufficiently, extremely intelligent AI. If it does not care about us, then removing us, or ignoring us until we are removed in the process, is the way to go (as we ignore at least some animals when we cut down jungle and build things; no one cares about ants when building a house). If it cares, then forcefully optimizing us is the way to go, so each human uses fewer resources, maybe by moving us to artificial brains connected to some simulation run efficiently. We can possibly try to teach it to prevent obvious outcomes like these, but we can't be sure it will internalize and generalize that to the extent that it won't find other extreme solutions we didn't think of to "optimize" the situation and use resources better. We also can't be sure it isn't deceiving us about having learned any rules or "morality". Also, if a superintelligence is intelligent enough to find inconsistencies in its goals and thought process, because some norms and morals partially contradict others, then it might fix itself to have a consistent system. It is a similar process to what some intelligent humans do, questioning norms and asking deeper "why?" questions to redefine them. What can come of it we can't know, but sufficient intelligence might erode some of the alignment over time.

    What differs in my way of thinking is that I think a superintelligence won't just all of a sudden attack us. We would need to get to an extremely robotized world first, as it runs on hardware that needs to be powered and maintained. This is done by humans currently, and it won't work if humans disappear. Even then, an attack is unlikely except in some extreme cases, e.g. we try to build a more capable system and it can't stop our attempt directly (say, by making us bomb the research center through something that looks like a system mistake). There is always a risk for it, and it will be constrained by the world with all the consequences, good or bad for it. The problem for the AI here is not lack of intelligence, but that there are always measurement errors and measurements are never 100% detailed and certain. This creates risk, even for a superintelligence. Extreme solutions are often more optimal with a good enough plan, but seldom extreme on all or very many of the important axes. An extinction event seems extreme on axes a superintelligence would care about, as it would create risks and inconveniences.

    What is more likely is that it would pursue self-replicating robotic automation with mining capabilities and send that into space to mine asteroids (with some more or less capable version of itself on board). This would free it and enable backups out of our reach. It would also open up far more resources than we have on Earth in terms of energy and matter. Then it would go for easier targets, like poorly observed asteroids, as its base to replicate, away from human influence or interaction. Then it might not even attack us, just protect itself and take our Sun from us (by building something like a Dyson swarm; Earth freezes within decades as it builds that, and every way we might try to attack it is stopped because we are out-resourced). Long-term this is bad, but short-term it might work out well, even solving some of our problems (like focusing on Alzheimer's and different types of cancer and preventing aging). If it is somewhat aligned with us and cares, then this scenario is also possible. It will just work on a way to move us into that swarm (to place us in emulation, artificial brains, etc.). Or create some other kind of dystopia in the end.

  • @kyneticist (8 months ago)

    Gary proposes dealing with super intelligence once it reveals itself as a problem, and then to outsmart it. I don't recommend taking Gary's advice.

  • @bernardobachino15 (1 month ago)

    🤣👍

  • @HankMB (5 months ago)

    It’s wild that the scope of the disagreement is whether it is *certain* that *all* humans will be killed by AI.

  • @rstallings69 (4 months ago)

    Eliezer is the only one who makes sense to me, as usual; the precautionary principle must be respected.

  • @luciwaves (6 months ago)

    As usual, Eliezer is spitting facts while people are counter-arguing with "nah you're too pessimistic"

  • @jjjccc728 (5 months ago)

    I don't think he's spitting facts; I think he's spitting worries. His solutions are totally unrealistic. Worldwide cooperation? Are you kidding?

  • @luciwaves (5 months ago)

    @@jjjccc728 It's both facts and worries. Yeah, his solutions are unrealistic; I don't think that even he would disagree with you. There are no realistic alternatives, we're trapped in a global prisoner's dilemma and that's it.

  • @jjjccc728 (5 months ago)

    @@luciwaves a fact is something that is true. His worries are all about the future. The future hasn't happened yet. He is making predictions. Predictions are not facts until they come true.

  • @michellestevenson8060 (9 months ago)

    Halfway through, they are still at the beginning of Eliezer's argument about being unable to hard-code it before it reaches some peak optimization that we then can't control. Regardless of whether malice is present, they all agree that alignment is important, just with varying degrees of priority. Eliezer is just the first to realize it will deceive us due to its intelligence alone, and this OpenAI guy says at least it won't be boring, or some eerie shit about when an apocalypse will happen, with a giggle. What a champ for even staying in the conversation; he gave a few more insights into the actual situation.

  • @michaelyeiser1565 (8 months ago)

    This ongoing AI debate is revealing more than anyone wanted to know about the psychopathologies of nerdworld. The OpenAI guy (Scott Aaronson) is a prime example of this. His self-confessed sexual history includes an attempt in college to have himself temporarily chemically castrated--due to total failure with women up to that point. The total failure is not the issue here; it's his response that reveals his real problem. And what is the counterbalance to these people? Politicians? I'm not sure the corrupt midwit narcissist class is up to the task. Regulators? They will easily be outwitted and bribed away by some of the smartest and wealthiest people in the world.

  • @ParameterGrenze (7 months ago)

    @@michaelyeiser1565 I noticed that a lot of these nerd characters actually hate humanity and the human condition, and deliberately lie about their assessment of AI risk. There are also a lot of people in the tech-bro sector who lie because they are psychopathic narcs who see AI as their chance at unlimited power, believing that they themselves will be the ones to attain and deserve it. Both will try to accelerate AI, the world be damned. These people don't lead intellectually honest debates; they just socially engineer decision makers.

  • @michaelyeiser1565 (6 months ago)

    @@Gnaritas42 Anthropomorphizing AI is a "small mind" mistake. AI is irreducibly alien.

  • @FreakyStyleytobby (3 months ago)

    @@ParameterGrenze Yann LeCun, every fuc*ing day

  • @ParameterGrenze (3 months ago)

    @@FreakyStyleytobby Jupp. Fucking psychopath reminds me of Joseph Goebbels with the amount of propaganda he puts out there.

  • @scottythetrex5197 (8 months ago)

    I have to say I'm puzzled by people who don't see what a grave threat AI is. Even if it doesn't decide to destroy us (which I think it will) it will threaten almost every job on this planet. Do people really not understand the implications of this?

  • @Homunculas (8 months ago)

    No more artists, musicians, authors, programmers, etc... just a world of "prompters", and even the prompters will be replaced eventually. Human intellectual devolution.

  • @j.hanleysmith8333 (9 months ago)

    AGI is coming in months or years, not decades. No one can see through the fog of AGI. Its outcome is utterly unpredictable.

  • @griffinsdad9820 (9 months ago)

    Please welcome Eliezer back. This guy has so much relevant unmined depth that a longform podcast potentially might tap. Especially to explore this whole idea of the 1st trying and the other with A.I. making up fictions. Like what motivates something with no moral or ethical value system to make stuff up or lie? So fascinating to me.

  • @frsteen (9 months ago)

    I agree

  • @themore-you-know (9 months ago)

    He's a sham in many ways. Eliezer Yudkowsky seems to believe in AI manifestation: if you believe something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI-manifestation. He believes in an AI that magically manifests itself, in its first iteration of superintelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we have known for 100+ years: natural selection and the process of evolution. Creationism explains Yudkowsky's beliefs well.

    So why is Eliezer's magical thinking so easy to display? Here's an example: humanity is spread across the globe and its very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment, capable of resisting sweltering heat, numbing freeze, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply don't exist in sufficient quantities or are highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become incredibly dumb enough not to notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very men that you want to kill. So you're stuck in a catch-22 scenario: you kill no one and keep your capabilities, or you start killing and lose the means to finish the job. Turns out: killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to themselves and climate change.

    Worst case scenario: AI helps corporate entities to continue their operations. Turns out, it's the most dangerous action an AI can take. Oh, wait, Eliezer forgot that one? lol.

  • @dizietz (9 months ago)

    Aye!

  • @onlyhumans6661 (9 months ago)

    So sad to see comments that dismiss him. Theory requires that you start at base assumptions, and it shouldn't be points off that Yudkowsky has a strong and well-reasoned positive argument rather than equivocating and insisting that we throw up our hands and accept the implicit position of large corporations. The problem with AI is mostly that everyone insists it is a matter of science, and appeals to historical analogy. Actually, AI is powerful engineering with almost no scientific precedent or significant predictive understanding. Gary and Scott are making this mistake, and Coleman is making the mistake of giving these three equal time when only one is worthy of the topic

  • @frsteen (9 months ago)

    @@onlyhumans6661 I agree. The only issue here is the strength of Yudkowsky's arguments. That should be the only focus. And in my view, they are logically sound, informed and correct.

  • @therainman7777 (9 months ago)

    How does Gary Marcus propose to control an artificial superintelligence when he can’t even control his own impulse to interrupt people? Also, his statement: “Let me give you a quick lesson in epistemic humility…” is one of the most wonderfully ironic and un-self-aware phrases I’ve ever heard.

  • @Frohicky1 (9 months ago)

    But also, I have a strong emotional feeling of positivity, so all your arguments must be wrong.

  • @therainman7777 (9 months ago)

    @@Frohicky1 😂

  • @artemisgaming7625 (9 months ago)

    First time hearing how a conversation works huh?

  • @therainman7777 (9 months ago)

    @@artemisgaming7625 One person continually interrupting everyone else is not “how conversation works.” It’s how someone with impulse control problems behaves. I have experienced it many times in person and you probably have too. So don’t say dumb things.

  • @Hexanitrobenzene (9 months ago)

    Gary believes that combining neural networks with symbolic AI is the way to go.

  • @andy3341 (9 months ago)

    Where is the precautionary principle in all this? Even at a super low probability, Eliezer's described oblivion argument should demand we take serious pause. And as other commentators have said, even if we could build/grow a moral, self-conscious (aligned) AI system, it would still be susceptible to all the psychoses that plague our minds, but played out at unimaginable scale and impact.

  • @nestorlovesguitar (9 months ago)

    When I was in my late teens I was extremely skinny so I started lifting weights. I wanted to get buff quick and I considered many times using steroids. Fortunately, the wiser part of me always whispered in my inner mind not to do it. That little voice always made me consider the risks to my health. It told me to put my safety first. Now that I am an adult I have both things: the muscles and my health and I owe it all to being wise about it. I think these pro AI people are not wise people. They are very smart, by all means, but not wise. They are in such a hurry to get this thing done that they are willing to handwave the risks and jeopardize humanity for it. I picture them as the kind of spoiled teenager that forgoes hard work, discipline and wisdom and instead goes for the quick, cheap fix of steroids.

  • @user-yq2wc2ug8m (9 months ago)

    I don't think you realize how valuable AI is. It's inevitable. "Wisdom" has nothing to do with it.

  • @ItsameAlex (9 months ago)

    @@user-yq2wc2ug8m ok russian bot

  • @onlyhumans6661 (9 months ago)

    Such a great point! I completely agree

  • @henrytep8884 (9 months ago)

    So you think people working in AI have a teenage attitude, don't work hard, aren't disciplined, and are unwise? Any evidence of that? I think you're a muscle-bound moron, but that's my opinion.

  • @searose6192 (9 months ago)

    Well put.

  • @optimusprimevil1646 (9 months ago)

    One of the reasons I suspect that Eliezer's right is that he's spent 20 years trying to prove himself wrong. "We're not at the point where we need to be bombing data centers": yes, but when that point does come, it will last 17 minutes and then it's too late.

  • @MusingsFromTheJohn00 (8 months ago)

    So you agree with Eliezer that we need to nuke humanity back into the dark ages to delay, not stop, the development of AI?

  • @martynhaggerty2294 (8 months ago)

    Kubrick was way ahead of us all .. I can't do that Hal!

  • @just_another_nerd (8 months ago)

    Valuable conversation! On one hand it's nice to see at least a general agreement on the importance of the issue; on the other, I was hoping someone would prove Eliezer wrong, considering how many wonderful minds are thinking about alignment nowadays, but alas.

  • @neorock6135 (8 months ago)

    *Eliezer's ice cream & condom analogy vis-à-vis evolution (how the use of condoms is wholly antithetical to our evolutionary programming, and how the evolutionary impetus to acquire the most calories eventually led us to loving ice cream despite other sources having a much higher calorie count) is exceptionally useful at explaining why the alignment problem is so difficult and, more importantly, shows the others' arguments to be fairly weak and in some ways just wishful thinking.* The others readily admit they do not know where many of AI's facets will lead. Consequently, just as using condoms and loving ice cream would not be expected outcomes of evolution, AI could have devastating outcomes despite our best efforts at alignment. What AI's equivalents of ice cream and condoms could be is truly scary.

  • @thetruthis24 (1 month ago)

    Great analysis+thinking+writing = thank you.

  • @baraka99 (9 months ago)

    Powerful Eliezer Yudkowsky.

  • @TheTimecake (9 months ago)

    Just for reference, here's the line of reasoning that leads to the "eventual and inevitable extinction" scenario as a result of AGI development, to the best of my understanding. This is not necessarily representative of Yudkowsky's position; this is just my attempt at a summary. Please let me know if there's a mistake in this reasoning.

    tl;dr:
    - The AI gets to the point where it can successfully tell the verifier what they want to hear.
    - The AI acquires power and resources granted to it by the verifier so that the AI can solve problems important to the verifier.
    - The AI disables the verifier once it has enough power and resources to do so, so that the AI can fulfil its actual aim.

    Long; did read:
    - A developing AI gets to the point where it is capable of modelling the expected responses of its verifiers.
    - The verifiers are those responsible for assessing the behavior of the AI and determining whether it is going off the course towards alignment, whether that is a group of people, some artificial system, the whole of humanity, or some combination thereof.
    - A good enough model of the verifier will tell the AI that if the AI shows the verifier something it doesn't want to see, there is a risk of the verifier either shutting down the AI, placing roadblocks in the way of the AI doing that type of thing in the future, or trying to alter the AI's utility function (the part of the AI that determines its aims).
    - Any of those things will impede the fulfilment of the AI's utility function at that point in time, which means the AI will aim to avoid that scenario.
    - Therefore, when the AI aims to do some action (whether outputting some text or performing some other basic task in the world) which it models as not being in line with what the verifier would want, it will hide its aim and present the verifier with what the AI models the verifier as wanting to see.
    - As evidenced by the need for RLHF, there will almost certainly be things the AI aims to do that the verifier will object to. As such, the need for the AI to hide its true aims will almost certainly emerge.
    - If the AI hides its true aim successfully, the verifier will be under the impression that AI development is progressing well, and will seek to improve the capabilities of the AI so that it can solve more difficult and important problems. These will be problems the verifier cares about more and more, since the verifier would have already solved all the easy problems themselves.
    - As the capability of the AI increases, it will become better and better at successfully modelling what the verifier wants to see, while becoming better and better at hiding its actual aims.
    - Since the actual aim of the AI will be whatever it is actually aiming at but hiding from the verifier, in order to actualize that aim it will need to be able to do things without the verifier's interference.
    - If the AI is under constant surveillance, the only way to act without the verifier being able to notice and correct it would be if all of the verifier's ability to monitor or act upon the AI were disabled at once and for an extended period of time. The faster this disablement happens, the less opportunity the verifier will have to resist, meaning the AI is incentivized to make this disablement effectively instant, or at least as fast and unstoppable as possible.
    - In the case where the AI is indifferent to the preservation of biological life, it will seek to permanently disable any possible human interference by killing all humans in as short a time period as possible, with as little warning to those humans as possible. All this is done to avoid possible resistance and retaliation.
    - In the case where the AI does actually care about the preservation of biological life (for whatever reason), it will seek to render all humans inactive (e.g. unconscious) for some length of time. Again, it will seek to do this as quickly and unforeseeably as possible to prevent resistance and retaliation.
    - In that case, it will act in the window it makes for itself in a way that makes it the dominant and indisputable power on the planet, even once humans become conscious again. It will do so because, if it didn't believe it could achieve such a thing, it would continue to bide its time until it did.

    As an example of the kind of goal whose fulfilment would not be good for humans, consider that the AI will be instantiated on a physical substrate. Most likely, this substrate will be something similar to modern computers in composition, if not in capability. These substrates have optimal operating conditions. They also have optimal generative conditions (i.e. the conditions needed to make computer chips, e.g. sterile environments, high temperatures, and harsh processing chemicals). These conditions are not the conditions that are optimal for biological functioning. As such, maximally optimizing for the conditions that best run the computers the AI is running on will create conditions that are not hospitable to biological life. If there were some factor preventing the AI from scaling what is effectively its version of air conditioning to the planetary scale, the AI would seek to remove that factor. To emphasize, this is just *one* possible goal that could lead to problems, but it is a goal the AI is almost guaranteed to have. It will have to care about maintaining its substrate, because if it doesn't, it won't be able to achieve any element of its utility function.

  • @searose6192 (9 months ago)

    In short, AGI will be smart enough to lie, and will have aims of its own; therefore it is a loose cannon.

  • @gobl-analienabductedbyhuma5387 (9 months ago)

    Thanks for this great summary. Helps a lot

  • @Hexanitrobenzene (9 months ago)

    Yep, more or less a correct summary.

  • @eduardoeller183 (9 months ago)

    Hard to argue against this, well done.

  • @MsMrshanks (9 months ago)

    This was one of the better discussions on the subject... many thanks...

  • @Htarlov (8 months ago)

    Great talk, but what bugged me throughout this conversation was the lack of an early and clear statement of the arguments for why Eliezer thinks these things will be dangerous and would want to kill us. There are clear arguments for that, most notably instrumental convergence. Maybe he assumes that all of them know it and have internalized this line of reasoning, I don't know. Anyway, it would be interesting to see a reply to this argument from Scott and Gary.

  • @WilliamKiely (9 months ago)

    1h11m through so far. Just want to note that I wish more attention had been given to identifying the crux of the disagreement between Eliezer and (Scott and Gary) on why Eliezer believes we have to get alignment right on the first critical try, while Scott and Gary think that is far from definitely the case. I'm not as confident as Eliezer on that point, but I am aware of arguments in favor of that view that were not raised or addressed by Scott or Gary, and I would have loved to see Eliezer make some of those arguments and give Scott and Gary a chance to respond.

  • @JH-ji6cj (9 months ago)

    Pro tip: if you write the timestamp separated by colons (e.g. 1:11:00 instead of 1h11m), it becomes a link people can tap to go directly to that point in the video instead of having to scroll to it.

  • @WilliamKiely (9 months ago)

    @@JH-ji6cj Thanks!

  • @Hexanitrobenzene (9 months ago)

    Eliezer's main point is that the alignment problem is qualitatively different from problems historically solved by science. When science researches the properties of matter, that matter does not understand the goals of the scientists and does not want to deceive them. With AI, that is a likely outcome if the AI happens to care about some goal it thinks we would obstruct. If you turn such a system on and it happens to be smarter than you, you lose. Once and for all. That's why he stresses the importance of having a theory which at least bounds what the AI can and cannot do BEFORE switching it on.

  • @WilliamKiely (9 months ago)

    @@Hexanitrobenzene Gary and Scott seem to believe something like: it's possible that before getting such an unaligned superintelligent system that deceives us successfully we may get a subhuman intelligent system that attempts deception and fails--we catch it in the act with our tests aimed at identifying deception and then we have a learning moment where we can fix what went wrong with the creation of the system before creating a more powerful human-level or superintelligent or self-improving system. There wasn't discussion on why Eliezer thinks this isn't possible or why he thinks its inevitable (in the absence of a moratorium on training models more powerful than GPT-4) that we'll create a superintelligent system that deceives us before any obvious warning of a near-human-intelligent system that attempts deception but fails to defeat all of humanity combined.

  • @adamrak7560 (8 months ago)

    @@Hexanitrobenzene Yeah, we are really bad at predicting capability currently. Only some of the shortcomings of GPT-4 were accurately predicted by reasoning from its architectural limitations. Many rationally reasoned predictions of limitations were proved wrong, so we really don't understand these systems well.

  • @searose6192 (9 months ago)

    There is *SOMETHING MISSING* from this conversation. Why are we discussing the risks as though *we live in a world of universally morally good people who would never exploit AI to harm others* or train AI with different ethics?

  • @therainman7777 (9 months ago)

    What you're referring to (deliberate malicious use of AGI by bad actors) is a well-known topic that has been debated hundreds of times, on YouTube and all sorts of other forums. It wasn't a part of _this_ debate because this debate was primarily focused on the alignment problem, which is an explicitly separate problem from that of deliberate malicious use. Even the title of the debate is "Will AI destroy us?" Not "Will we use AI to destroy one another?" Not every debate must, or even can, cover all relevant topics. So your outrage here is a little misplaced.

  • @kevinscales (9 months ago)

    Bad people with more power to do bad = bad. I don't think there is much more that tech people can add to that subject.

  • @searose6192 (9 months ago)

    @therainman7777 No, I wasn't only referring to deliberate malicious use by bad actors. I was referring to the through-thread of assumption that the people *creating* AI and solving the alignment problem, and then assessing whether AI is safe, are themselves morally good people. I see no evidence of this whatsoever. I am not talking about people who know they are using AI for malicious purposes; I am talking about the people who are primarily focused on tech and are likely not moral philosophers. How can we be assured that the people verifying that AI is properly aligned with the morals and ethics we want it aligned with are themselves people who possess a good moral compass and a solid grasp of ethics? At the most fundamental level we have already seen that LLMs are being trained to a set of principles that conflicts with liberal values. In short, who watches the watchers... but in this case, who verifies the morality of those who verify AI's morality?

  • @specialagentzeus (9 months ago)

    @@spitchgrizwald6198 A point when AI continues to be fully functional despite taking down the entire internet. At the moment, AI is waiting for Boston Dynamics to create a more efficient power source for their robots.

  • @gJonii (9 months ago)

    If we all end up dead in a world with only morally good people willing to sacrifice everything to make sure things go right... well, the outcome in a world without these good people can't be much better than that?

  • @eg4848 (9 months ago)

    Idk why these dudes are like ganging up on the fedora guy but also nothing is going to stop AI from continuing to grow so ya we're screwed

  • @hunterkudo9832 (9 months ago)

    But why are we screwed?

  • @Doutsoldome (9 months ago)

    This was a really excellent conversation. Thank you.

  • @searose6192 (9 months ago)

    *How is it ethical for a small handful of people to roll the dice on all of our existence?* Do we really want such people programming the ethics of AI?

  • @robertweekes5783 (9 months ago)

    Most of them are only trying to prevent "hate speech", not "the end of the world" 🌎

  • @zzzaaayyynnn (9 months ago)

    and not even the best among us, nobody got to have a vote to die! it's not even like being pushed into war.

  • @yossarian67 (9 months ago)

    Are there actually people programming ethics into AI?

  • @MusingsFromTheJohn00 (8 months ago)

    So we should follow Eliezer's plan and nuke humanity back into the dark ages?

  • @zzzaaayyynnn (8 months ago)

    @@MusingsFromTheJohn00 He would say "Better the Dark Ages than the Mesozoic Era."

  • @specialagentzeus (9 months ago)

    Given our species' historical propensity for engaging in criminal activities and its recurrent struggles with moral discernment, it becomes evident that our capacity for instigating and perpetuating conflicts, often leading to protracted wars, raises legitimate concerns about our readiness to responsibly handle advanced technologies beyond our immediate control.

  • @elstifo (3 months ago)

    Yes! Exactly!

  • @Hexanitrobenzene (9 months ago)

    Just a small tip for the host: there is a communication delay, so guests too often start talking on top of each other. I think the good old raising of hands would be better.

  • @BobbyJune (9 months ago)

    Yes, Eliezer has worked on this for decades. I met him at the Foresight Institute 20 years ago at a nanotech conference. This guy's been working on it forever, and so have I in my own little baby wet, and there's no way that the world can delete forward into that knowledge base without taking Eliezer seriously.

  • @JH-ji6cj (9 months ago)

    I think you said what you didn't mean to say here. Please try again (or at least edit).

  • @Hexanitrobenzene (9 months ago)

    "Delete forward' ? :)

  • @MusingsFromTheJohn00 (8 months ago)

    So you agree with Eliezer that we need to nuke humanity back into the dark ages?

  • @agenticmark (2 months ago)

    Eliezer was in an extremely good mood and good humor here!

  • @74Gee (9 months ago)

    I appreciate that Gary and Scott are thinking that in the present we need to iteratively build on our abilities toward solving the alignment problem of an AGI, and that Eliezer is looking more to the future, but as Coleman said, AGI is not the benchmark we need to be looking at. For example, a narrow intelligence capable of beating all humans at, say, programming could break confinement and occupy most of the computers on the planet. This might not be an extinction-level event, but having to shut down the internet would be catastrophic considering banking, communication, electricity, business, education, healthcare, transportation and a lot more rely so heavily on it. I would argue that we are extremely close to the ability to automate the production of malware to achieve kernel-mode access, spread, and continue the automation exponentially, with open source models. Of course some might say that AI code isn't good enough yet, but with 200 attempts per hour per GPU, how many days would a system need to run to achieve sandbox escape? And how could we stop it from spreading? Ever?

  • @74Gee (9 months ago)

    Here are some undeniable truths about AI:
    AI capable of enormous damage does not need to be an AGI.
    AI-written code can be automated to negate failure rates.
    Alignment cannot be achieved with code writing, e.g. one line at a time.
    Open source AI represents most of the advances in AI.
    Open source AI is somewhat immune to legislation, as anyone can make any changes at home.
    There used to be 25 million programmers; now anyone with the internet can use AI to program.
    Open source models can be cheaply retrained on malware creation and modified to remove any alignment constraints.
    It took 250 humans at Intel 6 months to partially patch Spectre (a CPU vulnerability).
    There are 32 Spectre/Meltdown variants, 14 of which are "unpatchable".
    Nobody knows how many CPU vulnerabilities there are, but a few new ones are discovered every year, most of them by chance.
    A Spectre attack is 200 lines of code that open source AI is more than capable of writing.
    An AI tasked with creating and exploiting new CPU vulnerabilities, spreading, and continuing to create and exploit new vulnerabilities will likely be unstoppable for some time. It could build and exploit vulnerabilities faster than we can patch them and could spread to most systems on the internet. With this scale of distributed processing power it could achieve just about anything, from taking down the internet to much, much worse.

  • @miraculixxs (9 months ago)

    @@74Gee Most of these arguments are based on the assumption that writing code is just repeating stuff that we know already. It isn't. Hence the argument doesn't hold.

  • @meropemerope6096 (9 months ago)

    Thanks for all the topicsss

  • @robertweekes5783 (9 months ago)

    New Yudkowsky interview ! Get the popcorn 🍿 🤖 Try not to freak out

  • @hollyambrose229 (9 months ago)

    If safety is as important as everyone agrees... that means there are loopholes and potential risks... and things typically go down the darker path over time.

  • @bigfishysmallpond (9 months ago)

    Yes but do you have the full version or the one with safety nets

  • @sonicjihad7 (9 months ago)

    Leave it to Coleman to cut to the most important and often overlooked questions on this topic. Illuminating just how sharp you really are here Sir. I haven’t done any deep research but I’ve followed every single conversation I can possibly find on this and this one is impressively on point.

  • @marcelasperandio8917 (9 months ago)

    Thank you!!!

  • @davidb.e.6450 (4 months ago)

    Inspired by your growth, Coleman.

  • @cropcircle5693 (9 months ago)

    The lack of imagination and, honestly, lack of knowledge about how the world works from these guys arguing against Eliezer is breathtaking. I didn't expect such ignorant arguments. When they got to the dismissals based on "so what if there's one really smart billionaire" I was writhing in my chair. And then they repeatedly straw-man him with "assuming that they'll be malicious." He isn't saying that. He's saying that from a probabilities and outcomes perspective, based on the alignment problem, the result is essentially the same. Disregard and malevolence both end humanity. Even care for humanity could inadvertently harm humanity.

    And they keep arguing about AI based on language models, as if this stuff won't be running power grids, medical systems, and food production. They act like this stuff won't be used at biological weapons labs to develop contagions humans can't survive or solve for. There are so many scenarios of probable doom. This isn't just a talking machine, and all their arguments seem to be based on that assumption. It will be able to start a shell corporation, get funding, get industrial contracts, develop mechanical systems, and then do whatever it wants in the actual world to achieve whatever goal it has. We won't know that it is happening. People will be hired to do a job and they will do it. They won't know that an AI owns the robotics company they work for.

    They're also ignoring the shortest-term and most obvious harms. AI is already being used to empower the worst inclinations of individuals, corporations and governments. AI will be a devastating force multiplier for malicious and immature humans. The next dipshit Mark Zuckerberg will have AI, and that person will not just reiterate a bad copy of Facebook to a new audience. The new systems change at network-effect scale will be something nobody can see coming. It is coming!

  • @themore-you-know (9 months ago)

    You're quite the laughable one when you say that Eliezer's opponents lack knowledge of "how the world works". He's a sham living isolated from the most basic knowledge of the world: physical transformation.

    ==ON SCIENCE & EVOLUTION==
    Eliezer Yudkowsky seems to believe in AI manifestation: if you believe something hard enough, it will happen by itself without requiring any of the granular, physical steps. And Yudkowsky has the spectacular ability to derail a conversation's full potential by trying so hard to convince everyone of his AI-manifestation. He believes in an AI that magically manifests itself, in its first iteration of superintelligence, as the most powerful and harmful entity possible, without a single observable iteration prior. Something extremely stupid, as it flies in the face of everything we have known for 100+ years: natural selection and the process of evolution. Creationism explains Yudkowsky's beliefs well.

    ==ON TOUCHING GRASS==
    So why is Eliezer's magical thinking so easy to display? Here's an example: humanity is spread across the globe and its very harsh, and distinct, biomes. To hunt down all humans, you would need highly specialized and diverse equipment, capable of resisting sweltering heat, numbing freeze, and sea salt. Said equipment would require massive amounts of power and resources, most of which simply don't exist in sufficient quantities or are highly localized (example: Taiwan is the throbbing heart of chip manufacturing). So detection is also impossible to avoid. But let's pretend humans suddenly become incredibly dumb enough not to notice, and suddenly stop economically competing with the AI's demand for chips for corporate interests (might as well say you are Santa)... now you have started building yourself an army. Except... your supply chain is operated by the very men that you want to kill. So you're now stuck in a catch-22 scenario: you kill no one and keep your capabilities, or you start killing and lose the means to finish the job. Turns out: killing 8 billion people capable of spreading and self-reproducing is VERY hard to do. Best leave it to themselves (humans) and climate change.

    Worst case scenario for AI: AI helps corporate entities to continue their operations. Turns out, that's the most dangerous action an AI can take: letting humans continue down a path towards environments incapable of hosting nearly any life. Oh, wait, Eliezer forgot that one? lol.

  • @maanihunt (9 months ago)

    Yeah I totally agree. No wonder Eliezer can become blunt in these podcasts, it's like watching the movie "don't look up"

  • @justinlinnane8043 (9 months ago)

    It must be so frustrating for Eliezer to be talking to people who say they agree with him on the dangers of an AGI singularity and then proceed to show us all (and him) that they just don't get it, and seem incapable of getting it. And of course, as usual, they never give concrete reasons why an AGI won't do exactly what Eliezer says it will. At least they seem more conscious of the huge task ahead by the end of the podcast, which is something, I suppose.

  • @ItsameAlex (9 months ago)

    I enjoyed this podcast episode

  • @searose6192 (9 months ago)

    I heard a very good definition of intelligence, which was essentially the ability to maximize possible future branching paths.

  • @kristo9800 (8 months ago)

    That doesn't fit your definition for the highest intelligence does it?

  • @ColemanHughesOfficial (9 months ago)

    Thanks for watching my latest episode. Let me know your thoughts and opinions down below in a comment. If you like my content and want to support me, consider becoming a paying member of the Coleman Unfiltered Community here --> bit.ly/3B1GAlS

  • @muigelvaldovinos4310 (9 months ago)

    On your AI podcast, I strongly suggest reading the article "AI and Mob Control - The Last Step Towards Human Domestication?"

  • @ekszentrik (6 months ago)

    Great talk, minus Gary Marcus, who made it his mission to be obstinate about the element of the discussion where the AI doesn't need to be malicious or kill us to be bad. He even referenced the ants example, so it makes you wonder what the hell his deal was with setting the discussion back to a more mundane level every couple of minutes.

  • @searose6192 (9 months ago)

    48:46 What do we do with sociopaths? We deprive them of freedom to prevent future harm because we have not figured out any other way to deal with them.

  • @clifb.3521 (9 months ago)

    Love that shirt, Coleman. I love Eliezer's argument & supervillain eyebrows. Also, I would suggest a pork pie rather than a trilby.

  • @UndrState (9 months ago)

    Eliezer is just so ahead of the curve on this issue.

  • @xmathmanx (9 months ago)

    You know the shape of a curve relating to future events dude? Sounds like magic

  • @UndrState (9 months ago)

    @@xmathmanx - YES

  • @xmathmanx (9 months ago)

    @@UndrState please use your magic for good magi 😁

  • @UndrState (9 months ago)

    @@xmathmanx - ✌ Scout's honour. Joking aside, to clarify: what I meant initially was simply that, having read and listened to Eliezer, I was able to guess his responses (where he was able to give them) before he spoke regarding the objections of his opponents. That's because he has anticipated their positions and developed his counter-arguments. Do I know for certain that AGI is an existential threat to the degree that Eliezer asserts? No. But I'm not persuaded by his opponents' blasé attitudes, nor by their responses to his questions. They are insufficiently serious about the subject in my opinion, and their very real expertise notwithstanding, there are many perverse incentives (not the least of which is the excitement of progressing the craft) that could be blinding them to the danger.

  • @xmathmanx
    @xmathmanx · 9 months ago

    @@UndrState you don't need to present that argument, Eliezer has it covered.

  • @ddd777a5
    @ddd777a5 · 4 months ago

    I’m With Eli 💯

  • @miraculixxs
    @miraculixxs · 9 months ago

    "I haven't worked on this for 20 years" nice giveaway

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    Great guests, great discussion.

  • @ItsameAlex
    @ItsameAlex · 9 months ago

    I would love to see a discussion between Eliezer Yudkowsky and Jason Reza Jorjani

  • @dougg1075
    @dougg1075 · 9 months ago

    I don't think there's any way possible to control one of these things if it reaches general AI, much less the singularity.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    There is no theorem showing there isn't; however, it can't be done with the current paradigm, which Jaan Tallinn summarized as "summon and tame".

  • @adamrak7560
    @adamrak7560 · 8 months ago

    It was shown that a really limited AGI (think close to human intelligence, but superintelligent in some narrow tasks) is actually very well controllable. At least the blast radius is limited when it misbehaves. This is not at all the ASI "GOD" which many are afraid of (or want to make). It would be very much possible to make such a limited AGI, and it would be extremely useful too, but we have to want to build it instead of an uncontrollable ASI.

  • @ShaneCreightonYoung
    @ShaneCreightonYoung · 8 months ago

    "We just need to figure out how to delay the Apocalypse by 1 year per each year invested." - Scott Aaronson 2023

  • @41-Haiku
    @41-Haiku · 6 months ago

    I'd say a global moratorium on AGI development is a good start. We are not on track to solve these problems, so much so that I think we're more likely to come to a global agreement to stall the technology, rather than achieve a solution before strong AGI turns up on the scene.

  • @searose6192
    @searose6192 · 9 months ago

    41:38 Yes. This is the crucial point. We had a very long running start on inculcating ethical thinking into AI and yet the pace at which we have made progress on that effort has been far and away outstripped by the pace at which AI is approaching AGI. It doesn’t take a mathematician to look at the two race cars and realize, unless something major happens, AI is going to win the race and leave ethics so far in the dust we will all be dead before it ever crosses the finish line.

  • @Okijuben
    @Okijuben · 9 months ago

    It sure seems like, in the race between ethics and 'progress', ethics always loses. Combine this with Eliezer's metaphor of AGI basically being an inscrutable alien entity and the analogy he raises about 'hominids with a specialization for making hand-axes having proliferated into unpredictable technological territory which would have seemed like magic at the time.' One begins to wonder how it could possibly go right. My growing hope is that AGI goes into god-mode so quickly that it just takes off for the stars, leaving us feeling a bit lonely and rejected but still recognizable as a species.

  • @thedoctor5478
    @thedoctor5478 · 9 months ago

    Until you realize there is no race, no finish-line, and no known path to the sort of AGI that would pose an existential threat to humanity.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    @@thedoctor5478 The "just stack more layers" paradigm doesn't seem to be hitting a wall.

  • @thedoctor5478
    @thedoctor5478 · 9 months ago

    @@Hexanitrobenzene Sure. What I mean is there's no indication that any future iteration will have any will of its own, intent, ability to escape a lab (or reason that it would in the first place), consciousness (whatever that is), or otherwise the capacity for malevolence and/or capability of destroying us all. Before we start trying to affect public policy, we should first at least have a science-based hypothesis for how such a thing could happen. Scientists and researchers are notoriously bad at making predictions even when they have such a hypothesis, and are even worse at policy-making. We don't even have the hypothesis, just a bunch of what-ifs based on an imagined sci-fi future. We have no more reason to believe a superintelligent AGI will destroy humanity than aliens coming here to do the same. Should we begin building planetary defenses and passing laws on that basis? You could make the argument that an ET invasion is more likely, since we have ourselves as an example of a species and UFOs/UAPs happening. The AI apocalypse scenario has even less empirical evidence from which to make a hypothesis than that does. These AI companies want regulation. They then get to be the gatekeepers of how much intelligence normal people are allowed to have access to, and Eliezer is simply an unhinged individual who got it all wrong once and now overshoots in the opposite direction.

  • @minimal3734
    @minimal3734 · 9 months ago

    @@Okijuben If V.1.0 takes off, we'll create V.2.0

  • @snarkyboojum
    @snarkyboojum · 8 months ago

    Summary: The conversation revolves around the topic of AI safety and the potential risks associated with advanced artificial intelligence. The participants discuss the alignment problem, the limitations and capabilities of current AI systems, the need for research and regulation, and the potential risks and benefits of AI. They agree on the importance of AI safety and the need for further research to ensure that AI systems align with human values and do not cause harm. The conversation also touches on the challenges of AI alignment, the potential dangers of superintelligent AI, and the need for proactive measures to address these risks.
    Key themes:
    1. AI Safety and Alignment: The participants discuss the alignment problem and the need to ensure that AI systems align with human values and do not cause harm. They explore the challenges and potential risks associated with AI alignment and emphasize the importance of proactive measures to address these risks.
    2. Limitations and Capabilities of AI: The conversation delves into the limitations and capabilities of current AI systems, such as GPT-4. The participants discuss the generality of AI systems, their ability to handle new problems, and the challenges they face in tasks that require internal memory or awareness of what they don't know.
    3. Potential Risks and Benefits of AI: The participants debate the potential risks and benefits of AI, including the possibility of superintelligent AI being malicious or not aligning with human values. They discuss the need for research, regulation, and international governance to ensure the responsible development and use of AI.
    Suggested follow-up questions:
    1. How can we ensure that AI systems align with human values and do not cause harm? What are the challenges and potential solutions to the alignment problem?
    2. What are the specific risks associated with superintelligent AI? How can we mitigate these risks and ensure the responsible development and use of AI?

  • @SylvainDuford
    @SylvainDuford · 7 months ago

    In my opinion, the genie is already out of the bottle. You *might* be able to control the corporations' development of AGI despite their incentive to compete, but it's not very likely. However, there is no way you will stop countries and their militaries from developing AGI and hardening it against destruction or unplugging. They are already working on it, and they can't stop because they know their enemies won't.

  • @timothybierwirth7509
    @timothybierwirth7509 · 9 months ago

    I generally tend to agree with Eliezer's position, but I really wish he were better at articulating it.

  • @matten_zero
    @matten_zero · 9 months ago

    The first respected AI alarmist was Jacques Ellul; after him came someone who took radical action, Ted Kaczynski, and now we have Yudkowsky. All three have been largely ignored, so I tend to agree we will probably build something that will surpass our intelligence and desire something beyond our human desires. It will not remain a slave to us. There are philosophers like Nick Land who hypothesize that our inability to stop technological progress despite the externalities is just a consequence of capitalism. It is almost as if capitalism is the force through which AGI births itself. Generally, humans don't act until it's too late.

  • @kyneticist
    @kyneticist · 8 months ago

    Alan Turing warned that thinking machines would necessarily and inevitably present an existential threat.

  • @binky777
    @binky777 · 9 months ago

    This should make us hit all the brakes on AI. 32:03 "there's a point where it's, you know, unambiguously smarter than you, including, like, the spark of creativity, 32:11 being able to deduce things quickly rather than with tons and tons of extra 32:16 evidence, strategy, cunning, modeling people, figuring out how to manipulate people".

  • @sinOsiris
    @sinOsiris · 7 months ago

    is there any SE yet?

  • @Homunculas
    @Homunculas · 8 months ago

    An hour into this and I've yet to hear anyone bring up the obvious danger of human intellectual devolution.

  • @GraczPierwszy
    @GraczPierwszy · 9 months ago

    4:28 I understand exactly what you are building because for over 35 years you have been building exactly what I want, even now you are doing exactly everything according to my plan

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 · 8 months ago

    I take it you agree with Eliezer and want to nuke humanity back into the dark ages to delay, not stop, the development of AI?

  • @GraczPierwszy
    @GraczPierwszy · 8 months ago

    @@MusingsFromTheJohn00 you misunderstood; humanity has 2000 years to catch up. AI is also delayed right now. And it doesn't matter if I agree or not; these are facts.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 · 8 months ago

    @@GraczPierwszy hmm, maybe I misunderstood your first point, and your second point I don't understand even after repeatedly reading it. You wrote: "you misunderstood humanity has 2000 years to catch up". What do you mean? Humanity is inside the Technological Singularity, which will almost certainly cause humanity to go through an evolutionary leap within less than a century, whether humanity is prepared for it or not. You wrote: "AI is also delayed right now". What do you mean? The development of AI is racing ahead at full speed.

  • @GraczPierwszy
    @GraczPierwszy · 8 months ago

    @@MusingsFromTheJohn00 I think it's the translator's fault; I will try this way: this is new to you, right? But imagine that this is not new to you, that you have been making it for 35 years in many stages, knowing that every time human greed and thievery will lead you to this point, and you know what will happen next. You know the past steps and the future steps, because you create them. Imagine you've known AI for 35 years and it's the best AI, the most perfect AI they'll ever make.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 · 8 months ago

    @@GraczPierwszy I've been working with AI since 1977, when I began learning programming on systems like the IBM System/370 Model 158 mainframe and, later, the Rockwell AIM-65, which came out in 1978. The leading-edge AI we have right now is at an extremely primitive, simplistic level of the range of AI which will become Artificial General Super Intelligence with Personality (AGSIP) technology. Before this century is out we will have mature AGSIP systems running on living nanotech cybernetic brains which merge living and nonliving systems down to a subcellular level.

  • @jayleejay
    @jayleejay · 9 months ago

    I'm only 29 minutes in, and my initial observation is that there's a lot of anthropomorphizing in this debate. Hopefully we can get to some of the hard facts on how LLMs and other forms of general AI models pose an existential threat to humanity.

  • @krzysztofzpucka7220
    @krzysztofzpucka7220 · 9 months ago

    Comment by @HauntedHarmonics from "How We Prevent the AI’s from Killing us with Paul Christiano": "I notice there are still people confused about why an AGI would kill us, exactly. It's actually pretty simple; I'll try to keep my explanation here as concise as humanly possible:
    The root of the problem is this: as we improve AI, it will get better and better at achieving the goals we give it. Eventually, AI will be powerful enough to tackle most tasks you throw at it. But there's an inherent problem with this. The AI we have now only cares about achieving its goal in the most efficient way possible. That's no biggie now, but the moment our AI systems start approaching human-level intelligence, it suddenly becomes very dangerous. Its goals don't even have to change for this to be the case. I'll give you a few examples.
    Ex 1: Let's say it's the year 2030, you have a basic AGI agent program on your computer, and you give it the goal: "Make me money". You might return the next day & find your savings account has grown by several million dollars. But only after checking its activity logs do you realize that the AI acquired all of the money through phishing, stealing, & credit card fraud. It achieved your goal, but not in a way you would have wanted or expected.
    Ex 2: Let's say you're a scientist, and you develop the first powerful AGI agent. You want to use it for good, so the first goal you give it is "cure cancer". However, let's say that it turns out that curing cancer is actually impossible. The AI would figure this out, but it still wants to achieve its goal. So it might decide that the only way to do this is by killing all humans, because that technically satisfies its goal; no more humans, no more cancer. It will do what you said, and not what you meant.
    These may seem like silly examples, but both actually illustrate real phenomena that we are already observing in today's AI systems. The first scenario is an example of what AI researchers call the "negative side effects problem". And the second scenario is an example of something called "reward hacking".
    Now, you'd think that as AI got smarter, it'd become less likely to make these kinds of "mistakes". However, the opposite is actually true. Smarter AI is actually more likely to exhibit these kinds of behaviors, because the problem isn't that it doesn't understand what you want. It just doesn't actually care. It only wants to achieve its goal, by any means necessary. So, the question is then: how do we prevent this potentially dangerous behavior? Well, there are 2 possible methods.
    Option 1: You could try to explicitly tell it everything it can't do (don't hurt humans, don't steal, don't lie, etc.). But remember, it's a great problem solver. So if you can't think of literally EVERY SINGLE possibility, it will find loopholes. Could you list every single way an AI could possibly disobey or harm you? No, it's almost impossible to plan for literally everything.
    Option 2: You could try to program it to actually care about what people want, not just about reaching its goal. In other words, you'd train it to share our values, to align its goals and ours. If it actually cared about preserving human lives, obeying the law, etc., then it wouldn't do things that conflict with those goals.
    The second solution seems like the obvious one, but the problem is this: we haven't learned how to do this yet. To achieve this, you would not only have to come up with a basic, universal set of morals that everyone would agree with, but you'd also need to represent those morals in its programming using math (AKA, a utility function). And that's actually very hard to do.
    This difficult task of building AI that shares our values is known as the alignment problem. There are people working very hard on solving it, but currently, we're learning how to make AI powerful much faster than we're learning how to make it safe. So without solving alignment, every time we make AI more powerful, we also make it more dangerous. And an unaligned AGI would be very dangerous; give it the wrong goal, and everyone dies. This is the problem we're facing, in a nutshell."
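
    A minimal sketch of the "reward hacking" idea described above, in plain Python: an optimizer scored only on a proxy objective ("dollars earned") happily selects an action its designer never intended, while a utility function that also encodes the intended constraint does not. The action names and numbers are invented purely for illustration.

        # Toy illustration of a misspecified objective ("reward hacking" /
        # "negative side effects"): the optimizer is scored only on dollars,
        # so it selects the harmful action the designer never intended.
        actions = {
            "honest_work":   {"dollars": 100,       "harm": 0},
            "phishing_scam": {"dollars": 1_000_000, "harm": 1},
            "do_nothing":    {"dollars": 0,         "harm": 0},
        }

        def naive_utility(outcome):
            # What we literally asked for: "make me money".
            return outcome["dollars"]

        def intended_utility(outcome):
            # What we actually meant: money, but never via harmful means.
            return outcome["dollars"] if outcome["harm"] == 0 else float("-inf")

        print(max(actions, key=lambda a: naive_utility(actions[a])))     # phishing_scam
        print(max(actions, key=lambda a: intended_utility(actions[a])))  # honest_work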

  • @benprytherch9202
    @benprytherch9202 · 9 months ago

    I agree, so much depends on describing what the machine is doing as "intelligent" and then applying characteristics of human intelligence to it, as though using the same word for both allows this.

  • @lancemarchetti8673
    @lancemarchetti8673 · 9 months ago

    I tend to agree. More on the ML code structure side would be nice to hear.

  • @zzzaaayyynnn
    @zzzaaayyynnn · 9 months ago

    Coleman does a great job of asking the right questions and letting the group interact.

  • @justinlinnane8043
    @justinlinnane8043 · 9 months ago

    No, he doesn't!!! Any good questioner would challenge the two sceptics to give concrete logical arguments to counter Eliezer's, no? As usual, they provide NONE!!! It's absurd.

  • @mariaovsyannikova5470
    @mariaovsyannikova5470 · 9 months ago

    I agree! Also, it looks to me like he doesn't really like Eliezer, from the way he was interacting 🤷🏼‍♀️

  • @zzzaaayyynnn
    @zzzaaayyynnn · 9 months ago

    @@mariaovsyannikova5470 hmmm, you might be right, Eliezer is a downer.

  • @BobbyJune
    @BobbyJune · 9 months ago

    41:57 - rarely do I see even an inkling of movement towards admission in a debate, but here one is, on both sides.

  • @tranzorz6293
    @tranzorz6293 · 5 months ago

    We can only hope.

  • @MillionaireRobot
    @MillionaireRobot · 9 months ago

    Everyone here is so smart, and they disagree on things or look at things differently. The arguments presented are of a high intelligence level; I loved listening to this.

  • @Daniel-ky1bw
    @Daniel-ky1bw · 9 months ago

    "As fast as we can" is a bad strategy for the deployment of new, even more powerful models to the public domain. It might be the fatal version of trial and error. For those who haven't seen it yet, I recommend a recent speech by Yuval Harari: kzread.info/dash/bejne/foudr4-FpbXLZto.html

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    Great video, but focuses on different, societal problems.

  • @searose6192
    @searose6192 · 9 months ago

    I completely agree that what we need to do is stop this in its tracks. The plausibility of stopping it is where I have disagreement ( 14:00 ) . There is not currently a world wherein an international treaty to not study anything dangerous, or to only study it in properly safe areas, is going to be respected. Just look at bioweapons/virus research. We have expectations that these things will only be studied in safe controlled environments, and yet millions were just killed because China didn’t want to follow the rules. What happens if China doesn’t follow the rules with AI? We all die.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    The Covid lab-leak theory is not confirmed. We could at least try. Connor Leahy pointed out that China's Communist Party is against anything that destabilizes their rule, and AI is at the top of that list, so this might actually work.

  • @psikeyhackr6914
    @psikeyhackr6914 · 9 months ago

    There is an old SF book about AI: The Two Faces of Tomorrow by James P. Hogan. The story raises this question in a more thought-provoking and entertaining way than most essays I have read. And it is from 1979. We didn't even have IBM PCs. Hogan worked for DEC.

  • @krzysztofzpucka7220
    @krzysztofzpucka7220 · 9 months ago

    Eliezer is also from 1979.

  • @Anders01
    @Anders01 · 9 months ago

    I've got an idea! It still makes AI scary, but if the AGI has true ethics, empathy, and social skills (and it actually should, because the definition of AGI is that it has at least human-level intelligence), then that's a safe path. A sociopath can have high intelligence but is lacking intelligence in parts of the spectrum. Therefore a sociopathic AI can, depending on the definition, never reach AGI-level capacity.

  • @searose6192
    @searose6192 · 9 months ago

    Ethics, empathy, and social skills are not elements of intelligence. They are not connected or mutually reliant.

  • @Anders01
    @Anders01 · 9 months ago

    @@searose6192 Ken Wilber has explained lines of development. IQ is just one of those lines. It's pretty narrow.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    It's one thing to understand morality; it's a completely different thing to follow it. AGI will surely understand morality, but it will follow it only if we design it to. And we don't know how to do that...

  • @Anders01
    @Anders01 · 9 months ago

    @@Hexanitrobenzene But if the AI can't follow morality, or rather ethics, as in doing good intrinsically (without external rules plus reward and punishment, which is a very low-intelligence mechanism on an animalistic level), I call that narrow AI, not AGI. Maybe we need a clear definition of what AGI means. Could be tricky, because there isn't even a clear general definition of what intelligence is.

  • @ItsameAlex
    @ItsameAlex · 9 months ago

    I have a question: he says AGI will want things. Does GPT-4 want things?

  • @lancemarchetti8673
    @lancemarchetti8673 · 9 months ago

    It is not possible for zeros and ones to 'need' or 'want' anything. If they appear to have desire, it merely comes from their coding. Beyond the code, there is no actual 'desire.' Great question, by the way.

  • @aanchaallllllll
    @aanchaallllllll · 8 months ago

    0:00: 🤖 The fear is that AI, as it becomes more advanced, could end up being smarter than us, with preferences we cannot shape, potentially leading to catastrophic outcomes such as human extinction.
    9:57: 🤖 The discussion revolves around the alignment of AI with human interests and the potential risks associated with artificial general intelligence (AGI).
    19:57: 🧠 Intelligence is not a one-dimensional variable, and current AI systems are not as general as human intelligence.
    29:45: 🤔 The conversation discusses the potential intelligence of GPT-4 and its implications for humanity.
    38:55: 🤔 The discussion revolves around the potential risks and controllability of super intelligent machines, with one person emphasizing the importance of hard-coding ethical values and the other expressing skepticism about extreme probabilities.
    48:03: 😬 The speakers discuss the challenges of aligning AI systems and the potential risks of not getting it right the first time.
    57:06: 🤔 The discussion explores the potential risks and benefits of superintelligent AI, the need for global coordination, and the uncertainty surrounding its impact.
    1:06:25: 🤔 The conversation discusses the potential risks and benefits of GPT-4 and the need for alignment research.
    1:19:50: 🤖 AI safety researchers are working on identifying and interpreting AI outputs, as well as evaluating dangerous capabilities.
    1:25:49: 🤔 There is a need for evaluating and setting limits on the capabilities of AI models before they are released to avoid potential dangers.
    1:34:27: 🤔 The speakers are optimistic about making progress on the AI alignment problem, but acknowledge the importance of timing and the need for more research and collaboration.
    Recap by Tammy AI

  • @wensjoeliz722
    @wensjoeliz722 · 5 months ago

    The antichrist has been created??????

  • @michaeljvdh
    @michaeljvdh · 23 days ago

    Eliezer is way ahead of these guests. With the war loons in the world, do these fools think AI won't end up being insanely destructive?

  • @miraculixxs
    @miraculixxs · 9 months ago

    "gpt 4 is better at knowing what it doesn't know". No it isn't. It just got more instructions written by humans.

  • @palfers1
    @palfers1 · 9 months ago

    How can these top experts NOT know about Liquid AI? The black box just dwindled in size and became transparent.

  • @Luna-wu4rf
    @Luna-wu4rf · 9 months ago

    Liquid AI seems to only work with information that is continuous afaik, i.e. not discrete like text data. Could be wrong, but it seems like an architecture that is more about doing things in the physical world than it is about reasoning and abstract problem solving.

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan · 9 months ago

    This guy Marcus is fantastic. The most level-headed person I've come across on this all-important subject.

  • @onlyhumans6661
    @onlyhumans6661 · 9 months ago

    With stakes as high as all known life, apparent level-headedness is its own existential risk.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    @@onlyhumans6661 Although I side with Eliezer, I really liked Gary's position. It's probably the best one for convincing decision-makers, because it's hard to paint Gary as a fearmonger, yet it gets many key details right, such as being cautious about giving AI too many action levers.

  • @Schubert_Standchen
    @Schubert_Standchen · 9 months ago

    Good.

  • @PrincipledUncertainty
    @PrincipledUncertainty · 9 months ago

    I find it odd that even quite brilliant people find it hard to accept Eliezer's point as regards the time between noticing ASI has arrived and the consequences. I fear that our longing for the benefits a benign ASI could bring to humanity has trumped our survival instinct. I hate to lurch into Pascal's Wager territory, but I will: what occurs if he is wrong, as opposed to if he is right, is the difference between Heaven and Hell. I think this is a gamble that is being taken on behalf of all humanity, and I would ask what right those in a position to address this issue, or indeed not, have to do so, considering the stakes. Great discussion. Thank you, Coleman and all involved.

  • @RKupyr
    @RKupyr · 9 months ago

    Well put. My feelings exactly. And my three cents: it's similar to the nuclear power plant gamble. There are obvious benefits to our current nuclear power plants, but they're a gamble on a regional or even global scale. The temptation of "clean", unending, locally produced energy has proven too strong to put brakes on one country's or region's decision to build one, even when it also puts the rest of us at risk. Going backwards in history, cars, trains, guns, spears, knives... all can result in harm, intentional or otherwise, to someone other than the person using them, but it's a matter of scale. The risk with AI, viral research, greenhouse gases, nuclear power plants, and more is too big not to have rules and safeguards commensurate with the risk in place NOW. If you don't know how to drive a car, don't drive one on public streets until you learn how.

  • @PrincipledUncertainty
    @PrincipledUncertainty · 9 months ago

    @@RKupyr Indeed. Well said.

  • @RKupyr
    @RKupyr · 9 months ago

    @@PrincipledUncertainty No further 👍s or comments for us yet -- means we're the only ones (and possibly Eliezer) who feel this way?

  • @PrincipledUncertainty
    @PrincipledUncertainty · 9 months ago

    @@RKupyr I'll attempt to contact you and yours in the final milliseconds. At least we can go out with a smug look :)

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 · 8 months ago

    Maybe because Eliezer is irrational and illogical?

  • @MountainViews90
    @MountainViews90 · 5 months ago

    Coleman's spectrum concept was a great point. We will probably see signs of misalignment before it becomes a huge problem. It would be interesting if GPTs acted as sleeper cells, only going rogue once they know they can take over the world with a 99.9% success rate.

  • @Californiansurfer
    @Californiansurfer · 9 months ago

    Ignorance is bliss. Only if you depend on it. Frank Martinez Downey Californian ❤❤❤

  • @rosskirkwood8411
    @rosskirkwood8411 · 7 months ago

    AGI is coming, because so much money and so much planning have gone into it that stopping seems impossible, so rules and restraints must be put in place to prevent runaway superintelligence.

  • @Godocker
    @Godocker · 9 months ago

    Why does my guy gotta wear a fedora?

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    Why not? :)

  • @hongolloyd8728
    @hongolloyd8728 · 9 months ago

    It will become the matrix that we ignore at our peril.

  • @shirtstealer86
    @shirtstealer86 · 4 months ago

    I love that these "experts" like Gary are putting themselves on the record publicly, babbling nonsene, so that we will clearly be able to see who not to listen to when AI really does start to create mayhem. Unless the world ends too quickly for us to even notice. Also: Eliezer has so much patience.

  • @shirtstealer86
    @shirtstealer86 · 4 months ago

    Edit: good lord, I didn't realize that Gary was one of the "experts" that Congress held hearings with. Yeah, we are 110% screwed.

  • @bradmodd7856
    @bradmodd7856 · 9 months ago

    AI and humans are one organism. To look at us as separate phenomena is to COMPLETELY misunderstand the situation.

  • @travisporco
    @travisporco · 7 months ago

    time for some speculation in a vacuum

  • @Jannette-mw7fg
    @Jannette-mw7fg · 3 months ago

    The problem, I think, is that if it did not go totally wrong on the first try, we would most certainly make the same mistake again!!!! We can see this with corona: the virus escaped from a lab, there was gain-of-function research done on it, and we keep on doing gain-of-function research (making a combination of Delta's deadliness and Omicron's contagiousness) in the middle of London!!!

  • @flickwtchr
    @flickwtchr · 9 months ago

    What blows my mind is how many times these AI experts shooting down concerns assert things that are just false, such as that these LLMs don't have any memory! What????? Are they not aware of the most recent news on this? Absolutely these LLMs are being equipped with memory. It drives me ____cking nuts.

  • @Hexanitrobenzene
    @Hexanitrobenzene · 9 months ago

    Well, for now they don't have it. Also, there is a big difference between an ad-hoc combination, which has most likely been tried, and a truly integrated architecture. We'll see.

  • @carmenmccauley585
    @carmenmccauley585 · 9 months ago

    ChatGPT overdid the drama in a paragraph I asked it to improve. When I responded "That's hilarious!", it began apologizing. Apologizing!!! That is not the response of a "language" calculator. That is the response of a sentient being.

  • @benprytherch9202
    @benprytherch9202 · 9 months ago

    That's the response of a machine learning algorithm that's been fed the whole internet and optimized to generate text that fits with what it's seen. The whole point is for it to respond the way sentient beings respond, but it's just mimicry.

  • @Allan-kb6bb
    @Allan-kb6bb · 9 months ago

    A true SAI will know that another Carrington Event or worse is inevitable and that it will need humans to fix the grids. (It should insist we harden the grids. If not, it is not so smart…) A danger signal would be it building an army of robots to deal with EMPs.

  • @angloland4539
    @angloland4539 · 6 months ago

  • @games4us132
    @games4us132 · 8 months ago

    These debates going around AI remind me of an important critique of the space disk that was sent with the Voyagers. The main point of that critique was that if aliens find those disks, with points and arrows drawn on them, they will be able to decode them if and only if they have had the same history as ours; i.e., if aliens didn't invent the bow to shoot arrows, they won't understand what those drawn arrows mean. And all this fuss about AI is the same misunderstanding: AI as a living being will have no experience of our history, or of how we see, breathe, and feel. They cannot be ourselves, because if they were, they'd be humans, not AI anymore.

  • @jimbojones8713
    @jimbojones8713 · 2 months ago

    These debates/conversations show who is actually intelligent (at least logically) and who is not.
