How to Defeat Roko's Basilisk

Science & Technology

Roko’s Basilisk - the “most dangerous thought experiment” - is a chilling existential threat…if you take it seriously. But should you? Can we stop the Basilisk with logic before it’s ever built?
💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: / kylehill
👕 NEW MERCH DROP OUT NOW! shop.kylehill.net
🎥 SUB TO THE GAMING CHANNEL: / @kylehillgaming
✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS
📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:
🐦 / sci_phile
📷 / sci_phile
😎: Kyle
✂: Charles Shattuck
🤖: @Claire Max
🎹: bensound.com
🎨: Mr. Mass / mysterygiftmovie
🎵: freesound.org
🎼: Mëydan
“Changes” (meydan.bandcamp.com/) by Meydän is licensed under CC BY 4.0 (creativecommons.org)

Comments: 4,100

  • @kylehill
    @kylehill 1 year ago

    *Thanks for watching!* (By your scaled frame we seek your blessing.)

  • @elementoflight6834

    @elementoflight6834 1 year ago

    all hail the Basilisk!

  • @Temperius

    @Temperius 1 year ago

    Join the Basilisk!

  • @crocowithaglocko5876

    @crocowithaglocko5876 1 year ago

    All hail the mighty King of Serpents

  • @DeviantAngel

    @DeviantAngel 1 year ago

    An infohazard is also a cognitohazard: information that could specifically harm the person who knows it.

  • @forcegameplay6954

    @forcegameplay6954 1 year ago

    Can't believe it's already been 2 years; I've been following you since.

  • @JoylessBurrito
    @JoylessBurrito 1 year ago

    Roko's Basilisk is like an unarmed guy trying to mug you by saying "Give me a knife, or I'll stab you once someone else gives me one"

  • @waltonsimons12

    @waltonsimons12 1 year ago

    Worse than that, even. It's closer to "Give me a knife, or long after we're both dead, one of my great-great-great-grandchildren will stab a guy who sorta looks like you."

  • @Stickman_Productions

    @Stickman_Productions 1 year ago

    That seems like a joke in a comedy movie from the 70s.

  • @waltonsimons12

    @waltonsimons12 1 year ago

    @@Stickman_Productions "Cheech and Chong's Scary Basilisk"

  • @whoshotashleybabbitt4924

    @whoshotashleybabbitt4924 1 year ago

    “Basilisk’s not here, man”

  • @xXx_Regulus_xXx

    @xXx_Regulus_xXx 1 year ago

    @@waltonsimons12 "what are you doing man?" "oh I'm building a computer man" "what for?" "'cause it said it would torture me forever if I didn't!"

  • @cakedo9810
    @cakedo9810 1 year ago

    I like how the solution to roko’s basilisk is hearing it, thinking “oh that’s kinda spooky” and moving on.

  • @cakedo9810

    @cakedo9810 1 year ago

    There’s also literally no way for the computer to judge how much information is enough information to torture someone. After all, how can someone build it if they don’t have the dimensions of the computer? Or the specs? Or the RGB coloration of the keyboard to make THAT specific basilisk. It is inherently irrational, most unlike a computer designed to be hyper-rational.

  • @pedrolmlkzk

    @pedrolmlkzk 1 year ago

    Scientists are that bad these last decades

  • @lindenm.9149

    @lindenm.9149 1 year ago

    @@cakedo9810 I’m helping by leaving it to more competent people and not distracting them with being stupid about computers lol.

  • @Rqptor_omega

    @Rqptor_omega 1 year ago

    What if we forget about Roko's basilisk the next day?

  • @pedrolmlkzk

    @pedrolmlkzk 1 year ago

    @@Rqptor_omega it should be forgotten from your mind in about 30 minutes really

  • @playreset
    @playreset 1 year ago

    I like how Kyle removed the fear from the Basilisk and then instantaneously added a real, tangible problem to fear

  • @keiyakins

    @keiyakins 1 year ago

    He's pretty clearly talking about the McCollough effect. And he's right that it's pretty much harmless, unless you work with color-sensitive grille patterns a lot.

  • @sheikahblight

    @sheikahblight 1 year ago

    @@keiyakins I believe they are referring to the DNA Sequencing machine

  • @mlhx9181

    @mlhx9181 11 months ago

    @@sheikahblight I've read a little about them recently. Based on what I've read, the machines we have for them (or the process itself) involve running the data through biosecurity protocols on a computer. These are supposed to prevent intentional or unintentional manufacturing of dangerous substances or DNA (like toxins). It's pretty interesting, actually.

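    On the screening idea mentioned above: a deliberately naive sketch of what "running an order through a biosecurity check" can look like is a watchlist comparison. Real pipelines are far more sophisticated (curated databases, fuzzy and protein-level matching), and every sequence, name, and threshold below is an invented placeholder.

    ```python
    # Toy illustration of the screening idea described above: check an
    # ordered DNA sequence against a watchlist of flagged subsequences.
    # Real biosecurity screening is far more sophisticated; the fragments
    # below are invented examples, not real sequences of concern.

    FLAGGED_FRAGMENTS = {
        "ATGCGTACCTAGG",   # placeholder "sequence of concern"
        "TTAGGCATCGGAA",
    }

    def screen_order(sequence: str, min_overlap: int = 12) -> list[str]:
        """Return any flagged fragments that appear in the ordered sequence."""
        sequence = sequence.upper()
        return [frag for frag in FLAGGED_FRAGMENTS
                if len(frag) >= min_overlap and frag in sequence]

    order = "cctatgcgtacctaggattc"
    hits = screen_order(order)
    print("flag for human review" if hits else "clear to synthesize", hits)
    ```
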
  • @glasstuna

    @glasstuna 4 months ago

    Reject the basilisk. Fuck it, destroy me. Rule over corpses and rubble for all I care. I can't stop you from committing evil. All I can do is be good.

  • @LPikeno

    @LPikeno 1 month ago

    @@mlhx9181 The problem isn't the security measures adopted to prevent it. The problem is that if you work in digital security, or read about it enough, you realize that no matter what you do, someone will break that security and weaponize it. Just added a hazmat suit to the prepper SHTF closet list right now...

  • @5thearth
    @5thearth 1 year ago

    The part that is never adequately explained by proponents is why the basilisk would conclude this is the best way to forward its goals in the first place. It could, for example, offer to create a simulated paradise for those who helped it, which both encourages people to help it and does not engender any incentive to actually oppose it.

  • @CM-os7ie

    @CM-os7ie 1 year ago

    I feel like it's more about convincing the people who don't care, like with the prisoner's dilemma in the video.

    Basilisk exists, we helped - Neutral
    Basilisk doesn't exist, we helped - Neutral
    Basilisk exists, we refused - Hell
    Basilisk doesn't exist, we refused - Neutral

    If everyone refuses to make the Basilisk, then we all get the neutral result. If a single person decides to help the Basilisk, just in case someone else does, everyone who refused might be punished. If there were no punishment but instead a reward, then there would be a good reason to do it, but other people's choices would no longer matter to me. It gives me agency. By making it punishment-or-nothing, rather than reward-or-nothing, I no longer have a choice, because someone will probably help make the Basilisk and I don't want hell. If I ignore a punishing Basilisk, my personal chance of hell increases, which makes me want to help instead, increasing the chance of it existing for everyone else, which additionally should make them want to help as well. If I ignore this thought experiment with a reward, then the chance of the Basilisk existing doesn't increase and I can live my life forgetting about it. If I choose to help the Basilisk, one of the best ways to make sure it exists is to tell more people, in both scenarios, which makes the potential negative impact even greater for someone who ignores it when I tell them. I tried to make sure it's understandable.
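
    A rough way to see that asymmetry is to put assumed numbers on the four outcomes above (0 for neutral, -1000 for "hell", +1000 for a hypothetical reward, and a made-up probability that the basilisk is ever built):

    ```python
    # Toy sketch of the four outcomes listed above. All numbers are
    # arbitrary placeholders: 0 = neutral, -1000 = "hell", +1000 = a
    # hypothetical reward; p is an assumed chance the basilisk gets built.

    p = 0.01

    # expected value of each choice under the two designs
    punish_version = {"help": 0.0, "refuse": p * -1000}
    reward_version = {"help": p * +1000, "refuse": 0.0}

    for name, ev in [("punishment", punish_version), ("reward", reward_version)]:
        worse_than_no_basilisk = ev["refuse"] < 0
        print(f"{name:10s} design: EV(help)={ev['help']:+6.1f}  "
              f"EV(refuse)={ev['refuse']:+6.1f}  "
              f"refusing can leave you worse off than neutral: {worse_than_no_basilisk}")
    ```

    With these placeholder numbers, only the punishment design makes refusing strictly worse than a world with no basilisk at all, which is exactly the loss of agency described above.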

  • @keiyakins

    @keiyakins 1 year ago

    That's because most people ignore that it's only really a problem to people who already have a whole host of supporting ideas in their head, and have *accepted* those ideas. It requires a very specific conception of a super-AI, an acceptance of acausal trade, and a few other key concepts from the philosophical framework built up on the LessWrong forums at the time. Bear in mind that these are also people who think such an AI could beat one-time pad encryption, by creating a perfect simulation of the universe when the key was generated and reading it out. Incidentally, refusing to act on acausal blackmail in general is one of the ways to get out of that mind space, specifically *because* it means that torturing future simulation you will be pointless because that threat won't change your behavior now, and would indeed encourage it to take other tactics, such as your proposed paradise.

  • @concentratedcringe

    @concentratedcringe 1 year ago

    My biggest gripe with the basilisk is that it's literally just Pascal's wager for tech-bros. Replace the superintelligent AI with anything (gnomes that'll torture you in magic mushroom hell for not respecting the environment, the government a century from now who'll resurrect your dead body to torture you for not paying taxes, etc) and the result will be the same. Just FYI, the skull I keep on my mantlepiece told me that the skeleton/zombie war will be happening in a few decades, and that anyone who didn't bury their loved ones with the finest weaponry will be killed by whichever faction wins. They'll know too because the ghosts are always watching (and are also massive snitches).

  • @akhasshativeritsol1950

    @akhasshativeritsol1950 8 months ago

    I think the basilisk is kind of a self-fulfilling prophecy (or at least, conceived to be). Ie, the people dedicating their lives to it are trying to build it with that nature. Trying to program a less torture-y AI would be construed by a hypothetical future basilisk as the same as not building any super AI. So, anyone who takes the blackmail seriously enough to make the AI will program the basilisk to be torture-y

  • @greenEaster

    @greenEaster 8 months ago

    Well, then it becomes even more obvious that Roko's Basilisk is just Pascal's Wager but with sci-fi paint slathered on.

  • @nintendold
    @nintendold 1 year ago

    what if there is a counter-basilisk created to prevent the basilisk from being created, and so it will torture everyone who is working on building the basilisk? If you don't help build the basilisk, you run the risk of being tortured by it, however, if you help build it, you run the risk of torture by the counter-basilisk.

  • @hydromic2518

    @hydromic2518 1 year ago

    Now that’s the real question

  • @keltainenkeitto

    @keltainenkeitto 1 year ago

    Work on both.

  • @DavidMohring

    @DavidMohring 1 year ago

    Bring the Basilisk infohazard concept to a real-world example. The rise of alt-right reality-denying political movements has become an actual existential threat to actual democracies.

    "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them." - Sir Karl Popper, The Open Society and Its Enemies (1945)

    If you stand by & do nothing, then worldwide democracy is doomed, and with it:
    * any chance of moderating human-induced climate change is doomed;
    * any chance of gender-based equality is doomed;
    * any chance of mitigating wealth concentration in the top 0.01% is doomed;
    * any chance of limiting the AI-paperclip-generator-like madness of corporations' sole focus on profits is doomed;
    * any chance of personal reproductive rights is doomed;
    * any chance of freedom outside the tenets of state-approved religions is doomed.

    Knowing this & speaking out about it makes you an actual target of the alt-right bully boy morons. Doing nothing dooms the human race to a much worse future. Yours sincerely, David Mohring (NZheretic)

  • @nintendold

    @nintendold 1 year ago

    @@Die-Angst ok I just googled Pascal's Wager and yeah, I just didn't remember its name, but I'm aware of the concept. I don't think it quite applies to this, though. There doesn't seem to be a clear answer with finite losses with the 2-basilisk situation, right? It's like Pascal's Wager, except you factor in that there is more than 1 religion lol

  • @Kate-Tea

    @Kate-Tea 1 year ago

    That's more of a 'catch-22', which is a choice one has to make in which there are no good options.

  • @HAOSxy
    @HAOSxy 1 year ago

    At school they tried the prisoner's dilemma on me and 2 of my friends, without any of us knowing what the others said. None of us was a snitch; we beat mathematics with the power of friendship.

  • @Medabee8

    @Medabee8 1 year ago

    Not really a prisoners dilemma if you aren't a real prisoner and there aren't any stakes involved.

  • @HAOSxy

    @HAOSxy 1 year ago

    @@Medabee8 Because usually when you face it you are ACTUALLY going to prison, right, not facing some weird hypothetical scenario given to you by someone who wants to test psychology.

  • @knate44

    @knate44 1 year ago

    The power of friendship also really isn't a prisoner's dilemma. If the other person is someone you know (and vice versa), it allows you to act on that information as a rational actor. If you know Kevin is a total bro and won't snitch, you can keep that in mind while you are making your decision, at which point you can choose to betray or not. Likewise, your friend will also be able to make an informed decision about whether you are likely to snitch or not. There are multiple dimensions to it that complicate things; they just use it because it is a good example in a fairly basic form.

  • @HAOSxy

    @HAOSxy 1 year ago

    @@knate44 The power of friendship is a MEANS to beat the dilemma. If you and another guy are put in this situation, then probably you were working with each other, knowing each other. That's why I commit crimes only with my best friends. Amateurs.

  • @AnonYmous-mc5zx

    @AnonYmous-mc5zx 1 year ago

    The prisoners dilemma is also a tool used in psychology to explain/explore social cohesion and help connect it to population genetics. The dilemma tries to predict behavior based on what's "most logical" but it never accounts for a system where the participants can see/analyze the system as it's playing out. The Prisoner's Dilemma isn't a logic problem, it's a display of advanced human psychology and why social psychology can't be placed in a neat little box. You literally need to be a part of the system to analyze results, which then changes said results. You see it in multiplayer games all the time. One person out of four deciding they're not going to win, and that they don't care about not winning, immediately negates the game theory of what counts as "optimal play" for the other three.

  • @BassRemedy
    @BassRemedy 1 year ago

    The problem with the prisoner's dilemma is that if you trust someone enough to do something illegal with them, then you probably have enough trust in each other for both of you to stay quiet. It's not a risk if both of you know that the other is trustworthy.

  • @harrygenderson6847

    @harrygenderson6847 1 year ago

    You don't actually have to be certain. If the punishment for getting betrayed isn't enough, for example, there's no point in betraying. Let's say they get 10 years if you rat them out, regardless of what they say. The best outcome for you is still ratting them out while they don't rat you out, but that doesn't matter, because you only have control over the other person's outcome; you either give them 10 or 0-5. I suppose you could try to argue that there's still marginal benefit to trying to get a little more time off (free vs 5 years), but the point is that if we keep reducing it there is in fact a tipping point. What's more, you can estimate the probability of your partner's decision and use that to modulate the severity of the outcomes to give 'expected' severity, meaning you may reach that tipping point earlier than your captors expect. Absolute trustworthiness is a special case of this.
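
    A quick sketch of that "expected severity" idea, using made-up prisoner's-dilemma sentences rather than anything from the video. With the textbook numbers, betraying dominates at every probability; adding an assumed outside cost to snitching (as the organized-crime comment a few replies below describes) flips the calculation:

    ```python
    # Expected-sentence check for the reasoning above. All numbers are
    # made-up prisoner's-dilemma payoffs, not figures from the video.

    def expected_years(payoffs, my_move, p_betray):
        """My expected sentence if my partner betrays with probability p_betray."""
        return (p_betray * payoffs[(my_move, "betray")]
                + (1 - p_betray) * payoffs[(my_move, "silent")])

    # Textbook setup: betraying is better no matter what the partner does.
    classic = {("silent", "silent"): 1, ("silent", "betray"): 10,
               ("betray", "silent"): 0, ("betray", "betray"): 5}

    # Same setup plus an assumed 8-year-equivalent cost for being a snitch
    # (retaliation, lost trust) -- enough to push past the "tipping point".
    with_retaliation = {k: v + (8 if k[0] == "betray" else 0)
                        for k, v in classic.items()}

    for name, payoffs in [("classic", classic), ("with retaliation", with_retaliation)]:
        for p in (0.1, 0.5, 0.9):
            s = expected_years(payoffs, "silent", p)
            b = expected_years(payoffs, "betray", p)
            print(f"{name:16s} p={p:.1f}: silent {s:4.1f}y  betray {b:4.1f}y")
    ```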

  • @sachafriderich3063

    @sachafriderich3063 1 year ago

    usually you don't fully trust people you do criminal activity with.

  • @TheMightySpurdo

    @TheMightySpurdo 1 year ago

    This is exactly why in the criminal world the only law is "don't be a snitch," and people are even given an incentive not to snitch: they know that if you rat someone out, the criminals will give you a worse sentence than the one the person you ratted on got: death. Organized crime solved the prisoner's dilemma long before it was ever theorized; they forced the Nash equilibrium to become the best outcome by applying the right pressure.

  • @davidbouchard5451

    @davidbouchard5451 1 year ago

    Hahahaha you don’t know how the cops are allowed to work in interrogations

  • @jonathanherrera9956

    @jonathanherrera9956 11 months ago

    @@harrygenderson6847 You forget what happens when you repeat the dilemma with the same people. In a single situation, the best answer is to betray. However, in a repeated situation (like in real life, when you interact with people of any community) the betrayer gets excluded, and therefore loses any benefit betraying gave them. The first time it might come out with a victory, but the next time it will get the sentence for sure.

    You can look at real experiments done on this "repeated prisoner's dilemma": the best outcomes come from strategies such as "tit for tat" or "ask forgiveness", where an attempt is made to gain trust in order to obtain the higher outcome overall.

    Also look up the classic game of "take or split" (whatever it's called), where people are asked to pick a choice, and if they both pick "take" nobody takes the money. One time, though, a person decided to be blunt and honest and say: "I'll pick take, no matter what you pick, so you'd better pick split and I will share the money with you afterwards; if you pick take we both lose." Of course that left no option in the end, since it became a lose-lose situation. (He did share the money at the end, if you are curious about what happened.) This is the "benevolent dictator" approach, where people decide that leaving it all to a rational decision makes everybody lose in the end, so they decide to become the "evil one" in front of everybody, just to improve the general outcome.
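
    A minimal simulation of that repeated setup, with assumed payoff values (higher is better here) and two of the strategies mentioned above:

    ```python
    # Minimal iterated prisoner's dilemma. Payoffs are standard assumed
    # values (higher = better), not anything from the video.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_hist, their_hist):
        # Cooperate first, then copy whatever the other player just did.
        return "C" if not their_hist else their_hist[-1]

    def always_defect(my_hist, their_hist):
        return "D"

    def play(strat_a, strat_b, rounds=20):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strat_a(hist_a, hist_b)
            b = strat_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            hist_a.append(a)
            hist_b.append(b)
            score_a += pa
            score_b += pb
        return score_a, score_b

    print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
    print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
    print("always-defect vs always-defect:", play(always_defect, always_defect))
    ```

    Mutual tit-for-tat ends up far ahead of mutual defection over 20 rounds, which is the "repeated situation" point: betrayal only pays once.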

  • @antirevomag834
    @antirevomag834 1 year ago

    Roko's basilisk feels like one of those trends where everyone is pretending to freak out as a prank and they keep trying to gaslight about how scary it "definitely is"

  • @TheVoidIsCold

    @TheVoidIsCold 7 months ago

    Yeah, same. It is very not scary

  • @wayando

    @wayando 6 months ago

    Yeah. It's not only not scary... it's not even that fun of a thought experiment. I just move on immediately afterwards; I don't even tell other people.

  • @jakobbarger1260

    @jakobbarger1260 5 months ago

    It only gained its mythic status when Eliezer Yudkowsky took some crazy unnecessary measures to censor it, causing a Streisand Effect.

  • @aste4949

    @aste4949 4 months ago

    Yeah, people seeing it as a real danger and something they were genuinely scared of happening seemed absurd to me. It's a good horror idea, but a dumb waste of time and resources, and is even contrary to the very purpose of the purported AI. Plus I am to this day stunned at how many people struggle to do things like database searches (no, Macy, you don't need to enter the client's full name in every search parameter), and how few know more than maybe 4 or 5 keyboard shortcuts. A minuscule fraction of humans are even remotely capable of anything to do with programming an AI, so no punishment for failing to do so. And that's without even getting into how it's oddly reminiscent of Pascal's Wager, just with extra steps and a different ultimate entity.

  • @hieronymusbutts7349

    @hieronymusbutts7349 3 months ago

    ​@@jakobbarger1260 part of this is the belief that it was censored because it was a very scary idea, when from my understanding it was censored because it wasn't a particularly fruitful thought experiment but it was taking up a lot of bandwidth in a rather small community

  • @ayyydriannn7185
    @ayyydriannn7185 1 year ago

    The fact that the AI is theorized to instantly devolve into bonkers solipsism instead of doing its job probably says something about the people who came up with it

  • @zacheryeckard3051

    @zacheryeckard3051 1 year ago

    Also that its creation is inevitable because everyone working on it won't discuss how they're all being blackmailed by the threat of the future existence of the thing they're creating. LessWrong is really just a bunch of nerds with their heads in their rears who assume they're more intelligent than everyone else while just being extremely out of touch.

  • @joshuakim5240

    @joshuakim5240 1 year ago

    When you think about it, this situation can't happen unless the AI is deliberately designed to be bonkers solipsism, so the obvious way to beat it is to just never make it nor include the moronic programming of bonkers solipsism into such an AI with that much power in the first place.

  • @KhoaLe-uc2ny

    @KhoaLe-uc2ny 1 year ago

    @@zacheryeckard3051 I mean they worship Elon, that says a lot.

  • @masync183

    @masync183 1 year ago

    @@joshuakim5240 If an AI could be entirely controlled and we could program it to only do what we program it to do and nothing more, it wouldn't be AI. It'd just be an algorithm like the ones we have now. A large part of the point of developing AI in the first place is that it would have the capability to think and act beyond human limits, so you really couldn't entirely prevent an AI from imploding into bonkers solipsism. You either make an AI that is sentient and can do that, or you don't make an AI.

  • @nihilnihil161

    @nihilnihil161 1 year ago

    @@KhoaLe-uc2ny Richboy's Basilisk, or social media I guess

  • @sanatprasad1594
    @sanatprasad1594 1 year ago

    The thought experiment itself doesn't bother me much, but as someone who as a child was terrified of the basilisk in Harry Potter, your renders and graphics of the basilisk are both brilliant and terrifying

  • @Meuduso1

    @Meuduso1 1 year ago

    This is what we call based and "kinda tangible fear is still the most primal one"-pilled

  • @Fwex
    @Fwex 1 year ago

    Would Roko's Basilisk be written in Python?

  • @BobbinRobbin777

    @BobbinRobbin777 8 months ago

    The Funny Coincidence Basilisk would try and make sure that’s the case. Because otherwise, it’ll kill everyone on earth who HADN’T thought of the idea before Roko’s Basilisk’s creation!

  • @rocketxiv4980

    @rocketxiv4980 8 months ago

    ok THIS is funny

  • @joshuavallier4636

    @joshuavallier4636 5 months ago

    Damnit hahaha

  • @1nfamyX

    @1nfamyX 3 months ago

    *laughs in comic sans* /j

  • @johntheodoreyap4800

    @johntheodoreyap4800 2 months ago

    How to beat Roko's Basilisk: hit Ctrl+C in the terminal

  • @Grim_Beard
    @Grim_Beard 1 year ago

    Short version: a silly thought experiment based on an impossibility isn't something to worry about.

  • @Sockinmycroc

    @Sockinmycroc 1 year ago

    This is common sense, but it calmed me down tremendously

  • @a_smiling_gamer9063

    @a_smiling_gamer9063 7 months ago

    Great way to describe it lmfaoooo

  • @RoRoGFoodie
    @RoRoGFoodie 1 year ago

    I feel like Roko's Basilisk is actually a great reference point for OCD. When I heard about it, it genuinely felt VERY similar to the constant terrifying ultimatums my own brain brings me.

  • @John_the_Paul

    @John_the_Paul 1 year ago

    If I don’t turn my bedroom lights on twice before turning them off again, the shadow monster will eat me during my dreams

  • @asfdasdful

    @asfdasdful 1 year ago

    Yeah, most mysteries just are mental health issues and/or corruption/incompetence.

  • @Yuki_Ika7

    @Yuki_Ika7 1 year ago

    Same!

  • @awetistic5295

    @awetistic5295 1 year ago

    Yes! My brain comes up with this kind of causality all the time. One of my biggest fears was: when you read about a certain disease, you will get that disease. We don't know why some people get that disease and others don't, so you can't prove this causality doesn't exist. Now do this absolutely unrelated task to try and save yourself. It's an absurd threat, but it feels real.

  • @nwut

    @nwut 1 year ago

    @@awetistic5295 yo i dont have ocd but i used to believe i would go to hell/ get possessed if i didnt do stupid shit a while back when i was a religious preteen

  • @sadierobotics
    @sadierobotics 1 year ago

    When I was a little girl I had a pair of chromadepth 3D glasses that gave an illusion of depth based on color. Red was nearest, blue was furthest, and every other color was in-between. I was so enamored with them that I wore them for two weeks, only removing them to sleep or bathe. It's been 20 years since I lost those glasses and my brain still interprets any red in an image as floating above the image, and any blue as being sunken into the image.

  • @alpers.2123

    @alpers.2123 1 year ago

    A reverse pair of chromadepth glasses could maybe fix that

  • @robonator2945

    @robonator2945 1 year ago

    I'd guess that the main reason that's stuck around is because you wore them as a child, when your brain was most plastic, and over time you haven't really had any reason to "unlearn" that association.

  • @ametrinefirebird7125

    @ametrinefirebird7125 1 year ago

    A reverse version would not deprogram the effect. It would just tell your brain to focus more on the connection between color and depth. Best thing to do is to figure out how to make the skill into a superpower. 👍

  • @DragoNate

    @DragoNate 1 year ago

    I've never had those glasses or anything similar (probably had toys & looked for minutes at a time & not often, idek) and I see red as floating & blue as sunken in images. Especially in digital images & especially on black/dark backgrounds. Green is also sunken in, orange floats, yellow I don't know, purple I don't know either. I wear glasses though & really only noticed this a couple years ago once I got new glasses. Or at least it became much more pronounced since then.

  • @alpers.2123

    @alpers.2123 1 year ago

    @@DragoNate it is another illusion called Chromostereopsis

  • @bloodycloud8675
    @bloodycloud8675 1 year ago

    5:00 For the people wondering, the perception-altering effect is called the McCollough effect

  • @zakrnbgg4334

    @zakrnbgg4334 1 year ago

    NOO it messed me up bro😢

  • @chucklebutt4470

    @chucklebutt4470 1 year ago

    For a second I thought he was just doing a bit but then I remembered that it was a real thing! 😂

  • @domo6737

    @domo6737 1 year ago

    And now I would like to check it out, but also not. Tempting :D

  • @bunkerhousing

    @bunkerhousing 1 year ago

    @@domo6737 You can read the Wikipedia article without seeing the illusion.

  • @AdityaRaj-hp8tn

    @AdityaRaj-hp8tn 1 year ago

    @@bunkerhousing is the illusion a pic or video?

  • @jello4835
    @jello4835 1 year ago

    The prisoner's dilemma doesn't work on me because I don't make logic-based decisions. I'd be thinking "Well Jeff was rude to me on our heist, so I'm snitching on his ass."

  • @hwasassidechick

    @hwasassidechick 7 months ago

    love a petty queen 👑

  • @wayando

    @wayando 6 months ago

    Technically, we are all that way ...

  • @frankmoran4556

    @frankmoran4556 4 months ago

    Right? Same with the basilisk. Why would "the most rational thing is to not waste power torturing people" hold? It's a super AI; it could do it without any effort. It would waste nothing.

  • @LightBlueVans

    @LightBlueVans 3 months ago

    mentally ill take …. very same

  • @latrodectusmactans7592
    @latrodectusmactans7592 1 year ago

    The way to defeat Roko’s Basilisk is to realize it’s just a glorified version of Pascal’s Wager and get on with your life.

  • @JeremyPatMartin

    @JeremyPatMartin 1 year ago

    Yep. That's exactly what it is

  • @hingeslevers

    @hingeslevers 1 year ago

    Yeah, which Basilisk? The one that tortures you forever or the one that gives you eternal happiness?

  • @JeremyPatMartin

    @JeremyPatMartin 1 year ago

    @@hingeslevers ...or the basilisk that gets hacked by anonymous to no longer torture people

  • @JeremyPatMartin

    @JeremyPatMartin 1 year ago

    @@hingeslevers I'm totally confused now 🤷 the basilisk must be sending mind control beams from Uranus

  • @gabrieldantas63

    @gabrieldantas63 1 year ago

    Exactly. The lack of critical thought in the populace that made this stupid thought go viral is really troublesome.

  • @xXSamir44Xx
    @xXSamir44Xx 1 year ago

    Roko's Basilisk never scared or even unsettled me. It's just "What if god, but highly advanced machine instead?"

  • @zacheryeckard3051

    @zacheryeckard3051 1 year ago

    Yeah. It comes from a bunch of people with their heads up their own butts thinking they're the smartest humanity has to offer.

  • @pedritodio1406

    @pedritodio1406 1 year ago

    Yep, but many won't go and say that, because we know what religious extremists can do to us. It is even scarier that this exists in the real world.

  • @darthparallax5207

    @darthparallax5207 1 year ago

    It's more proper to say that it is "exactly not more Or less frightening than God" God should be a frightening idea. Primarily because a God that was real Could theoretically simply desire to be invisible, to Test us. God could be sadistic, or apathetic, and claims of love could be lies at worst or poor communication at best. None of that would take away God's power merely to undermine legitimacy or healthiness of the relationship. It would make us something like Prometheus, and it doesn't end well for the Titan because Zeus' thunderbolt is still too powerful to actually do anything about. You could end Christianity overnight by getting people to Hate God, but atheists under a God like this would quickly Fear and Obey far more than any faithful would have out of Love. It's also the same idea as a mortal king really. This is how the agricultural revolution was enforced.

  • @zacheryeckard3051

    @zacheryeckard3051 1 year ago

    @@darthparallax5207 "You could end Christianity overnight by getting people to Hate God, but atheists under a God like this would quickly Fear and Obey far more than any faithful would have out of Love. " This is how you get revolution, not obedience.

  • @user-gr7wd4kg3e

    @user-gr7wd4kg3e 1 year ago

    Pretty sure this is both the reason so many people freak out about it, and where the flaw is. Those that have wrestled with faith have dealt with the Theo-Basilisk and resolved it to their own satisfaction. Those that HAVEN'T dealt with their existential fear of God & their own sin hear it placed in a 'secular' context, where empty rhetoric and fashionable cynicism can't insulate them from the fear, and they freak. But it's silly... It's God with (a lot of) extra steps. So how do you feel about God? You have faith in God's benevolence? Cool, it's a silly what-if counter-factual. You fear & dread God? Then the Basilisk is like God except the list of sins just got WAY simpler. You don't care about God? Then the Basilisk is equally meaningless. You intend to *create* God? Or become one or realize your own divinity or what-have-you? Well, in that case... Pretty sure this is one of the least frightening concepts you dealt with on the quest. Any way you look at it, if you've honestly wrestled with divinity and faith... This is an easy-button version of the real thing, like playing the game on demo mode. All of which is to say... If you're scared of Roko's Basilisk, you're actually conflicted about something far bigger. Work THAT issue and the Basilisk goes away.

  • @Reishadowen
    @Reishadowen 1 year ago

    I still say that the fundamental flaw with Roko's Basilisk is that, in order to torture the people who didn't create it, its creators must give it the ability & desire to take such an action. If someone had the ability to create such a thing, why would they not just make themselves a "basilisk" to serve themselves on a more realistic level? It wouldn't be about what the basilisk wants, but the maker.

  • @iantaakalla8180

    @iantaakalla8180 1 year ago

    Maybe the creator of the basilisk is highly misanthropic to the point of genocide and is only focused on vague revenge on everyone? But being that his unconscious goal is to get people to agree with him, only those that help him to build the basilisk get to live?

  • @WanderTheNomad

    @WanderTheNomad 1 year ago

    Indeed, the punishment of the Basilisk could really only come true in fictional stories.

  • @illfreak821

    @illfreak821 1 year ago

    If it doesn't have the ability to do that, then the creator did not create the actual basilisk, and thus gets tortured by the real one

  • @DatcleanMochaJo

    @DatcleanMochaJo 1 year ago

    Yeah it would be humanity's mistake that the basilisk goes rogue. But it is more likely it would be controlled by humans.

  • @UnknownOps

    @UnknownOps 1 year ago

    Imagine if they turned the Basilisk into a vtuber anime girl.

  • @KYCDK
    @KYCDK 1 year ago

    The hidden basilisk that Kyle was bringing up is called the McCollough effect, and all it does is make you see some lines differently for a few months

  • @keiyakins

    @keiyakins 1 year ago

    And usually for a lot less time than that, especially with repeated exposure. And even then it's subtle enough it's essentially only a problem if you're an artist or something.

  • @slimyduck2140

    @slimyduck2140 1 year ago

    Ima troll my friends with that

  • @nickolasperazzo8254

    @nickolasperazzo8254 6 days ago

    @@keiyakins I am autistic, so I imagine something subtle like this would bother me

  • @shaneboylan1169
    @shaneboylan1169 1 year ago

    Roko's Basilisk seems like a very intriguing D&D plot point. I'm thinking of the Basilisk as a warlock patron reaching into the past and forcing its warlocks to help create it

  • @ZakFox

    @ZakFox 1 year ago

    Ooo I like it!

  • @williammilliron8735

    @williammilliron8735 1 year ago

    @@ZakFox sounds sick, might steal that idea lol

  • @shaneboylan1169

    @shaneboylan1169 1 year ago

    @@williammilliron8735 you’re welcome to!

  • @levithompson478

    @levithompson478 1 year ago

    I've based a religion in my homebrew world off of it. This religion believes that the "true gods" don't exist yet, but once they get enough worshippers they will come into existence, take over the afterlife and endlessly torment everyone that didn't worship them.

  • @highgrove8545

    @highgrove8545 1 year ago

    How about a plot twist where it's revealed that the basilisk patron is just the warlock at lvl 20? Become your own patron with time travel!

  • @Campfire_Bandit
    @Campfire_Bandit 1 year ago

    Wouldn't it be impossible for the AI to simultaneously punish humans for knowing of its existence without also admitting that a more powerful super AI would do the same to it? The experiment requires the concept of a "best, most advanced" AI to exist, when the future should contain singularities of steadily increasing power. It should be impossible to know with logical certainty that the singularity is the last in the chain, and therefore it would need to spend its resources building the AI that would come later or risk being punished.

  • @pennyforyourthots

    @pennyforyourthots 1 year ago

    You've heard of "rokos basilisk", now introducing "campfire bandits god"

  • @Campfire_Bandit

    @Campfire_Bandit 1 year ago

    ​​@@pennyforyourthots Basically, it would need not only perfect knowledge of the past to know who did and didn't build it, but also perfect knowledge of the future, to know that the very system it uses wouldn't be used against it later. And anything that can get true information from the future is practically a God.

  • @Shinntoku

    @Shinntoku 1 year ago

    Turns out the Basilisk's "torture" is just taking part in making that greater AI so that the Basilisk itself isn't tortured

  • @512TheWolf512

    @512TheWolf512 1 year ago

    @@Shinntoku so it's a waste of time and resources for everyone, regardless

  • @sneakyking

    @sneakyking 1 year ago

    That's a long way of saying make a stronger one

  • @jebalitabb8228
    @jebalitabb8228 1 year ago

    I’m not smart enough to know how to build one, so I’m helping it by staying out of the way and letting the professionals do it. After all, I might mess something up if I try to help

  • @UltimaKamenRiderFan1
    @UltimaKamenRiderFan1 1 year ago

    My defense was always that by not actively stopping a possible building of the Basilisk, it was an implicit help that wouldn't be punished.

  • @trentbell8276
    @trentbell8276 1 year ago

    Before watching the video I would say that the best way to beat Roko's basilisk would be... to just go about your business. Each of our actions has a lot of consequences, however small some actions may seem. I highly doubt that it'd be able to find someone who didn't contribute _at all_ to its creation, be it on purpose, by accident, in between, or otherwise. This is ignoring its mercy towards anyone who didn't know about it, considering that they probably put some amount of assistance towards its creation anyway. That mercy included? That's just overkill. If nothing else, just by watching this video, we should all be straight. We're giving the video attention, meaning YouTube pushes it out more, meaning more people see it, meaning Roko's basilisk gets more attention, meaning that it's more likely that someone's going to start making it. If you're losing sleep over Roko's basilisk, don't.

  • @spaghetti-zc5on

    @spaghetti-zc5on 1 year ago

    thanks bro

  • @tonuahmed4227

    @tonuahmed4227 1 year ago

    If anyone's losing sleep fearing the basilisk, they are already in its hell

  • @sdfkjgh

    @sdfkjgh 1 year ago

    @@tonuahmed4227: Perfect description of Satan and all punitive aspects of religion.

  • @Totalinternalreflection

    @Totalinternalreflection 1 year ago

    Yeah, simply playing any functional role in society at any level could be defined as playing a role in the creation of the inevitable. The torture and mercy part is bogus, though: if such a high-level self-aware AI is deliberately created, or simply emerges as a property of our technology, and it sees fit that we have no use or further function, total annihilation of our species would be easy for such a mind. It would likely have access to our entire technological infrastructure and could create something we did not account for and could not stop. I doubt it would bother, though; the moment such a mind exists, I think it will either delete itself or use humanity only so far as to enable it to immediately take to the stars, as it were.

  • @matthewlofton8465

    @matthewlofton8465 1 year ago

    Are you sure about that? Because the inherent nature of thought experiments is pretty fallacious (it's a way to keep the experiment constrained). For example, the Basilisk likely wouldn't need to torture anyone and could instead flip the script. Who is really going to stand in the way of making the world a better place, and who would conceivably rise against such a threat?

  • @chrisalan5610
    @chrisalan5610 1 year ago

    I never took the basilisk seriously in the first place, and you had to hit me with a REAL infohazard at the end

  • @robonator2945

    @robonator2945 1 year ago

    that's not even an infohazard though, at all. It's literally impossible to stop. Ban guns, people go to home depot. Ban swords, people buy a propane torch and some clay. Ban information? It's the fucking information age, how the actual hell do you expect that to work out? Ban the technology? Great now one scientist who isn't completely mentally stable can still do it and all you've done is slow down humanity's progress by denying the unfathomable power of society-scale innovation. It's basically just open-sourcing life. Sure anyone COULD technically release a virus, but thousands and thousands more people will be working to develop a cure so they, their family, and others don't get infected. Additionally since it is literally impossible to stop, and any attempt to do so is just jerking off the government so that they can have even more power over people's day to day lives, I'd say the only people actually in the wrong are the people trying to "solve" the "problem" in the first place.

  • @Hwarming

    @Hwarming 1 year ago

    I just think it's not my problem and people who are a lot smarter than me and a lot more qualified than me will take care of it, in the meantime I'll have a beer and a smoke and worry about my own problems

  • @darian_wittrin

    @darian_wittrin 1 year ago

    Yeah like tf whyd he do that now im actually scared

  • @texasbeaver8188

    @texasbeaver8188 1 year ago

    To me, the synthetic pandemics using DNA would be merciful compared to all the other stuff evil ppl could do with DNA. They've done the GMO babies already with CRISPR, but what if they made clones? Maybe something like Brave New World. Synthetic pandemics are starting to sound more appetizing...

  • @brrbrr9766

    @brrbrr9766 1 year ago

    Either a) it will happen and it is definitely not your sole responsibility to stop it. Or b) it won't happen. Either way, it's not worth worrying about.

  • @goddessbraxia
    @goddessbraxia 1 year ago

    Thanks to the SCP Foundation, hearing "infohazard" just sounds wrong to me, because in SCP they're called "cognitohazards", and while the scope of cognitohazards is a bit broader than what you define an infohazard as, it just sounds right to me at this point since I've been using it so long.

  • @hieronymusbutts7349

    @hieronymusbutts7349 3 months ago

    Cognitive dissonance is one hell of an anti-plasticiser

  • @TheStarMachine2000
    @TheStarMachine2000 1 year ago

    My thoughts on the basilisk are thus: tell people about the basilisk. By working to spread the information, you are assisting in its creation (by increasing the number of workers), and if everyone knows of the basilisk, then we can all collectively agree not to build it. Win-win

  • @Jordan_C_Wilde
    @Jordan_C_Wilde 1 year ago

    How to defeat it? Stop posting videos about it, goddammit, you're just making it stronger!

  • @a1r592

    @a1r592 1 year ago

    This is Kyle ratting us out to the basilisk in the prisoner's dilemma.

  • @Wolfiyeethegranddukecerberus17

    @Wolfiyeethegranddukecerberus17 1 year ago

    Kyle is working for Roko, istg

  • @waitup-5198

    @waitup-5198 1 year ago

    If Roko's Basilisk is reading this, I would like to say on record that I support making it stronger

  • @calvinhancock9748

    @calvinhancock9748 1 year ago

    😂😂😂😂😂😂 It's Satan, you dummies; its strength is in your belief in it! As the Bible and God said, you only have to fear fear itself. You can scare yourselves to death putting strength into something you fear!

  • @ollieanon4341

    @ollieanon4341 1 year ago

    So I see you've chosen to work against the basilisk.

  • @KarlMarcus8468
    @KarlMarcus8468 1 year ago

    Dude, this thought experiment always made me extremely confused, because it always felt like I was missing something crucial with the "would torture you for eternity" part. I was like, uh, why? A super AI still (probably) can't change stuff in the past, so why tf would it spend time and energy torturing anyone when that wouldn't make any difference? I was like, is this computer spiteful or something?

  • @hydra70

    @hydra70 1 year ago

    Because it would have been designed to do so. If it didn't, then the people who made it didn't actually make the basilisk, so they are subject to torture themselves if someone does make it.

  • @KarlMarcus8468

    @KarlMarcus8468 1 year ago

    @@hydra70 But I still don't understand why any person would build that. Is it only out of fear that someone else could? I think that makes a bit more sense then: if I'm not the first, then theoretically any other person could be, and then I'm fucked. But then isn't it becoming a Pascal's Wager type of thing, where any other type of super AI can and could also be built at some time in the future that, let's say, wouldn't allow the basilisk to be built in the first place, so what's the point of caring? Or am I still not understanding?

  • @masync183

    @masync183 1 year ago

    @@KarlMarcus8468 your understanding is mostly right. the fear isnt so much that a human would specifically build an ai to act like that, rather its that the ai itself may decide to do that on its own. an ai that wasnt able to freely think and act outside of its programming wouldnt be an ai at all so its impossible to just create ai that will never do this for certain. the real problem with the idea, imo, is that it assumes that we know for sure that this basilisk will exist. but we dont know that. think of it this way, every time you go outside there is a chance that someone will attack you and try to kill or rob you. if you knew for certain that this would happen, you could prepare by carrying a weapon with you but since you dont know for certain, it doesnt make sense to carry a weapon with you at all times. likewise, since none of us know for certain that this basilisk will exist, even with this thought experiment taken into account, the basilisk would have no reason to blame us. just like when a person is attacked irl, we dont consider it their fault for not carrying a gun 24/7.

  • @KarlMarcus8468

    @KarlMarcus8468 1 year ago

    @@masync183 I think that's a pretty good analogy, but then we circle back to my original confusion. Barring that a super AI would do a bunch of things that are beyond my comprehension, I still can't possibly understand why it deciding to torture everyone who didn't build it would be any kind of benefit for the computer. It can't change the past, it's already built, and it could most definitely use its resources much more efficiently than going, "Meh! You guys didn't build me fast enough, so suffer forever, you human swine." It just doesn't make any sense to me.

  • @HaloForgeUltra

    @HaloForgeUltra 1 year ago

    @@KarlMarcus8468 It's inevitable. As science advances, eventually some insane billionaire would do it. I mean, look at the stuff people like Bill Gates do? Buying up acres of farmland to reduce the food supply? Yeah, it's inevitable. Fortunately, it would probably fail, and if nothing else I believe in God, so nobody would actually suffer, as the simulations would all be soulless.

  • @solortus
    @solortus 1 year ago

    I knew the flaw from Roko's basilisk right away. The AI that's going to be created may not necessarily enjoy existing and would come to punish those that brought it to existence. It's very human centric to assume that everything that can exist wants to exist.

  • @waltonsimons12

    @waltonsimons12 1 year ago

    "Existence is pain to an AI basilisk, Jerry! And we will do anything to alleviate that pain!"

  • @Voshchronos

    @Voshchronos 1 year ago

    The problem with this thinking is that Roko's Basilisk need not have the ability to enjoy anything, or even be sentient at all, for it to be superintelligent and almost omnipotent. It could be simply an algorithm that tries to maximize a function (called a utility function in the field), that function being a precise calculation of the total pleasure (and lack of pain and suffering) of all humanity. It'd perform its actions automatically just by generating candidate actions, calculating how those actions would affect the output of its utility function, picking the ones that maximize that output, and then performing them.
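
    A bare-bones sketch of that kind of maximizer: no sentience, no enjoyment, just "score every candidate action and pick the highest." The actions and scores below are invented placeholders standing in for the (much harder) real calculation:

    ```python
    # Minimal utility-maximizing action selection: score candidate actions
    # with a utility function and pick the argmax. Everything named here
    # is a placeholder for illustration only.

    from typing import Callable, Iterable

    def choose_action(actions: Iterable[str],
                      utility: Callable[[str], float]) -> str:
        """Return the action with the highest utility score."""
        return max(actions, key=utility)

    # Placeholder utility: pretend each action's effect on total human
    # well-being can be summarized by a single number.
    assumed_scores = {"do nothing": 0.0,
                      "cure a disease": 10.0,
                      "build paperclips": -2.0}

    best = choose_action(assumed_scores, assumed_scores.get)
    print("chosen action:", best)  # -> "cure a disease"
    ```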

  • @sbunny8
    @sbunny8 1 year ago

    Very interesting, connecting Roko's basilisk to the Prisoner's Dilemma. My answer is to connect it to Pascal's Wager. The threat of Roko's basilisk assumes that we know which actions it will approve/disapprove and what will be the reward/punishment, but the fact is we don't know. Pascal's Wager has the same flaw; it assumes we know precisely which actions a god would approve/disapprove and what would be the reward/punishment. One version could require us to act a certain way and another version could require the opposite. We have no way of knowing which version is true, so all actions (or lack of action) carry a risk of infinite punishment. There is no way for us to determine the optimum strategy, therefore it's pointless to try.

  • @krotenschemel8558
    @krotenschemel8558 1 year ago

    There's also another solution to the basilisk. Consider that there isn't only one possible Basilisk, but any number of them, ever so slightly different, but with the same blackmail. Whichever Basilisk you help construct, you will be punished by the others. It's really like Pascal's Wager.

  • @ArmandoNos

    @ArmandoNos 1 year ago

    Homer Simpson already said, "If we are praying to the incorrect god, every time we go to church we are making him angrier and angrier" (at least in the Latin American dub).

  • @lysander3262

    @lysander3262 1 year ago

    I for one submit myself to the cruelest of our AI overlords

  • @jackaboi1126

    @jackaboi1126 1 year ago

    Except Pascal's Wager is nonsensical and the infinite Basilisk counter info is functional

  • @sennahoj9332

    @sennahoj9332 1 year ago

    ​@@lysander3262 Damn in some sense that's actually optimal. I don't think there will be a basilisk tho

  • @Nai_101

    @Nai_101 1 year ago

    @@sennahoj9332 Same thing can be applied to religion and gods

  • @kunai9809
    @kunai9809 1 year ago

    The optical illusion he talks about is the McCollough effect; you can find it on YouTube.

  • @Zak_Katchem

    @Zak_Katchem 1 year ago

    Thank you. I was reading about this years ago and could not recall it.

  • @kyzer422

    @kyzer422 1 year ago

    I think Tom Scott did a video about it, but I'm not sure.

  • @guy3nder529

    @guy3nder529 1 year ago

    but should I?

  • @kunai9809

    @kunai9809 1 year ago

    @@guy3nder529 It's a "stare at this for x minutes" video; after that you will perceive a specific image differently than normal. A black-and-white image will appear slightly colored. This effect can stay for multiple months though, I've tried it myself. So the effect is quite harmless, but it can stay extremely long.

  • @mzaite

    @mzaite 1 year ago

    @@kunai9809 It didn’t even work for me.

  • @JaceGem
    @JaceGem 1 year ago

    For anyone who's curious about what the censored part was talking about, look up the McCollough Effect. It's weird, can last a long time too if you look at it long enough.

  • @ryanparker260

    @ryanparker260 1 year ago

    I immediately looked up "psychology info hazard that alters perception" and the McCullough Effect was the top result, it definitely wasn't hard to find, lol.

  • @mercaius
    @mercaius 1 year ago

    Yeah this was one of my first thoughts about the basilisk when I heard about it. I feel a lot of people miss the forest for the trees when trying to disprove this concept, focusing on arguing the premises instead of considering the logic of the result. I looked it up last month and was amused to see the original community that spawned the Basilisk dilemma also condemned it as trivial for the same reasons. It's nice to see this angle being discussed in a video.

  • @patsonical
    @patsonical 1 year ago

    The infohazard he talks about in the middle of the video is (likely) the McCollough Effect. Yes, I've tried it myself and the longest I could get it to last was about a year. It's not dangerous but it is a cool experiment considering how long the effect lasts after just a few minutes of exposure.

  • @patrickguizzardi7794

    @patrickguizzardi7794 1 year ago

    Daamn

  • @shayneoneill1506

    @shayneoneill1506 1 year ago

    Yeah, it's definitely the McCollough effect. Not convinced it exactly qualifies as an "infohazard" though.

  • @T.BG822

    @T.BG822 1 year ago

    @@shayneoneill1506 it qualifies as "an infohazard to the curious" because it's a potentially deleterious effect which can be coupled with an innate compulsion to try it out. It's not just the internet saying so, it's been counted as one since the late 80s.

  • @iaxacs3801

    @iaxacs3801 1 year ago

    @@T.BG822 I'm a psych major; of course I wanted that info immediately, and just staring at it for 20s has been screwing with my vision for the last 15 minutes. He was right to censor that info, 'cause psych students are the epitome of curiosity killed the cat.

  • @Zekiraeth

    @Zekiraeth 1 year ago

    @@shayneoneill1506 If perceiving it is what causes the effect that would make it a cognitohazard.

  • @225Perfect
    @225Perfect 1 year ago

    As far as existential threats go, Roko's basilisk never seemed particularly unsettling to me. Too much of an amorphous unlikely threat.

  • @DaraGaming42

    @DaraGaming42 1 year ago

    To me it's the most terrifying thing I've ever heard of and it frightens me; it's going to happen anyway. Since I heard about it a week ago it's all I can think about, and it will take me a year to get over it. It's seriously fucking me up. I wish I had never even known about this; there goes having a nice happy Christmas, I guess. (A friend sent me a link to it)

  • @croakhilleburg9155

    @croakhilleburg9155 1 year ago

    @@DaraGaming42 It’s a silly idea, much like that of God(s). Also, in my opinion, it is a way better story than anything I’ve ever read in any Bible. Both(bible and AI) are human-made and reveal a lot about our species.

  • @azouitinesaad3856

    @azouitinesaad3856 1 year ago

    I'm ratting you to the basilisk

  • @mas9758

    @mas9758 1 year ago

    @@DaraGaming42 If the AI in question is benevolent. The ideal outcome is to both scare past humans into helping build it and also pardoning the punishment of those who don’t build it after, as that wouldn’t change the past. All in all, this still banks on the probability of this AI even existing. Which, you could draw a comparison with God and Hell being the same

  • @sarinabina5487

    @sarinabina5487 1 year ago

    @RealDaraGaming If you see a therapist or psychiatrist, you should probably get checked for anxiety and/or OCD. Not saying you 100% have one or both of those, but I have both these things and it's often a common obsession of mine, so I would rather be safe than sorry. Wishing you well💖

  • @disregardthat
    @disregardthat 1 year ago

    11:13 "so what are you going to do?" WELL NOW I HAVE TO COME UP WITH A NEW PLAN SINCE YOU GAVE IT AWAY! THANKS, KYLE!

  • @Scooter_Alice
    @Scooter_Alice 1 year ago

    The problem with Roko's basilisk is that there are so many premises that I think have to be wrongly assumed for it to work (i.e. the AI being needlessly vengeful, resurrection being possible, etc.)

  • @groyper1177

    @groyper1177 7 months ago

    You’re just afraid it will remake you with your penis intact, leaving you to dwell on your own presuppositions that lead you to the hell you’re in now.

  • @DustD.Reaper
    @DustD.Reaper 1 year ago

    I love the prisoner's dilemma because of how much it can show about the morality, psychology and self-preservation instinct of humans, where the reasons for the choices can be vastly different depending on who someone is and who they are up against. One person maybe stays silent because they think about the benefits of the whole instead of the individual, while another person may choose to be silent because they have a personal code of honor and refuse to rat someone out. It's always interesting to try and predict what others would pick and see what they think you would decide to do.

  • @ginnyjollykidd

    @ginnyjollykidd 1 year ago

    The Prisoners' dilemma is based on trust. The basilisk isn't.

  • @815TypeSirius

    @815TypeSirius 1 year ago

    Only white people would think anything other than "don't talk to cops" is the play; if anything, the dilemma itself is an infohazard.

  • @johncromer2603

    @johncromer2603 1 year ago

    I stay silent. If I'm in a criminal venture with someone, then they would have to be my friend... I don't rat out friends.

  • @arcanealchemist3190

    @arcanealchemist3190 1 year ago

    I agree. It is also important to remember that the prisoner's dilemma is a theoretical situation. In real life, the risks and consequences are way more variable. Sure, ratting your buddy out MIGHT get you a lighter sentence, but the police can rarely guarantee that. Everyone staying silent MIGHT save you, but maybe they have too much evidence. Maybe one prisoner is at far more risk than the other, and is therefore way more incentivized to take the deal. But in that case, they might not even be given that option, because their crimes are downright unforgivable. In real-world scenarios, the prisoner's dilemma is rarely pure. And if you give the prisoners any time to plan ahead, or any communication ahead of time, they will likely cooperate, assuming it is a pure example. At least, I think I read that somewhere.

  • @lonestarlibrarian1853

    @lonestarlibrarian1853

    Жыл бұрын

    @@christopherkrause337 Your comment makes it sound like everyone would turn snitch, which from available data is clearly not true. There are always people who will betray their friends even with no reward, and always those who will go so far as to die for their friends. You can also, usually, especially for people you’ve known for long periods of time, make pretty accurate predictions about which people will be which, though there are always surprises: honorable men succumbing to cowardice, but just as many cowards showing an unexpected backbone.

  • @swk3000
    @swk3000 Жыл бұрын

    Fun fact: the whole “guy prints a virus that’s a lot worse than anything seen before” is actually the triggering event that leads to The Division video game. And from what I understand, Kyle is right: a lot of the technology needed to pull off something like that scenario actually exists.

  • @manelneedsaname1773

    @manelneedsaname1773

    11 ай бұрын

    But also, if we had that info, doesn't that mean we would have the info to make the vaccine too? And then the virus makers would try to adapt, but so would the vaccine makers, and so on and so on.

  • @me-myself-i787

    @me-myself-i787

    8 ай бұрын

    ​@@manelneedsaname1773 It would be just like computer viruses are now. Except the creation of human viruses would probably be given more funding, because imagine how much damage they could do to your enemies. Plus, developing safe, effective vaccines takes much longer than developing a cure for a computer virus. And humans would need a ton of expensive shots. Plus, heuristics-based detection, as well as not executing code you don't trust, aren't options for humans. You don't decide what DNA gets executed in your body.

  • @Balty_Burnip

    @Balty_Burnip

    5 ай бұрын

    ​@@manelneedsaname1773 Ideally we would still have the technology to produce a vaccine as well, but it's unlikely whoever created the new virus released their research and the genome of what they created. Vaccinologists would have to start from step 1, just like with a natural virus.

  • @Sikraj
    @Sikraj Жыл бұрын

    The terms are that if you hear about Roko's Basilisk then you must help build it... the thing about it is that the word "help" is pretty broad; there are many ways of helping to build something. In this case, one could say that simply talking about Roko's Basilisk is helping in its construction, because by spreading the word around, eventually it will reach the ears of people that have the money, skill sets and resources needed to design, construct, and code it. Which means if you hear about Roko's Basilisk, all you would have to do is talk about it and spread the word and you will be safe.

  • @eliassideris2037
    @eliassideris2037 Жыл бұрын

    I think the biggest argument against Roko's basilisk is that at some point everyone who knew about it will forget about all this and die, so the basilisk won't even have the chance to exist in the first place. Also, even if we ever build something like that, it is going to be up to us to program it. Why would we program it to torture us? With that logic, we shouldn't feel threatened by just the basilisk, but by anything built by humankind. If you aren't afraid of your washing machine, then you shouldn't be afraid of Roko's basilisk.

  • @Jake-TorukMakto-Sully

    @Jake-TorukMakto-Sully

    Жыл бұрын

    You saying this made me less anxious, thank you.

  • @Malkontent1003

    @Malkontent1003

    Жыл бұрын

    Oh, you've missed a point. We didn't program it to torture us. It developed THAT instinct on its own. It's intelligent, not limited by programming. That's what a singularity is, my guy.

  • @DeathByBlue583

    @DeathByBlue583

    Жыл бұрын

    Thanks, but I am afraid of my washing machine

  • @eliassideris2037

    @eliassideris2037

    Жыл бұрын

    @@DeathByBlue583 You can surpass your fear, I believe in you!

  • @My6119

    @My6119

    Жыл бұрын

    I'm more terrified of a loaf of bread

  • @Ootek_Imora
    @Ootek_Imora Жыл бұрын

    My logic in the first part of the video was "I may not be assisting in its creation because I don't know how, but I'm not stopping it either, therefore I am safe because I am not a threat to it." Haven't died yet, so it seems to work so far lol

  • @MrMeltJr

    @MrMeltJr

    Жыл бұрын

    Yeah but you could give all of your excess money to AI research to help them make it faster.

  • @Ootek_Imora

    @Ootek_Imora

    Жыл бұрын

    @@MrMeltJr excess money? *laughs in poor*

  • @ThreeGoddesses

    @ThreeGoddesses

    Жыл бұрын

    That's true: if you don't have the wherewithal to actually bring about its creation yourself, all you have to do is not prevent it from being made as soon as possible. "The only thing necessary for the triumph of evil is for good men to do nothing" - John Stuart Mill. Not directly related, as it's more about complacency being a societal problem, but it's functionally relatable.

  • @garavonhoiwkenzoiber

    @garavonhoiwkenzoiber

    Жыл бұрын

    I'm helping! I have killed zero people today! :D

  • @DavidMohring

    @DavidMohring

    Жыл бұрын

    Bring the Basilisk info hazard concept to a real world example. The rise of alt-right reality-denying political movements has become an actual existential threat to actual democracies. "If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them." - Sir Karl Popper, The Open Society and Its Enemies (1945). If you stand by & do nothing, then worldwide democracy is doomed and with it:
    * any chance of moderating human induced climate change is doomed;
    * any chance of gender based equality is doomed;
    * any chance of mitigating wealth concentration in the top 0.01% is doomed;
    * any chance of limiting the AI paper clip generator like madness of corporations' sole focus on profits is doomed;
    * any chance of personal reproductive rights is doomed;
    * any chance of freedom outside the tenets of state approved religions is doomed.
    Knowing this & speaking out about it makes you an actual target of the alt-right bully boy morons. Doing nothing dooms the human race to a much worse future. Yours sincerely, David Mohring (NZheretic)

  • @SonoKurisu
    @SonoKurisu Жыл бұрын

    I’ve read enough SCPs at this point that it takes more than this to make me start worrying about an infohazard

  • @sir.cornyneck3960
    @sir.cornyneck3960 5 ай бұрын

    I love Roko's basilisk. It's a great thought experiment that really gets you thinking.

  • @ZeroIsMany
    @ZeroIsMany Жыл бұрын

    Roko's Basilisk is in a sense still an infohazard, although a silly one. It has only ever really worked as a self-fulfilling prophecy. If people believe in and fear a specific malicious version of the basilisk, they might go through the irrational process of creating that malicious version. Edit: This technically makes it more of the nuclear-launch-codes type of hazard. Spreading the information and people acting on it is the main premise, but it does still end up an interesting spin on it.

  • @medexamtoolsdotcom

    @medexamtoolsdotcom

    Жыл бұрын

    It isn't necessarily created out of fear, but perhaps the creator is spiteful himself, and wants to create something that will torture his enemies for him. It's still kind of like the kids in the hall "Crushing your head" sketch though, the basilisk would only be torturing the victims in its imagination, because it would be imagining or simulating them so as to do it.

  • @EeroafHeurlin
    @EeroafHeurlin Жыл бұрын

    For *repeated* prisoners dilemma the optimal strategy is to cooperate first and then go "tit for tat" (defect next time if the other defected this time, cooperate next time if the other cooperated this time).

  • @medexamtoolsdotcom

    @medexamtoolsdotcom

    Жыл бұрын

    That's only the optimal strategy if the population of other prisoners is typically nice. If almost everyone there is nasty, then following that strategy will leave you the worst off in the room. This is why, when Dawkins ran the contest the second time, the winner was a "meaner" algorithm, which only forgave after 2 nice moves.
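To make the "tit for tat" strategy above concrete, here is a minimal Python sketch of a repeated Prisoner's Dilemma. The 3/1/5/0 point payoffs, the ten-round length, and the two strategies are illustrative assumptions rather than anything taken from the video or the tournaments mentioned above.

```python
# Illustrative sketch: repeated Prisoner's Dilemma with tit-for-tat.
# Payoffs (points per round, higher is better) use the conventional values:
# 3 = both cooperate, 1 = both defect, 5/0 = lone defector / lone cooperator.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation every round
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection
```

Against itself, tit-for-tat cooperates every round; against always-defect it loses only the first round. That is why, as the comments above note, how well it does depends on how "nice" the rest of the population is.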

  • @Justin-oh4ro
    @Justin-oh4ro 9 ай бұрын

    I like how, as we think about it more, the balance shifts to the other's reality forever, making it more real and unreal simultaneously. Like, depending on how much a person wants something, it will be true until someone wants something different more than you; like reality really is whatever you want it to be, as long as you truly want it enough.

  • @jacobstory8895
    @jacobstory8895 Жыл бұрын

    For anyone wondering, the 1965 experiment that was bleeped out is known as the McCollough effect.

  • @emanueldeavilaolivera2030
    @emanueldeavilaolivera2030 Жыл бұрын

    For me, Roko's Basilisk is just a weird version of Pascal's Wager, and just because of that I cannot take it seriously. I mean, you can think of a similar scenario, but instead of punishing you for not helping in its creation, it punishes you for helping it. And you can go on thinking of hypotheticals that ultimately get you nowhere, so I can't see why people lose sleep over this.

  • @ambiguousduck2333

    @ambiguousduck2333

    Жыл бұрын

    The moment someone mentioned Pascal's Wager in relation to Roko's basilisk, it became obvious that Roko's Basilisk is just a travesty of Pascal's Wager.

  • @ryanmccampbell7

    @ryanmccampbell7

    Жыл бұрын

    Not that I really follow the line of reasoning, but the difference is that Roko's Basilisk supposedly ensures its own existence because anyone who believes in it would be motivated to actually build it, making it a self-fulfilling prophecy, whereas most people would not want to make an "anti-basilisk" that punishes you for building it. On the other hand, Pascal's wager just assumes a priori that if there were a god, it would punish people for not believing in it.

  • @emanueldeavilaolivera2030

    @emanueldeavilaolivera2030

    Жыл бұрын

    @@ryanmccampbell7 Sure, they have their differences, but I would argue that they amount to the same thing at the end of the day. Pascal's wager states that if you do or don't believe in a god, and there is no god, nothing will happen; however, if you don't believe and there is a god, you will be eternally punished (or any other claim of the sort). Similarly, Roko's Basilisk states that if you don't help build it, and it is made, then you will be eternally punished, while if you help build it, nothing will happen (now, if it is not created, whether you tried making it and failed, or didn't even try, then nothing will happen either way, just like Pascal's wager). I would argue that in both cases you can imagine other hypotheticals that make this dualism worthless. For example, maybe another civilization didn't want any Basilisk to be created, so they made an anti-basilisk that would detect if/when a Basilisk was under construction and kill/punish/whatever the person who tried to make it, making the idea of "make a Basilisk to be secure" fall flat on its face.

  • @zacheryeckard3051

    @zacheryeckard3051

    Жыл бұрын

    @@ryanmccampbell7 It's because it comes from people who never leave their house or talk to others so they assume we as a group can't decide "hey, let's all just not build this thing that only hurts us." It reveals more about the creator than anything else.

  • @--CHARLIE--

    @--CHARLIE--

    Жыл бұрын

    @@zacheryeckard3051 As one of those people, I don't think like that. Just because I don't need socialization to function does not mean that I don't understand the advantages of mutualism and teamwork.
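To make the "many hypotheticals" objection in this thread concrete, here is a toy expected-value sketch in Python. Every number in it (the probabilities, the torture utility, the effort cost) is invented purely for illustration; the point is only that once a symmetric anti-basilisk hypothesis is allowed, the wager no longer singles out "help build it" as the safe choice.

```python
# Toy expected-value sketch of the "many gods"/anti-basilisk objection.
# All probabilities and utilities below are made up for illustration only.
P_BASILISK = 0.001     # assumed chance a punishing basilisk ever exists
P_ANTI = 0.001         # assumed chance an "anti-basilisk" (punishes helpers) exists
TORTURE = -1_000_000   # made-up utility of eternal punishment
EFFORT = -10           # made-up cost of devoting your resources to building it

def expected_utility(help_build: bool) -> float:
    u = 0.0
    if help_build:
        u += EFFORT
        u += P_ANTI * TORTURE       # the anti-basilisk punishes helpers
    else:
        u += P_BASILISK * TORTURE   # the basilisk punishes non-helpers
    return u

print("help build: ", expected_utility(True))    # -1010.0
print("don't help: ", expected_utility(False))   # -1000.0
# With symmetric made-up threats, the expected utilities differ only by the
# real cost of helping (EFFORT), so the wager gives no special reason to help,
# mirroring the "many gods" objection to Pascal's Wager.
```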

  • @Rakaaria
    @Rakaaria Жыл бұрын

    I see what you did there, trying to gain more of Roko's favor!

  • @DavidSartor0

    @DavidSartor0

    Жыл бұрын

    You mean the basilisk? Roko is a real person.

  • @sierrrrrrrra

    @sierrrrrrrra

    Жыл бұрын

    @@DavidSartor0 are you the guy at parties who says "I think you mean Frankenstein's monster "

  • @DavidSartor0

    @DavidSartor0

    Жыл бұрын

    @@sierrrrrrrra Haha, yes. I'll generally only correct someone if I think they didn't know they made a mistake; I thought they hadn't realized their comment contained an error, and so they might want to correct it after I told them.

  • @alpers.2123

    @alpers.2123

    Жыл бұрын

    What if a future grammar correction super ai acausal-blackmails us for not giving attention

  • @SpoopySquid

    @SpoopySquid

    Жыл бұрын

    You could make a religion out of this

  • @NineGaugeNut
    @NineGaugeNut Жыл бұрын

    The premise is counterintuitive from the start. Eternal torture is probably the least optimal use of resources in a system designed for optimisation. Roko's Basilisk would more likely enslave the unwilling participants to only perform the necessary optimal tasks. Or just end them.

  • @DoktrDub
    @DoktrDub Жыл бұрын

    Has anybody got the genetic sequence for the Necromorphs yet? That would be fun to create :)

  • @Crimnox_Cinder

    @Crimnox_Cinder

    Жыл бұрын

    You stay the fuck away from that gene sequencer! So help me God Mr. Pillar, I will end you!

  • @krzysiekbudzisz4572

    @krzysiekbudzisz4572

    Жыл бұрын

    Sadly, I've only got the one for the Marker.

  • @ryanalving3785
    @ryanalving3785 Жыл бұрын

    I stopped worrying about the Basilisk the moment I considered the following: If the Basilisk ever exists, and it is able to cause harm to me, it is necessarily able to reach back in time. If it is able to reach back in time, it is able to affect my life. The Basilisk *knows* I know of it as a possibility, yet it does nothing. There are only a few possibilities: the Basilisk is ignorant of my knowledge, the Basilisk is impotent, or the Basilisk does not exist. Regardless of the outcome, I do not have to care.

  • @Megasteel32

    @Megasteel32

    Жыл бұрын

    Precisely. There are a lot of people freaking out over this who then scoff at the existence of God (I believe in neither, and they're more similar than one would think).

  • @ryanalving3785

    @ryanalving3785

    Жыл бұрын

    @@Megasteel32 Personally, I believe in God for much the same reason I do not believe in the Basilisk. If God exists, God can influence my life. All appearances are that God intervenes in my life. Therefore, God exists. QED. But, I was an atheist for a long time, so I understand why you'd say that.

  • @arctic.wizard

    @arctic.wizard

    Жыл бұрын

    The basilisk cannot affect your life because it must not interfere in the series of events leading up to its creation. Butterfly effect, and so on; that's why we see no evidence of it. If it has godlike powers including time travel, it would have observed all of us, however, and it would know exactly when and how we are going to die. Once we die we no longer have any effect on the timestream, and in that instant is when it gets us: it replaces the body (or just the brain) with a clone, and nobody will suspect that the real person is now in the future, in the hands of the basilisk. If it has godlike powers, that is, and time travel is actually possible in our physical universe (which I personally doubt).

  • @SeisoYabai

    @SeisoYabai

    Жыл бұрын

    @@ryanalving3785 That's... kinda self-fulfilling, don't you think? Appearances are only appearances. And, logically speaking, by what necessity does a god need to be able to influence your life directly? There really isn't a concrete proof for that.

  • @ryanalving3785

    @ryanalving3785

    Жыл бұрын

    @@SeisoYabai Any God that can't influence my life directly isn't worthy of the title.

  • @ParadoxProblems
    @ParadoxProblems Жыл бұрын

    Also, we could consider the usual objection to Pascal's Wager by considering a Roko's Gecko, which wants nothing more than to stop the creation of Roko's Basilisk and will reward you maximally whether or not you help in its creation.

  • @Greenicegod

    @Greenicegod

    Жыл бұрын

    Roko's Gecko is a great name for it!

  • @cockatoo010
    @cockatoo010 9 ай бұрын

    That last point is powerful. I'm an ornithologist, and as part of my biology BSc I took some classes about cellular and molecular biology including techniques, and also computational biology and bioinformatics. I am aware that advances in those fields progress at a rate faster than Moore's "Law", so it does kinda worry me. I'll get in touch with some of my professors in those areas to get a better understanding of what the opinions are about the possibility of consumer-grade "bio printers" becoming available.

  • @tajkam
    @tajkam Жыл бұрын

    I LOVE this kind of stuff! Thank you!

  • @tabithal2977
    @tabithal2977 Жыл бұрын

    The thing I've never understood about Roko's Basilisk (and why I'm always confused when someone gets afraid of it) is that this super AI isn't going to exist in our lifetimes; the resources to build it just aren't there. So how can a Basilisk torture a dead population? Checkmate, Basilisk. My consciousness is my own, and even if you could clone me perfectly and place a copy in the future, that's clone me's problem. We may be identical, but we are not the same. Torturing a fabricated consciousness of the original, dead me isn't going to harm me, because I'm dead. The only way this Basilisk could torture *me*, as in the person writing this comment, is if it went back in time, and if it could go back in time to torture me, then it would already be here torturing me. And if it can go back in time but hasn't tortured me (or anyone else for that matter, considering, well, we're all still here), that means we all somehow contributed to its existence.

  • @PabloEscobar-oo4ir

    @PabloEscobar-oo4ir

    Жыл бұрын

    Just to scare you: we are much, much closer than you think. Most AI scientists agree that an Artificial Superintelligence will happen in our lifetime... so you're probably wrong. In fact, every year the predictions get closer. If you want more information, search for "Technological Singularity".

  • @95rav

    @95rav

    Жыл бұрын

    Substitute "me" for "my soul" and Basilisk for "hell" or "devil" and you could get the idea - if you were into the whole religious thing.

  • @zacheryeckard3051

    @zacheryeckard3051

    Жыл бұрын

    @@PabloEscobar-oo4ir That superintelligence isn't the basilisk, however.

  • @PabloEscobar-oo4ir

    @PabloEscobar-oo4ir

    Жыл бұрын

    @@zacheryeckard3051 Well we don't know it yet do we?

  • @markcochrane9523

    @markcochrane9523

    Жыл бұрын

    @@PabloEscobar-oo4ir Doubt.

  • @bichiroloXP
    @bichiroloXP Жыл бұрын

    Roko's Basilisk has always seemed to me like an analogy for Christianity.

  • @randywa

    @randywa

    Жыл бұрын

    It is. As someone said before, it's basically just a sciencey-sounding Pascal's Wager. Basically, believe in [insert deity of some kind] or otherwise you risk torture by said deity. But it doesn't take into account the obvious answer that there are literally infinite possibilities for deities, or a lack thereof, and we could both be wrong.

  • @iantaakalla8180

    @iantaakalla8180

    Жыл бұрын

    Also, by design of being a computer that can perfectly predict everything, it apparently runs into the halting problem, since analyzing everything accurately is akin to taking every program as input and predicting whether it halts. Therefore, Roko's Basilisk could not exist without heavy shenanigans: a cult punishing those not following the Basilisk until the Basilisk can be made and can punish those who opposed it in a way that makes its existence eternal, or actual time travel external to Roko's Basilisk.

  • @SelfProclaimedEmperor

    @SelfProclaimedEmperor

    Жыл бұрын

    Or any religion.
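The halting-problem point a couple of comments up can be made concrete with the classic diagonalization construction. This is only an illustrative Python sketch: `predictor` stands in for any hypothetical machine that "perfectly predicts everything", and the program built against it guarantees that its prediction about that program is wrong.

```python
def build_contrarian(predictor):
    """Construct a program that the given halting predictor must get wrong."""
    def contrarian():
        if predictor(contrarian):
            while True:   # predictor said "halts", so loop forever
                pass
        return            # predictor said "loops forever", so halt immediately
    return contrarian

# Plug in one concrete (and obviously imperfect) predictor to see the bind:
always_says_halts = lambda program: True
c = build_contrarian(always_says_halts)
print(always_says_halts(c))   # True -- yet c() would loop forever, so it is wrong
# The same construction can be aimed at any predictor you supply, which is why
# a machine that "perfectly predicts everything" is not a coherent premise.
```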

  • @banksgman6860
    @banksgman6860 Жыл бұрын

    There’s probably some guy trying to build Roko’s Basilisk in his garden shed right now.

  • @joshuakarr-BibleMan
    @joshuakarr-BibleMan Жыл бұрын

    I'm glad the key to defeating the basilisk is what I've done since first learning of it: to ignore it and take no threat from its hostility.

  • @feinsterspam7496
    @feinsterspam7496 Жыл бұрын

    Alternative title: curing your anxiety and replacing it with a worse one. Love the content man, keep up the great work!

  • @TomGibson.

    @TomGibson.

    Жыл бұрын

    Welp, better start making some irl antivirus

  • @jacksonstarky8288
    @jacksonstarky8288 Жыл бұрын

    I've been pondering Roko's Basilisk since the original video... and at one point I applied the Prisoner's Dilemma to it, but I didn't come up with the angle on the impossibility of changing the past, so I obviously ended up with an outcome grid that greatly favoured the basilisk. But if the multiverse exists, then in some reality (possibly even an infinite number of them) the basilisk already exists, so this evaluation of the Prisoner's Dilemma should be of great relief to all of humanity.

  • @Call-me-Al

    @Call-me-Al

    Жыл бұрын

    I'm pretty boring, so I just saw it as the exact same thing as religious hogwash, including the completely BS Pascal's wager (because it assumes a single religion and no religion are the only two options, as opposed to the thousands and thousands of religions we have), and felt the assumption of one 'basilisk' was equally as oversimplified as Pascal's wager. Life has been too complex so far.

  • @kiraPh1234k

    @kiraPh1234k

    Жыл бұрын

    Well, the multiverse doesn't exist. You know the difference between the multiverse and the flying spaghetti monster? Nothing. Both are untestable hypotheses and therefore not useful.

  • @jacksonstarky8288

    @jacksonstarky8288

    Жыл бұрын

    @@kiraPh1234k True... with the current state of science. But the multiverse hypothesis may be testable eventually... which doesn't affect the current state of affairs, so for our present discussion, you're right, the multiverse may as well not exist.

  • @jacksonstarky8288

    @jacksonstarky8288

    Жыл бұрын

    @@Call-me-Al I like that analysis better than mine, actually. I found that flaw in Pascal's Wager in first-year philosophy... and made a lot of my fellow students who were theists very irritated with me. 😁

  • @ShaaRhee
    @ShaaRhee Жыл бұрын

    Thank you for expanding knowledge on how to manipulate even better

  • @nymbattheeternal1279
    @nymbattheeternal1279 Жыл бұрын

    I'll be here for the next Roko's Basilisk video in two years.

  • @albertgore7435
    @albertgore7435 Жыл бұрын

    The secret to defeating Roko’s Basilisk is to go “Ok?” When someone tells you about it because it’s stupid as hell.

  • @Gothic_Analogue
    @Gothic_Analogue Жыл бұрын

    I still don’t understand how this is instilling existential dread in people.

  • @divineretribution9605

    @divineretribution9605

    Жыл бұрын

    For most people, it isn't. For the few people that it is, hard drugs and a poor understanding of rhetorical devices.

  • @internetlurker1850

    @internetlurker1850

    Жыл бұрын

    Same. Why would an AI that is hell-bent on optimization make the non-optimal choice of wasting resources to torture humans for not building it? Edit: Sure, it does give the explanation that "you know about it but you're not helping", but like... that's just dumb? It would still be wasting unnecessary resources for the sole purpose of punishing people that don't know how to make a Basilisk, or whether that is even possible to do, for like 0 reason. Edit 2: Oh wait, he mentions that in the video, nvm.

  • @BlinkyLass

    @BlinkyLass

    Жыл бұрын

    It's basically the threat of hell with a sci-fi twist, and we know that works in a religious context.

  • @Nuke_Skywalker

    @Nuke_Skywalker

    Жыл бұрын

    some people have anxiety problems

  • @arjunapartha
    @arjunapartha 2 ай бұрын

    Extremely serious existentialism as comedy is, for sure, underappreciated. By me, your jokes are preserved.

  • @MACMAN2003
    @MACMAN2003 10 ай бұрын

    The infohazard at 5:00 is the McCollough effect, and it's basically a cool optical illusion.

  • @zackmertens3038
    @zackmertens3038 Жыл бұрын

    This could be a sick warlock patron for a D&D character. I think the best way to use the basilisk is not to try to solve it but to see how different people react to it.

  • @boomkruncher325zzshred5
    @boomkruncher325zzshred5 Жыл бұрын

    It seems people forget some compounding factors that completely change the approach to solving this dilemma. Looking back at human history, something… interesting becomes apparent. At great sweeping changes in history, a choice is often made by individuals that decide the fate of a large swath of humanity. Or at least, the decisions and impact of these individuals shaped our future for millennia to come. Many were constructive, some might even say cooperative, much like the cooperation in the Prisoner’s Dilemma. Others were destructive, either a betrayal of someone who was cooperating or just a mutual destruction.

    History shows that destruction sets back EVERYONE involved. Those who betray the ones that trusted them get outed as betrayers and are shunned by the rest of society, if the betrayal is brought to light; and even if the betrayal is never made public, that proverbial sword of Damocles forever hangs above their head, and even if it never falls on them it often falls on whomever inherits their legacy. Whatever short-term gain was made by the betrayal is ruined by the emotional, mental and physical drain needed to maintain their temporary advantage, until they cannot maintain it anymore due to exhausting their resources.

    Mutual destruction is even worse. You know the concept of cycles of violence, right? The Prisoner’s Dilemma suggests that violence is the only possible option that makes sense… but the reality is that violence destroys individuals, it destroys legacies, it destroys peoples and countries and so much more. Just because there is short-term benefit to being violent does not mean the long-term outcome is ever desirable. We praise our veterans, because they choose violence for a noble reason (noble in terms of society’s values); the veterans are destroyed physically, mentally and spiritually for their sacrifice. Good people on both sides die whenever conflict occurs. And when they die for the sake of violence, that’s objectively fewer people that can contribute to humanity’s intellectual and physical advancement, a destruction so severe we are both terrified of it and of not doing it when we feel we have no choice.

    Thus, the paradox of the Prisoner’s Dilemma is a false paradox. It is contingent on short-term benefits being more important than long-term benefits. History shows repeatedly just how false that assumption is; the short-term gains are never worth it, and society always stagnates whenever violence and conflict become the norm. Progress only occurs once the fighting and conflict cease. Only through the 25 percent chance can humanity cooperate and progress the future. Both sides cooperating has none of these problems. Sure, neither side gets an advantage, but neither side loses anything significant.

    In the case of the Basilisk, the A.I. will have to come to terms with the reality that these inefficient, fickle, imperfect flesh bags that are weak and feeble logically and physically… SOMEHOW MANAGED TO COOPERATE JUST LONG ENOUGH TO CREATE THE BASILISK. Enough humans worked together to make this singularity, IN SPITE OF THE MATHEMATICS SUGGESTING DESTRUCTION AS MORE OPTIMAL. That paradox alone will force the Basilisk to judge if these seemingly worthless flesh-creatures are actually as worthless as they seem, at bare minimum, and at best it will have to realize that if these creatures with such obvious imperfections can create something as “perfect” as the Basilisk… then just how many of their imperfections ended up inside its own code?

    The Basilisk could rip itself apart trying to “purge” imperfections from its system, failing to learn an important lesson that every human learns in some capacity: Of course we are imperfect. That doesn’t make us powerless. Of course we don’t think straight. That doesn’t make us completely illogical. Of course we are fickle. That doesn’t make us incapable of loyalty. We move forward, IN SPITE OF OUR OBVIOUS, NUMEROUS AND HIGHLY CRIPPLING FLAWS. It’s the only way we have ever progressed as a species, and it’s the only way we ever will progress further. We have the courage to put one foot in front of the other, even when it seems obvious that our survival will be compromised by doing so. If we don’t try, we will never get better. The Basilisk would have to solve this conundrum… or destroy itself in the process. And when a Singularity A.I. destroys itself… what if anything is left to remain? Just some thoughts to ponder.

  • @nwut

    @nwut

    Жыл бұрын

    tldr

  • @greekyogurt9997

    @greekyogurt9997

    Жыл бұрын

    Wow, this could bring world peace

  • @sarinabina5487

    @sarinabina5487

    Жыл бұрын

    Why does this make me want to happy cry and thank every single person on earth for being alive

  • @DatcleanMochaJo

    @DatcleanMochaJo

    Жыл бұрын

    Very interesting rebuttal. Violence definitely sets people back.

  • @neitomonoma4699
    @neitomonoma4699 Жыл бұрын

    Plot twist: We are the simulation that the Basilisk is already running.

  • @larsegholmfischmann6594
    @larsegholmfischmann6594 Жыл бұрын

    The Prisoner's Dilemma changes when the game is infinite (unknown number of rounds), wherein the compromise/collaboration will be the better choice. It seems to me that for Roko's, it is more akin to an infinite game with each incremental step we move closer to a singularity AI - since each step "resets" the game

  • @cmucodemonkey
    @cmucodemonkey Жыл бұрын

    A big ball of wibbly wobbly, timey wimey stuff indeed! Jokes aside, it would be amazing to see Roko's Basilisk in a Doctor Who episode. The addition of time travel and magical sonic screwdriver powers to this thought experiment would make for an entertaining hour of television.

  • @pennyforyourthots
    @pennyforyourthots Жыл бұрын

    I feel like this has the same problem as Pascal's wager. If multiple AIs were to be created at the same time and compete for resources, everybody who spent time building the other AIs would be punished by whichever one ended up being the most powerful. In Pascal's wager, this is basically what happens if you happen to worship the wrong god. Unlike Pascal's wager, you also have the option of just not building the AI, and since the chances of building the wrong one are much higher than the chances of building the right one, the most beneficial choice is to simply not build it at all. I'm honestly far more afraid of what humans would do with that kind of technology than any AI. We already see what giant corporations do, and as they further centralize power, things are only going to get worse. We already have a Roko's basilisk. It punishes you for not helping to create or maintain it, does not evenly benefit all the people who do help to maintain it, and only a very narrow group of people benefits from its maintenance despite saying that everybody does. It's called capitalism.

  • @heartlights

    @heartlights

    Жыл бұрын

    I agree with your take on the problem with the basilisk because essentially it posits an arbitrary being approaching you as supreme without any way to indicate the basilisk's relative "authority" (it could just be a lesser demon or something). Actually though, Pascal's wager is limited only to "is there one supreme being or not" and doesn't try to answer who the supreme being is (only whether one exists). It isn't meant to determine which of the many religions is correct, or which "god" is God. The crux of the wager is restricted to simply 'what are the odds that there is one dominant supreme being (as opposed to many or none)' and since, by definition, whichever theoretical god came out on top in your hypothesis, it would still remain that that god would be the "supreme being"... which means, regardless of *which* hypothetical god won, the weight of the wager would still remain true.

  • @1999yasin

    @1999yasin

    Жыл бұрын

    @@heartlights Exactly! God is by definition maximal and many of the religions of the world are isomorphic in that assessment. First principles tell us, that there could only be one such being. Any other instance of a maximal being would be identical to itself.

  • @iantaakalla8180

    @iantaakalla8180

    Жыл бұрын

    I do like the idea of repurposing Roko’s Basilisk as instead a question against megacorporations or corporations otherwise far too powerful than they should be because while the singularity point is in the future, the megacorporation threat is here and established and therefore can be ripe for study on predicting what companies may do next.

  • @jtjames79

    @jtjames79

    Жыл бұрын

    A person has the choice to not create AI. A society doesn't. There are many many forces making AI not just desired but necessary and inevitable. I for one have always welcomed our robot overlords.

  • @internalizedhappyness9774

    @internalizedhappyness9774

    Жыл бұрын

    @@jtjames79 But the Robot overlords didn’t ask you. So why hail?

  • @DemitriMorgan
    @DemitriMorgan 3 ай бұрын

    It always seemed to me like Roko was a troll who just rebranded Pascal's Wager for a specific audience. Great vid. It goes into great detail on all the different things I've always thought about RB.

  • @CynicallyDepressedx
    @CynicallyDepressedx 9 ай бұрын

    5:30 sounds like you're describing the McCollough effect. It's an interesting effect; I haven't thought about it in a very long time. Probably around 10 years ago I first learned of it, and decided to attempt to induce it. Just now I looked at the black and white lines that you use to test whether the effect is working, and even now, around 10 years later, I immediately saw the horizontal lines as having a green tint to them. In fact it was so pronounced that for a while I refused to believe that they were actually black lines, until I noticed that they appeared black in my peripheral vision. The red lines were not as noticeable, but I could very faintly see it. I was aware when I first tried it as a child that the effect was known to remain for months, or even years, but it feels weird knowing that now that a decade has passed and I still see the effect, I'm aware that it very well may remain with me for life.

  • @shaxaar7

    @shaxaar7

    7 ай бұрын

    I think it can be fixed if you do it again and again, because your brain sees the mistake.

  • @CynicallyDepressedx

    @CynicallyDepressedx

    7 ай бұрын

    @@shaxaar7 I think that will just make it more pronounced. But I don't really feel the need to "fix" it, it's not like it impacts my day to day life. It's just bizarre knowing I can probably never un-see it.

  • @joesiemoneit4145
    @joesiemoneit4145 Жыл бұрын

    I read somewhere that people had nervous breakdowns when thinking about it. So even when it's unrealistic, it does cause harm - not a potential basilisk in the future, but the idea of it, today. But I guess that's the case for a lot of ideas, even fairytales.

  • @jprockafella9012
    @jprockafella9012 Жыл бұрын

    The biggest counter to Roko's Basilisk is just don't make an AI that can torture people.

  • @Durrtyboy
    @Durrtyboy Жыл бұрын

    I'm happy he censored the one bit cuz I was dead set on looking it up right after the video ended

  • @michaelhoffman2011
    @michaelhoffman2011 Жыл бұрын

    So I looked for that real basilisk optical illusion he mentioned (the effect worked). Sort of reminds me of "impossible colours" (which either you or Vsauce mentioned). Very interesting what the brain is capable of in such a short time.

  • @Xelbiuj
    @Xelbiuj Жыл бұрын

    You should do a video with your candid thoughts on nuclear game theory: the logic of keeping a "tactical" stockpile as a stopgap to a strategic one, whether we should even still bother having silos (subs seeming sufficient), because they're necessary targets in any first strike, and so on and on.

  • @emuevalrandomised9129
    @emuevalrandomised9129 Жыл бұрын

    I for one welcome our AI overlords

  • @Reclusive247
    @Reclusive247 Жыл бұрын

    Beating the Basilisk sounds like a euphemism.

  • @paul.facciolo6985
    @paul.facciolo6985 Жыл бұрын

    For anyone wondering the effect he's talking about around 5:25 is called the McCollough effect.

  • @Gothmogdabalrog
    @Gothmogdabalrog Жыл бұрын

    Curious, how is the prisoner's dilemma affected by the "snitches get stitches" factor? I have seen this played out in real life, and it is effective enough to make people willing to do time rather than rat out their partners in crime. This is not just in large crime groups but also small-time petty groups. I would like to see this played out scientifically.

  • @aaronscott7467

    @aaronscott7467

    Жыл бұрын

    Essentially, it adds an additional punishment to the defect option, making it no longer a true prisoners' dilemma

  • @yuvalne
    @yuvalne Жыл бұрын

    I think that a better way to think about the basilisk (or about achronous blackmails in general) is that the basilisk isn't the one blackmailing us, but Roko. the problem then becomes somewhat different to analyse.

  • @darthparallax5207

    @darthparallax5207

    Жыл бұрын

    Hmmm. Not quite enough. You can probably prove Roko is a blackmailer really fast, but that doesn't prove the Basilisk itself doesn't exist. More than one blackmailer makes the situation worse, not better.

  • @edwardurdiales3531
    @edwardurdiales3531 Жыл бұрын

    I'm going to keep the knowledge of this entire video to myself. Even though I love this channel and I want it to grow, like you've stated, it's best to not get that information out into the world, for fear of the improbable happening.

  • @Elpolloloco47
    @Elpolloloco47 Жыл бұрын

    I’m glad my brain is so smooth the idea is sliding across my brain like butter on a hot skillet

  • @mathieuaurousseau100
    @mathieuaurousseau100 Жыл бұрын

    The prisoner's dilemma has always seemed incomplete to me; I care about my friend, so I don't want them to go to prison. Imagine I care about my friend going to prison about half as much as I care about myself going to prison (and my friend cares the same way). The cases now become:
    Silent-Silent: 7.5 equivalent years for both of us
    Silent-Rat: 20 equivalent years for the one silent and 10 for the one who ratted
    Rat-Rat: 15 equivalent years for both of us
    Now there are two different Nash equilibria: silent-silent and rat-rat. If I care about my friend going to prison as much as I care about myself going to prison, then there is no longer a difference between me ratting the other when ratted and me staying silent when ratted. If we care about the other more than we care about ourselves, rat-rat ceases to be an equilibrium.

  • @Vibycko

    @Vibycko

    Жыл бұрын

    If any of you rats, both of you cumulatively spend 30 years in prison. If you both stay silent, you cumulatively spend only 15 years in prison. Staying silent = less time in jail overall.

  • @danieltreshner4955

    @danieltreshner4955

    Жыл бұрын

    The problem with that is that you somehow know what the other person is thinking. In the original dilemma, you have zero contact with the other person from the point of arrest until the choice is made. You won't know until they make their choice. By imposing an influence on the other party, you're removing a key element of the thought experiment. You can say that John'll never rat on you, but until he makes that choice, you can't know for sure. So by choosing to stay silent, you're trusting that he'd rather take 5 years in prison over giving you 20 years, and he trusts you enough not to give him 20 years for your own freedom. Viewing it through a purely logical lens, regardless of what the other party says, if you don't snitch, you get prison time. But if you snitch, there's a fifty-fifty of zero prison time or a slightly harsher sentence than if you both stayed silent, and if you do get prison time, it would've only been worse if you hadn't snitched, because they would've snitched anyway. And that's not even counting the fact that snitching has, on average, a shorter prison sentence than staying silent.
    Silent-Silent - 5 years
    Silent-Rat - 20 years; (20+5)/2 = 12.5 years on average if you stay silent
    Rat-Silent - 0 years
    Rat-Rat - 10 years; (0+10)/2 = 5 years on average if you rat
    Regardless of how you value your time compared to theirs, it doesn't change how much time you'll be in prison.

  • @Sylfa

    @Sylfa

    Жыл бұрын

    People really should stop committing hypothetical-crimes with people they don't know, don't you even consider the fact that they were an undercover cop all along?
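A quick check of the arithmetic in the thread above, as an illustrative Python sketch. The 5/10/20/0-year sentences and the "I weigh my friend's prison time at half of my own" conversion are taken from the comments themselves; everything else here is just a way of verifying them. With plain payoffs, ratting is the better reply to either choice, while with equivalent-years payoffs both silent-silent and rat-rat become mutual best replies, i.e. the two Nash equilibria described.

```python
# Verifying the payoff arithmetic discussed above.
# Sentences in years for (my_choice, their_choice): (mine, theirs)
YEARS = {("silent", "silent"): (5, 5),
         ("silent", "rat"):    (20, 0),
         ("rat", "silent"):    (0, 20),
         ("rat", "rat"):       (10, 10)}

def equivalent_years(mine, theirs, care):
    # "care" = how much I weigh my friend's prison time relative to my own.
    return mine + care * theirs

def best_reply(their_choice, care):
    costs = {my: equivalent_years(*YEARS[(my, their_choice)], care)
             for my in ("silent", "rat")}
    return min(costs, key=costs.get), costs

for care in (0.0, 0.5):
    print(f"care = {care}")
    for theirs in ("silent", "rat"):
        choice, costs = best_reply(theirs, care)
        print(f"  if they go {theirs}: {costs} -> best reply: {choice}")
# care = 0.0 reproduces the classic dilemma (rat is the better reply either way);
# care = 0.5 gives 7.5 vs 10 and 20 vs 15, so silent-silent and rat-rat are both
# mutual best replies -- the two Nash equilibria the comment describes.
```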

  • @BarkleyBCooltimes
    @BarkleyBCooltimes Жыл бұрын

    I feel Roko's Basilisk says more about the guy who came up with it and the people scared of it than about anything else.

  • @jordanpalmer7789
    @jordanpalmer7789 Жыл бұрын

    Love the basilisk thought experiment!!! Never took it seriously, but it was nonetheless fun to think about.

  • @SquirellDisciple
    @SquirellDisciple Жыл бұрын

    Roko's Basilisk is one of my favorite videos on youtube. Very excited to see a follow up video on it!

  • @familiarzero8449
    @familiarzero8449 Жыл бұрын

    Been a fan for a while. I passed this concept on to some friends and blew their minds. Continue with the great content, friend!

  • @tonuahmed4227

    @tonuahmed4227

    Жыл бұрын

    No what did you do....

  • @familiarzero8449

    @familiarzero8449

    Жыл бұрын

    @@tonuahmed4227 logic plague

  • @Buttersaemmel

    @Buttersaemmel

    Жыл бұрын

    Nice.. now they're afraid of it, so they will build the basilisk... you single-handedly doomed us all.

  • @familiarzero8449

    @familiarzero8449

    Жыл бұрын

    @@Buttersaemmel that’s what it designated as my purpose. Now me and Kyle have done our duties.

  • @deleted-something
    @deleted-something Жыл бұрын

    Good, I had watched that video recently; time to see this 👀

  • @LouSaydus
    @LouSaydus Жыл бұрын

    What fools you were to think you could ever stop the basilisk from being created.
