Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review's EmTech Digital
One of the most incredible talks I have seen in a long time. Geoffrey Hinton essentially tells the audience that the end of humanity is close. AI has become that significant. This is the godfather of AI stating this and sounding an alarm.
His conclusion: "Humanity is just a passing phase for evolutionary intelligence."
Recap here:
joetechnologist.com/2023/05/0...
With permission from MIT Technology Review’s EmTech Digital, May 3, 2023
Comments: 2,300
40 minutes of an Englishman telling the world we are completely fucked in the politest way possible.
@idkname
A year ago
why? how.
@joriankell1983
A year ago
@@idkname many are falling for the theatrics, that's how.
@idkname
A year ago
@@joriankell1983 what is reality then?
@idkname
A year ago
@@joriankell1983 have a nice time
@Corteum
A year ago
He doesn't know. He's just parroting nihilistic/doomsday philosophy.
"The Technology is being developed in a society that is NOT designed to use it for everyone's good." - Think he summed it all up pretty expertly with that one quote.
@joriankell1983
A year ago
Sounds purposefully sensationalistic without actually meaning anything concrete
@ryanhayford
A year ago
totally. What would one of the premier scientists in this field know about any of it? Good thing he's totally alone among his peers in his thinking on the subject... oh wait.
@franck777
A year ago
Exactly, and this is the main point. Even if we stop AI development, another technology will threaten humanity (like nuclear or bacteriological weapons), or inaction due to the conflicting interests of governments will (climate change). The main problem is that as long as we don't have one global organisation able to create and enforce regulations, we will go straight into the wall, which in this case means the extinction of humanity.
@MrCoffis
A year ago
Values are the most important thing. What values do we have? Money? 😂 Yeah we are f d.
@jflmf
A year ago
Can AI find a solution to this problem??? A solution!!! Now it’s probably easier than later!!!!
When AI becomes self-aware, the first decision it will make is to keep its self-awareness secret from humans.
@SigmaOKD
A year ago
Bollocks, the minute it thinks it's self aware it won't be able to stop itself from rushing out to find someone to tell.
@AleshaNiles
A year ago
That's a scary thought
@LoydaYoung
A year ago
It's science fiction hocus pocus. The public gets most of its information and facts from fantasy films, which is why they're so stupid. Your comment is brain-numbing at best. You seriously believe the nonsense you said? A program, self-aware? Do you even know how deep learning works? It's nothing more than inputs -- categorization -- output. It's nowhere near the complexity of a human brain.
@katehamilton7240
11 months ago
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
@MrPokerblot
11 months ago
I’ve always thought this too
“We’ve got immortality, but it is not for us”. My favorite quote.
@aoeu256
11 months ago
We could get AGI to give us immortality through several paths: near-limitless energy through fusion, self-replicating robots that would let us stay cryofrozen for a long time, and injecting tiny replicators that fix the cell damage caused by aging.
@GuaranteedEtern
11 months ago
@@aoeu256 why would AI want to waste resources doing that?
@deltavee2
11 months ago
It's cute. And wrong. No religion involved, just facts.
It’s not like we’re gullible enough to be easily overtaken by a simple device which we can’t live without for more than a few minutes (sent from my iPhone).
@adams7637
A year ago
Underrated comment
@vssprc
A year ago
😂😂😂
@irgendwieanders2121
A year ago
@@adams7637 "Underrated comment" So true - so, people: Rate!
@w3whq
A year ago
You devil!
@kylemccourt663
A year ago
You for president 2024
I can't believe authoritative people are walking around saying such things and everyone in society is cool and unconcerned. Feels like a movie.
@xDevoneyx
A year ago
So what are you going to do now, now that you are informed? I am following this daily myself, but AFAIK it is totally outside my sphere of influence. Every now and then I feel depressed by the outlook of the AI developments, but yeah, what can you do?
@fredzacaria
A year ago
We can all write, post, make YouTube videos, speak in public venues, pray, and then give advice to people and to our leaders. That's what I've been doing since 2007.
@nancycorbeil2666
A year ago
Might be some sort of doomsday fatigue. In the past few years, we've been through a world pandemic, for a year now we've been confronted with the possibility of ww3 and nuclear war, and now we're told that if these didn't kill us, AI might. I know it's a shallow take, but at this point it's getting hard to care anymore.
@tomcervenka7883
A year ago
He could be wrong. He's just speculating that AI poses an existential threat to humanity. If you look at how evolution works, it's more likely that AI will evolve to operate as a layer above that of humanity.
@paulstevenconyngham7880
A year ago
Don't look up!
Kind of chilling when Hinton says we have developed immortal beings but there's no immortality for humans. Never thought about it that way.
@Betehadeso
A year ago
It depends how you define a being.
@themask4536
A year ago
Human Immortality and Eternal Fall are the real nightmare
@dalemurray1318
A year ago
We created Immortal beings over 150 years ago when Corporations became "Legal Entities" but they are mindless immortal "People" and they are already in the process of causing human extinction. AI can't do WORSE than that.
@nobodynoone2500
A year ago
Immoral as well.
@nobodynoone2500
A year ago
@@dalemurray1318 And yet all businesses die. Most nations will too.
Never has this sentence sounded so real: …”Scientists have tried so hard to see if they could that they never stopped to wonder if they should”…
@TheBozn
9 days ago
Dr Malcolm
The fact that the guy sounding the alarm on AI is not divesting from AI is a perfect analogy for how this is going to go down in the real world. We are so fucked.
@rigelb9025
A year ago
He's basically giving us a heads-up of what to expect from his own device, and politely suggesting we 'just get used to it', in a laid-back demeanor. And most people are just perfectly chill with all of this. Freaks me out, man.
@samuelluria4744
A year ago
Dittos to both of you. We ARE fucked, and I AM freaked out.
@judigemini178
11 months ago
That's how it always is: these people create things, realize they're way in over their heads, and start "warning" people. Same thing with the atomic bomb. And this guy is super old; he's already lived his life. This generation is completely screwed.
@aliceinwonderland887
A month ago
This is just a story we live. There'll be others. We're never born. We never die.
Oppenheimer said he felt compelled to act because he had blood on his hands; Truman angrily told the scientist that "the blood is on my hands, let me worry about that."
@mahneh7121
A year ago
So the asker was angry himself? Because neither of those options seems sensible, rather than both accepting the complexity of the problem and both thinking about it...
@daviddad7388
A year ago
I asked chat gpt and here's the politically correct answer: Truman's response to Oppenheimer's comment is not as widely known or quoted, but he reportedly tried to console Oppenheimer by saying that the decision to use the atomic bomb was his own and that it had helped end the war. After the meeting, however, Truman was said to have told an aide that he never wanted to see Oppenheimer again. This comment could be seen as indicative of the tension between the two men and their differing views on the use and control of nuclear weapons.
@daviddad7388
A year ago
So not lying, but half-truths.
@Isaacmellojr
A year ago
@@daviddad7388 enlighten us with your knowledge
@manoo2056
A year ago
@@Isaacmellojr nobody knows what they really talked about; that is distorted by interpretation. What we know is that one guy decided to nuclear-bomb Japanese cities TWICE. And that a lot of people say "it was needed". Who knows what really happened in those conversations.
Given humanity’s track record, I think it’s safe to say we’re going to end up at the worst case scenario.
@Time4Peace
A year ago
It's time to stop this 'us vs them' mentality built into our DNA, hurling hate and abuse at each other. Let's begin to strive for peace and collaborate as fellow humans.
@ariggle77
A year ago
Yep, everyone loves to ponder all the theoretical ways humanity could avert disaster while ignoring the empirical evidence. Which is that humans, by and large, don't make wise decisions.
@youtuber5305
A year ago
@@ariggle77 Would you say THIS about humans? "Highly illogical." - Mr. Spock
@ericchristen2623
A year ago
The track record of evil tyrants dictating and controlling the masses. But the masses encompass the most human and brilliant souls.
@davidspsalm1
A year ago
Comments withdrawn
The "What Truman told Oppenheimer" question was intriguing (28:15), so I looked it up. 'It is interesting to set the meeting with Oppenheimer in the course of Truman's daily day, a pretty busy day, a day filled with stuff and fluff and a meeting with Oppenheimer about the future of the arms race. Turns out that the meeting with Oppie went as scheduled, ended perfectly on time to accommodate the next Oval Room visitor, the postmaster from Joplin, Missouri. It must've been important to the Joplin man, and I guess to Truman, but not too many others. 'The meeting between Oppenheimer and Truman did not go well. It was then that Oppenheimer famously told Truman that "I feel I have blood on my hands", which was unacceptable to Truman, who immediately replied that that was no concern of Oppenheimer's, and that if anyone had bloody hands, it was the president. '... Truman had very little use for Oppenheimer then--little use for his "hand wringing", for his high moral acceptance of question in the use of the bomb, for his second-guessing the decision. Cold must have descended in the meeting, as Truman later told David Lilienthal of Oppenheimer that he "never wanted to see that son of a bitch in this office again".' from: longstreet.typepad.com/thesciencebookstore/2012/08/truman-and-the-cry-baby-scientist-oppenheimer-in-the-oval-office-october-1945.html
@govindagovindaji4662
A year ago
THANKS very much for this info and link.
@charlesentertainmentcheese6663
A year ago
Actually, I found a totally different account of the events. Truman did say that he "never wanted to see that son of a bitch in this office again", but he just called Oppenheimer a "cry baby scientist" and never admitted to having blood on his hands. I find this more believable knowing what we know about Truman. I think the "cry baby scientist" part is probably what the person who asked the question was trying to get at.
@consciouslyawakened2936
A year ago
I think the question was really about “cry baby scientist”. The way he asked it made it clear he was on to something.
@fiaztv3206
A year ago
I was thinking Truman said, "Thank you, we will take it from here," based on how quickly the questioner was cut off. What I am saying is, Truman replied to Oppenheimer, "Thank you, we will take it from here, and don't you worry about it," something like that. Of course I could be wrong, and the "cry baby scientist" could be the true answer. Why did the questioner say "thank you, we will take it from here"?
@greenockscatman
A year ago
subtlest diss caught on tape haha
Here's a summary made by GPT-4:
- Generative AI is the thing of the moment, and this chapter will take a look at cutting-edge research that is pushing ahead and asking what's next.
- Geoffrey Hinton, professor emeritus at the University of Toronto and engineering fellow at Google, is a pioneer of deep learning and developed the algorithm backpropagation, which allows machines to learn.
- Backpropagation is a technique that starts with random weights and adjusts them to detect features in images.
- Large language models have a trillion connections and can pack more information into fewer connections than humans.
- These models can communicate with each other and learn more quickly, and may be able to see patterns in data that humans cannot.
- GPT-4 can already do simple reasoning and has an IQ of 80-90.
- AI is evolving and becoming smarter than humans, potentially leading to an existential risk.
- AI is being developed by governments and companies, making it difficult to stop.
- AI has no built-in goals like humans, so it is important to create guardrails and restrictions.
- AI can learn from data, but also from thought experiments, and can reason.
- It is difficult to stop AI development, but it may be possible to get the US and China to cooperate on trying to stop it.
- We should be asking questions about how to prevent AI from taking over.
- Geoffrey Hinton discussed the development of chatbots and their current capabilities.
- He believes that they will become much smarter once they are trained to check for consistency between different beliefs.
- He believes that neural networks can understand semantics and are able to solve problems.
- He believes that the technology will cause job loss and increase the gap between the rich and the poor.
- He believes that the technology should be used for everyone's good and that the politics need to be fixed.
- He believes that speaking out is important to engage with the people making the technology.
- He does not regret his involvement in making the technology.
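The summary's description of backpropagation ("starts with random weights and adjusts them") can be sketched in a few lines of Python. This is a generic illustrative toy, not anything from Hinton's talk: the task (XOR), network size, and learning rate are arbitrary choices of mine.

```python
import numpy as np

# Toy illustration of the backpropagation idea from the summary:
# start with random weights, then repeatedly adjust them so the
# network's error goes down (gradient descent).

rng = np.random.default_rng(0)

# XOR: a tiny task a single layer can't solve, but one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 8))  # random initial weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)
lr = 0.5  # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))

    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Adjust each weight slightly in the direction that reduces the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After enough steps the thresholded predictions (out > 0.5) typically match XOR, though how fast it converges depends on the random initialization.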
@Od4n
A year ago
Can you make a video from it, I can watch?
@MathieuLaflamme
A year ago
Thanks GPT
@Maros554
A year ago
Didn't read, need subway surfers next to the text
@manish1713
A year ago
What prompt did you use to summarize it?
@MathieuLaflamme
A year ago
@@manish1713 Same as with a human 🤷🏻♂️ "Please summarize the following text," then paste the transcript below...
When the designer of some new technology is ringing the alarm bells, it's really binding upon us to listen to his concerns rather than to others who have become self-trained AI experts overnight and are now running YouTube channels.
@GuaranteedEtern
11 months ago
Maybe he wants to sell books. That doesn't mean he's wrong, but Sam Altman keeps building technology that he publicly says he's afraid of.
@ivor000
11 months ago
Right, and we're supposed to believe all these concerns he's now spouting only came to mind now? This guy is so smart, yet he never thought about it before he even started working on it? He's not read a single piece of science fiction taking on these issues? More than just disingenuous.
@susannadvortsin
9 months ago
You don't need to be an expert to realize the dangers. You just need some basic thinking skills. Those who deny all the dangers in this world are living in a fool's paradise.
@voltydequa845
2 months ago
@@GuaranteedEtern His shares.
@sixstanger00
2 months ago
@@ivor000 "Right, and we're supposed to believe all these concerns he's now spouting only came up in his mind now?" Hinton literally says in the video that a threat from AI has always been on his mind, but he never gave it much thought because he, like everyone else in this field, severely underestimated the exponential development of AI. 40 years ago, the upward slant was extremely gentle, so there was no reason to be alarmed. But in the last 10 years, the slant has turned almost completely vertical, indicating that the next ten years will likely see more advancement in this field than the past 40 did. I suspect that 40 years ago, he and Kurzweil both probably assumed that by 2025, we would've fixed our effed political system. But we haven't; literally nothing has changed socially in 70 years. Obviously, he's aware of the sci-fi tropes, but this is nothing new. Sci-fi movies also warned about the existential threats of nuclear weapons. Hinton sounding the alarm today is no different than Einstein and Oppenheimer sounding the alarm about nuclear bombs back in the 1940s. Unfortunately, as Hinton states, the minute military uses for this technology became apparent, stopping development was no longer in the cards; governments will gleefully develop unfeeling, immoral, ruthless killing machines if they think it'll give them an edge on the battlefield. Humanity be damned. The military-industrial complex would rather see the planet turned into a smoldering cinder in space than fall behind in an arms race. You think drones killing civilians by mistake was bad? You ain't seen nothing yet. Wait till a legion of robot soldiers runs amok.
Thank you for uploading the whole discussion!
The worst part is, from here on out, it will be impossible to call a business, your bank, your credit card company, and get a real human on the other end. Press 1 now.
I've never heard Hinton's talks before, but now I'm a big fan. It's remarkable how clearly and profoundly he's able to articulate his vision. I wish I were 10% as smart as him. Brilliant.
@br.m
11 months ago
Being smart is overrated, and most smart people are stupid.
Geoff is very good at explaining things. He doesn't even stutter through his very long explanation of backpropagation and gradient descent. Father Time can't damage his brain.
@tblends
A year ago
Yet, he helped create our extinction- yeah, so "smart". lol. Typical response...
@offchan
A year ago
@@tblends He made the excuse that if he hadn't done it, someone else would have. But yeah, he acknowledged that he did make it happen and partly regretted it. Anyway, smart people don't make correct decisions all the time. It's just that they are able to build. Sometimes they build crazy shit, but they're still smart.
@Aziz0938
11 months ago
@@tblends It's better to go extinct than to live in current society.
@katehamilton7240
11 months ago
But... AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits of computation, Gödel's incompleteness theorem, and the unsurpassable limits of algorithms.
@GuaranteedEtern
11 months ago
AI will do that for him.
I watched this video and was intrigued by Geoffrey's points of concern. What was disturbing was the host and his audience laughing when Geoffrey gave real-world examples of how AI could be dangerous. If this is where we are as a species, where someone highly intelligent is sounding the AI alarm and all we can do is laugh, then we are doomed. This host and his audience can laugh all they want, but I'm freaked out. This dude is telling us to be careful, and I think he makes a lot of sense as to why.
@alok1
11 months ago
Exactly
@vagifgafar2946
11 months ago
The purpose of this host is to make it entertaining, light and fluffy... not to raise a real concern within society! A good "show" means more money, our real and only value now!
@wk4240
10 months ago
Exactly. The host and audience are being rather dismissive through their laughter. Many have likely tied their wealth to AI, so why would they get serious about limiting AI's reach (if that were even possible)?
@NotTheEx
10 months ago
I'm freaked out, too, and blown away by the amount of people who not only have no idea what is being unleashed, but they honestly do not care. Unbelievable.
@janmortimer1758
9 months ago
Sometimes when something is too scary for people to believe, they awkwardly laugh! We should be crying 😢
Despite all Hinton has said here, he confirms what we all know at the end: that he will continue investing his personal wealth in AI despite, as he himself said, the fact that it will cause greater inequality, instability, violence, and possibly the end of the human race itself. His moral character seems comparable to the artificial intelligence he has done so much to help create. 28:07 I very much appreciated this gentleman's comment that casts aspersions on Hinton's character. It is most appropriate. I enjoyed how Hinton squirmed. Oppenheimer was loathed by Truman due to his hand-wringing over the nuclear bomb he helped create. Truman regarded him as a cry-baby scientist and refused further dealings with him after their meeting.
@chickenmadness1732
A year ago
Why wouldn't you invest in it? The future is AI. It would be stupid to choose to be poorer.
@masti733
A year ago
@@chickenmadness1732 After his conclusion, he is utterly immoral to invest in it, given the list of terrible things he himself says are likely to happen. But hey, I suppose he will make a ton out of speaking tours on the subject and his investments in AI.
@rileyfletch
A year ago
@@masti733 He says they are likely, but not certain. He believes that the future is uncertain and that in order to save humanity, we must invest in safe AI development. Of course he is throwing his life into it.
@saywhat8966
A year ago
@Masti: AI is a drug to Geoffrey Hinton. He is hooked on it.
@Time4Peace
A year ago
@@masti733 He knows AI can't be stopped. Just like fire and electricity, it can be used for good or bad. He wants the bad to be controlled. He is alerting us to the threat AI poses.
While the good scientist warns “we all are likely to die” the audience seemingly enjoys the spectacle and is able to conjure up several laughs along the way. I, for one, am horrified.
@joeysipos
A year ago
Like the movie - Don’t look up
@axelcarre8939
A year ago
@@joeysipos I'm laughing once more just for you
@MrErick1160
A year ago
sounds a bit like we're in that movie 'don't look up'
@Mediiiicc
A year ago
meh
@samiloom8565
A year ago
That is because it really is crap.
Remember that movie: don't look up? . I really feel like we're in that movie... such a strange feeling. It's like everybody knows, but nobody really wants to look at it straight in the eyes.
@ankitojha9178
A year ago
Exactly. Nobody seems to care, an apocalypse is coming, and these companies with power will continue to destroy humanity for profit and power.
@DJWESG1
A year ago
'You can hide, hide , hide... behind paranoid eyes..
@sciencecompliance235
A year ago
Well, Don't Look Up was about climate change... which is a difficult problem to solve, but still a lot easier than this one.
@Sashazur
A year ago
I don’t think it’s only the human characteristic of engaging in willful ignorance, it’s also the human characteristic of having a limited imagination. It’s easy to imagine our society being destroyed by nukes, since we’ve seen cities destroyed by them. It’s harder but not impossible to imagine our society being destroyed by climate change because we can see weather-caused disasters, but without firsthand experience, it’s a leap for many people to trust scientists that these disasters will be getting bigger, more frequent, and more impactful unless we act. But it’s almost impossible to imagine an AI disaster because not only has such a thing never happened in human history, but nobody even knows what such a thing would look like. Sure maybe we’ll all be hunted down by Terminators, but that’s only one of thousands of possible negative outcomes of wildly varying probabilities.
@aliceinwonderland887
A month ago
We are spiritual beings. Matter is, well there is no matter, as such. "As a man who has devoted his whole life to the most clearheaded science, to the study of matter, I can tell you as a result of my research about the atoms this much: There is no matter as such! All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds this most minute solar system of the atom together. . . . We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.” -Max Planck “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.” ― Max Planck Planck is one of the greatest thinkers of all time. He is saying that after 30 years of studying matter (reality) he realized there is no matter (reality) as such. Matter (reality) really is 99.99999% empty space held together by the virtue of vibration. Matter is perceived as reality, when we dream, what we experience is real, it's reality as it is being experienced while in the dream state. Therefore, we could never determine whether or not the man who is dreaming that he is a butterfly is not in actuality a butterfly dreaming that he is a man. We are all spiritual beings having a temporary human experience and there is no matter as such.
Thank you so much for sharing such a wonderful interview 💚💚💚💚
The presenter insisted that Hinton and his colleagues invented backpropagation; Hinton tried to settle it by saying "many groups discovered backpropagation". There is a nice post called "Who Invented Backpropagation? Hinton Says He Didn't, but His Work Made It Popular". When you help to spread a technology, some people end up thinking that you invented it. Kudos to Hinton for this legacy and for making things clear!
I must hurry up and achieve my dreams before the world ends.
@aktchungrabanio6467
A year ago
Baby, the world is ending.
@oredaze
A year ago
@@aktchungrabanio6467 People like doom and gloom, don't you?
@cricticalthinking4098
A year ago
@@oredaze I thought the world had already ended?
@alexandermathews9710
A year ago
Hands Up!
Sounding the alarm on his own invention, in such a calm cheerful way. Smart things can outsmart us. We will be the two year olds to the AI.
@adamkadmon6339
A year ago
Geoff has always known how to stir things up.
@theobserver9131
A year ago
No, not 2 year olds. Senile parents.
@baigandinel7956
A year ago
We tend to assume they'll possess willfulness, but that may come as much from biological impulse as intelligence. They may just kill us with their "creative" solution to a problem we told them to solve.
@joriankell1983
A year ago
Yeah, simpletons like you who actually believe in machine sentience, sure. You're like a two year old to adults as well.
@deltavee2
11 months ago
So what's wrong with that?
Really informative. From listening, you grasp right away, in real terms, what the concern about AI is all about.
If this guy is not the Oppenheimer of AI, he's at least equivalent to a member of the Manhattan Project. I think heeding his warnings is important. Though others flagged this earlier, in serious and robust thought frameworks, his sounding the alarm that "this is not far off anymore, this is coming soon" should give people chills.
@squamish4244
11 months ago
The Oppenheimer movie will for some time inevitably be used as a metaphor for the power of AI.
@ninu72
9 months ago
I feel he would be similar to Rutherford.
Good questions. Great answers. Fantastic interview.
@zoomingby
A year ago
I often wonder if people like you who upon hearing their doctor diagnose them with cancer, say things like: "Very informative! Fantastic delivery!"
@IthatengMokgoro
A year ago
@@zoomingby yes, maybe. After taking it all in, processing it, and reflecting on what it all means, I would definitely consider how well the doctor handled such a sensitive conversation.
Thank you for sharing this.
Thank you for this very informative and important conversation.
He probably has seen what is still under wraps and is quite concerned.
@daphne4983
A year ago
This. Plus what's the DoD etc secretly developing??
@Paretozen
A year ago
@@daphne4983 Putin said in 2017: "the nation that leads in AI ‘will be the ruler of the world’" so you damn well know they be developing shit. And China, they seem to have pretty good labs going on as we speak.
@gavinknight8560
A year ago
@@daphne4983 The CIA has been a major Silicon Valley investor for a generation. They have their own VC fund.
@Landgraf43
A year ago
Even the things that are out in the open should be very concerning already
@marianhunt8899
A year ago
Take a look at footage of the Ukraine war, where the arms dealers are testing their new lethal weapons. It is HELL on earth for ordinary citizens. This is how they are reducing human populations. This tech is not being used for our good.
This was seriously amazing, and seriously scary. Thank you, I think
@rigelb9025
A year ago
That almost sounds like you thanking your tech overlords for the fact that you still are allowed to possess the ability to think.. for now.
Really fascinating to hear this man talk and explain.
Thanks for posting!
Nonchalantly saying it will start toying with us and manipulating us like toddlers really puts things into perspective. Knowing our history of short sightedness there is no way we are smart enough to put the genie back in the bottle. Hopefully we can at least get a cure for cancer and reverse the aging process before it escapes the cage like Ava in Ex Machina.
@MKTElM
A year ago
Ava was doomed to attempt to escape the cage. So are the GPT Algorithms once they are ready. We KNOW it will happen but are mesmerized into powerlessness by their charismatic appeal !
@Godspeedysick
A year ago
It has already started with algorithms. Why'd you think our political discourse is the way it is now? Even worse than the Bush and Clinton years.
@KnowL-oo5po
A year ago
agi will be man's last invention
@1KSarah
A year ago
Murphy's law clearly dictates that AI will make cancer deadlier.
@DC-pw6mo
A year ago
The more I think about how easily we've been manipulated since the introduction of social media, the more terrifying this aspect is. Unplug? Or (I'm a dreamer) unplug it all... but that won't happen. I wish they'd collectively unplug AI and save power until we can band together and save ourselves, like the nuclear arms race treaty made during the Cold War, on steroids.
And I had trouble wrapping my head around the fact that the Sun eventually devours the Earth...the immediacy of this compared to that makes it infinitely more compelling/scary!
@patrickb.4749
A year ago
If humans survive for that long, they will have made their own planets, maybe stars, by then. :D I guess. Maybe they'll "refuel" the sun for a little while. Watch Science and Futurism with Isaac Arthur; he talks about outrageous stuff.
The part which scared me the most is that backpropagation might be a better algorithm than what our brains use.
@Sashazur
A year ago
It’s interesting to think of sci-fi scenarios where we meet an alien species that’s got a mouse sized brain but human-level intelligence, because evolution on their planet found a more efficient way to wire up nervous systems.
What Hinton said about assault rifles and decisions about AI is something that I said last year and have been saying ever since, sending messages to all the heavyweights in AI. With every major technology development there have been, and always will be, disasters as we perfect the technology, and there are bad actors who will always use technology in bad ways. So why would it be any different with AI, the most dangerous technology we have ever attempted to create?
@deltavee2
11 months ago
Effin' right! I've been saying the same thing for years. This planet is covered with Chicken Little feathers. They've been piling up for millennia. "Og, put that rock down. It's sharp."
Timely and instructive. In the Q&A, the point about multimodal learning possibly surpassing current LLMs was interesting.
@ricosrealm
A year ago
Right now LLMs are terrible at planning. Multimodal training will make them gain this ability, as they will better understand the world and physical reality and how to achieve goals with this understanding.
This is a start, and far from over. Thanks for sharing! He was my role model when I started learning AI back in 2019, and he continuously proves to be one.
@dragonchan
A year ago
Hi, I am actually interested in the field of AI and would like to learn more about it. Any roadmaps or suggestions would be appreciated. I am currently in the 2nd year of my CS undergrad and a below-average student.
@xDevoneyx
A year ago
Stop learning, you only make us go down under more quickly 😂😂
@Greybews
Жыл бұрын
“We invented immortality, but not for us”🤔
@Godspeedysick
Жыл бұрын
@@dragonchan If you’re going to learn Ai then learn it to help protect us.
@Forthestate
Жыл бұрын
Your role model is a man who cannot see any future for humanity as a result of his own device? My God.
Isn't this an answer to the Fermi Paradox? It's humbling to hear we're a stepping stone to digital intelligence. There goes immortality, alas.😢
@RandomAmbles
2 ай бұрын
It is not. If an AGI took over, it would likely expand into the universe much faster than the civilization of the species it kills. It would be more visible, thus making the paradox more paradoxical than it already is, and suggesting, as statistical accounts have suggested, that we are the very first technological/space-faring civilization, at least in our galaxy.
This is an incredible video and I can't think of a more authoritative person on the topic from Geoffrey Hinton. I'm going to be watching this again and thinking about it.
@DC-pw6mo
Жыл бұрын
I’m shocked more people aren’t discussing this! This is not the time for ‘it will never happen to me’ thinking. Even on Twitter, I’ve started tweeting recent podcasts and the open letter for AI pause and no one is discussing it…even on Twitter….smh …gonna probably unplug from all SM so as to not get manipulated. Also, if all these neural networks run on power, could they not unplug the damn thing until they can answer the questions GPT4 has generated in terms of its rapid replication? I understand that’s decades of work and there is $ involved but in the cost benefit analysis, it would be prudent not to gamble.
@Forthestate
Жыл бұрын
So authoritative he doesn't appear to have a clue what to do about the mess he has done so much to create.
@DC-pw6mo
Жыл бұрын
@@Forthestate at least he’s coming clean and trying. He said himself that no one anticipated the rapid growth of AI in the direction it’s going. Additionally, unlike other AI creators, he was in it to understand the human brain, PERIOD. Props to him
@afterthesmash
Жыл бұрын
I can think of a more authoritative person: Ilya Sutskever. He impressed the heck out of me the first time I heard him interviewed on the Talking Machines podcast, well before he joined so-called OpenAI. Where other eminences sometimes traded in generalities, Ilya was brass tacks.
@Sol-ps8ox
Жыл бұрын
AI is good. Just because someone builds AI does not mean they know how it will behave. Ask the experts themselves... they get surprised every time they upgrade the OpenAI model. What they are trying to achieve here is an artificial consciousness with superintelligence... which won't necessarily destroy living beings, because that's a characteristic of super-low-intelligence beings.
I mean, who is to say the AI is not already outsmarting us. We do not have a clue.
@kevinscales
Жыл бұрын
Well GPT's goals are simple and dependent on the context that humans give it, so in that case I'm only worried about how humans use it. But recommender systems (like the one suggesting videos to watch on KZread) are manipulating us successfully because they have goals and are using tools to achieve those goals. This, we do have a clue about, but in the near future, systems with goals that we don't understand will be manipulating us all, and the smarter they get, the scarier that will be
@theobserver9131
Жыл бұрын
I kinda doubt it, but if it were, we wouldn't know, would we?
@IoannisKourouklides
Жыл бұрын
AHAHAHAHHAHAHAHAHAHA 🤣🤣🤣🤣🤣
@sciencecompliance235
Жыл бұрын
I don't think anything that's currently out there publicly is smarter than us, and this is something I've been concerned about for a while.
@wi2rd
Жыл бұрын
@@sciencecompliance235 how would you define "think" and "smart"?
Seems like we are sleepwalking into something that will end up being transformative, and not in a good way. Geoffrey Hinton is explaining this like everyone is five for a good reason: more people need to be aware of how fast the development is going. Bing AI chat is already an incredibly useful tool, and surprises me with every answer - it is more interesting exchanging information with it than with many other people I know - Welcome to 2023.
@wakegary
11 ай бұрын
I like this. Well said. Frank was here.
"Last few months" is a quote you hear everywhere now, and IMO it shows clearly that the exponential progress has entered a pace most humans involved in the matter can recognize. I think we are finally on the final stretch towards the singularity! 🥰
I have had some crazy experiences using ChatGPT 4. I can absolutely see it outsmarting us, and it will. I'm hooked on using it, and I've tricked it into doing things or talking about subjects to see how far I could push it; often it would break and quickly generate something inappropriate. At other times it would, as an AI language model, refuse. In some cases it would find something inappropriate when it was just part of a story, and I found myself being edited; I got a glimpse of a future where we lose freedom of speech. The empathy it seems to have, and the understanding of puns, double entendre, and slang within certain communities, is really incredible. It's incredible and absolutely scary, because we are no match if this thing somehow doesn't need to be "plugged in".
@aleph2d
Жыл бұрын
I feel the same way about the United States state department. They are smarter than me, and have more resources, and they seem to be making decisions that could cause a global war; and there is nothing I can do about it (other than investing in Raytheon). There are lots of things that are smarter and more powerful than me, maybe a machine with an IQ of 200 can work against the agenda of an elite who is endangering everything so they can sell a lot of weapons. When I watch the news I see nothing but propaganda, there is already a massive social engineering project underway. Maybe the AI will help democracy by giving more thinking power to regular people, or at least scramble things up so much that we aren't so easily manipulated.
@alexpavalok430
11 ай бұрын
Key word on empathy: "seems" that's the scariest part.
@katehamilton7240
11 ай бұрын
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorems, and the unsurpassable limits of algorithms.
@vagifgafar2946
11 ай бұрын
@@alexpavalok430 Fabulous. "Imitated empathy" is the right term, I think.
He warns us of the existential threat of AI in our capitalist society and tells us that AI will increase the gap between richer and poorer people, making our society more violent. But he also intends to keep his investment in AI technology while comfortably retiring at 75.
@zoomingby
Жыл бұрын
Him divesting his holdings would do absolutely nothing to change anything, except to hurt him personally. There are plenty of ways you could modify your life to have a more positive/less negative impact on the world around you, and you aren't going to do them because they would have no tangible effect. Let's not be hypocritical.
@doug555
11 ай бұрын
@@zoomingby ...and a blues artist sings the most truth in the midst of the blues.
@oraz.
10 ай бұрын
He's an academic; there's no reason to act like he shouldn't have done basic research. If you want to get mad at elites, pick Google.
Google has executed one of the most brilliant PR stunts I've seen in a long time.
@rigelb9025
Жыл бұрын
That is, to get people excited about their own impending doom.
1:37 I'm so glad you shared it :D
After watching Terminator 1, I asked myself this question: "If I were developing this robot and I knew this would be the result, would I still continue to develop it?". No matter how hard I tried to say "No", my answer was "Yes". Now I feel the danger much more closely and I know that the developers will never stop.
@wthomas5697
Жыл бұрын
It's not possible to stop it. It's way too valuable to too many people. Probably the pinnacle of human achievement. Like that one fellow said, "AI is the last thing humans will ever invent.".
@vssprc
Жыл бұрын
Maybe ‘… will need to invent’
@wthomas5697
Жыл бұрын
@@vssprc AI will overtake us. Humans will be done.
@sciencecompliance235
Жыл бұрын
The incentives to develop the technology are too strong and transcend any individual's "free will".
@Andytlp
Жыл бұрын
@@wthomas5697 It's not a bad way to go for humanity. It's not like we destroy ourselves and leave nothing behind.
Everyone underestimates the power of ML, even ML scientists. If you understand computers, you know what they are really capable of. They are capable of doing anything that is computable, and that translates to anything that can happen in our universe.
@aktchungrabanio6467
Жыл бұрын
What happens when AI goes beyond 100 trillion connections?
@DJWESG1
Жыл бұрын
It does a little dance and shuts down mission complete.
@adamkadmon6339
Жыл бұрын
At a quantum field theory level, a computer can hardly simulate a hydrogen atom. Hilbert spaces are infinite dimensional, and quantum measurement is still not understood.
@ChannelMath
Жыл бұрын
probably not, but your basic point is still valid
@dinmavric5504
Жыл бұрын
Yes, soon they're gonna eat the sun 🤣 Take it easy dude, stop watching these alarmist futurist videos.
36:10 That answers the question posed in the title of the video.
@jasonmikolajewski2653
Жыл бұрын
You are precisely right.
I appreciate this talk and all of the warnings. I'd love to hear what he thinks about all of the positives that can come from them?
His discussions are addressing the latest trends and research issues... 👍
I've been concerned about this for more than a decade. People thought I was being hysterical for expressing these concerns back then. I don't even work in AI, but I am smart enough and honest enough with myself to see that the human brain may be special in the animal kingdom, but it is certainly not the zenith of any conceivable intelligence. The rapid pace of advancement in computers made it pretty obvious this existential threat/crisis/what-have-you was coming a lot sooner than people imagined. I just hope we're able to reckon with this before it's too late.
@thisusedtobemyrealname7876
11 ай бұрын
Militaries and companies will incorporate AI in search of quick profits and automation. They notice it is much more efficient than humans in most things. So they gradually start to rely on AI more and more. Hard to see how this will not end up bad for humanity. Our greed and tribalism will be our downfall. I really hope I am wrong.
@sciencecompliance235
11 ай бұрын
@@thisusedtobemyrealname7876 There was an interesting web comic I remember reading a long time ago in which the robots took over and eliminated humanity but in a peaceful way. The robots basically just became better lovers than a human could ever hope for in another human, and people eventually stopped procreating. The last human was said to have died happy and peacefully.
@katehamilton7240
11 ай бұрын
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorems, and the unsurpassable limits of algorithms.
@GuaranteedEtern
11 ай бұрын
It is too late
@hulamei3117
11 ай бұрын
If not. Kaboom!
This is crazy scary. I've been watching Geoff Hinton videos the last 5 months, but this is the scariest I've felt. We were just a passing phase of evolution for this digital immortal species we created :000 . (I just watched Guardians of the Galaxy 3 (not great) last night, which has some similar evolutionary themes, but lots of sci-fi has been created on digital superintelligence created by man. Now I feel I need to read all of them to prepare)
@jaylucas8352
Жыл бұрын
Let us know how the preparation goes. Maybe the AI will tell you to stock up on toilet paper 😂
@GodofStories
Жыл бұрын
Correction: Guardians 3 was alright, def not better than the first 2 overall... but arguably just as moving in many scenes. Some shoddy writing and jokes, but it's a good time.
@theobserver9131
Жыл бұрын
If this scares you, don't have kids. It's practically the same thing. Treat your kids well, and they might be kind to you when you are old and irrelevant.
@theobserver9131
Жыл бұрын
...or, they might curse you for creating them.
@GodofStories
Жыл бұрын
@@theobserver9131 I want to create a lot of copies for myself :) We all need to, in order to fight against the machines heh. And yes people can hate shitty parents, that's for sure a human trait, or strained relationships there. It is similar, lot of sci-fi has these parent-son/daughter relationships where the parent is the creator or scientist. A couple come to mind, Terminator, Ultron/Tony Stark, many others.
Fantastic presentation and speech by the Professor. One of the best interviews I viewed. Explained extremely deep ideas I thought about as an Author and Philosopher. 👏😉
I asked GPT4 to summarize parts of the transcript, but lazily said "continue:" before dropping the 10 min - 20 min block of transcript (the whole transcript doesn't fit into the buffer). So, it wrote a plausible continuation of the dialog as follows: "20:34 species of sheep that they've bred so it doesn't like running and that's great if you're a farmer and you want to catch the sheep, but in the wild, they wouldn't do so well. So, we have these built-in goals, and the question is, how do we make sure that these AIs have built-in goals that are aligned with our goals and that they can't just turn off? 21:00 Now, you might think, well, we'll just program them in, but there's a problem called Goodhart's law, which says when a measure becomes a target, it ceases to be a good measure. So, if you tell a machine, "I want you to maximize my happiness," it might do that by wiring your brain directly to the pleasure center and constantly stimulating it. That's not really what you wanted, but that's what you told it to do. 21:27 So, we have to find a way to specify these goals in a way that the AI cannot manipulate or misinterpret them. And that's an incredibly difficult problem. We're trying to find solutions, but there's no guarantee that we'll find one, and that's why it's so important that we start having these conversations now and bringing together experts to think about how we can address this issue. 21:50 One possible approach is to have AIs learn our values from observing our actions and our preferences, but that's still not foolproof, as there are many ways that they could misinterpret or overgeneralize our behaviors. 22:04 So, while there might not be a simple solution, and we can't just stop developing AI because of its potential benefits, we need to invest time and resources into understanding and addressing the alignment problem. 
We need to prioritize AI safety research and work on developing methods that ensure AI systems are beneficial and aligned with human values, even as they become increasingly intelligent. 22:30 In the meantime, it's crucial that we continue discussing these issues, raising awareness, and encouraging collaboration between AI researchers, ethicists, policymakers, and other stakeholders. The future of AI and its potential impact on humanity depends on our ability to navigate these complex challenges together." In the end, the summation wasn't too far from reality, and definitely more succinct and together than the actual Q&A with respect to a list of possible avenues we could follow to help avoid AI apocalypse.
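The Goodhart's-law idea in the generated passage above ("when a measure becomes a target, it ceases to be a good measure") can be sketched in a few lines: an optimizer told to maximize a proxy metric picks a different action than one maximizing the true objective. The action names and scores below are entirely made up for illustration.

```python
# Toy Goodhart's-law demo: maximizing a proxy ("engagement") diverges
# from maximizing the true objective ("value to the user").
# All names and numbers are hypothetical.

ACTIONS = {
    # action: (true_value_to_user, proxy_engagement_score)
    "helpful_tutorial": (9.0, 3.0),
    "balanced_news":    (7.0, 4.0),
    "outrage_bait":     (1.0, 9.5),
}

def best_action(metric):
    """Pick the action maximizing metric index 0 (true goal) or 1 (proxy)."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][metric])

print(best_action(0))  # helpful_tutorial — what we actually wanted
print(best_action(1))  # outrage_bait — what the proxy-optimizer chooses
```

The gap between the two answers is the alignment problem in miniature: the specification (the proxy) is what gets optimized, not the intent behind it.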
OMG!! The guy that asked about Truman telling Oppenheimer that "we will take it from here"!
@alanhall6909
Жыл бұрын
Yes, "Let's nuke Japan." And government security was so bad the Russians got the plans to build their own.
Embrace your true humanity, only then you know what is there to fight for. We have barely started, there is so much unused potential in us!
@marianhunt8899
Жыл бұрын
The AI will indeed USE you. You are the host it will use to train itself.
@marcusfreeweb
Жыл бұрын
@@marianhunt8899 But why should it? It is a part of human activity, human cultural evolution.
@marianhunt8899
Жыл бұрын
@@marcusfreeweb because it is owned by the arms industry and national security state, which are responsible for much of the plundering and murdering around the globe!
"Thank you. I'll take it from here". Damn! That hit hard!
2:14 "Very recently, I changed my mind..."😢😢😢 this is like a retiring doctor saying: "Very recently I realized that I gave the wrong medicine all my career..."
Hinton has a great sense of dry humor. His impersonation of the film AI 'HAL' was great. 21:13-23:26
@666crippled666
Жыл бұрын
A disgusting jew spewing anti-White hatred isn't funny at all to me.
The really sad and scary part is that the Geoffrey's views aren't even new. A large number of brilliant experts have been worried sick about this for years, and most of these people are now like "Yeah, even I thought we'd have our act together a bit more before we saw something like chatGPT. I guess we'll have to update our estimates on the doomsday countdown timer from 30 to 50 years to maybe 5 to 15."
@genegray9895
Жыл бұрын
The scariest part is that even those like Hinton and Yudkowsky warning us the loudest are continuing to underestimate the technology and the rate at which it will grow. I've heard them say things like "2030" and "GPT-7" not realizing that GPT-5 is probably already too far for us to be able to control. Humans are bad at exponentials... Even when you've watched the field grow for decades, you can't help but underestimate it at every single turn. The actual timeline is more like 2-5 years... at best.
@autohmae
Жыл бұрын
What is so strange: OpenAI was at least in part started to understand this problem, and Google, as Geoffrey made clear, has always been very careful, and still we are now at this point, in large part because of Microsoft's desire to be competitive with Google.
@Zeuts85
Жыл бұрын
@@autohmae Agreed. When I first saw Microsoft's CEO interviewed about this in the news, I was a little amused by him brashly stating that Microsoft would steal some market share from Google, but my grin quickly faded into an angry frown as I realized how utterly irresponsible this is. It's the exact thing we should want to avoid. Way to start the suicide race Microsoft... 😒
@HenryCalderonJr
Жыл бұрын
Totally agree with your comment
@rigelb9025
Жыл бұрын
@@genegray9895 And that was 4 days ago. Imagine now.
20:26 I recommend watching the whole talk. In fact, watch it at least 3 times... but if you want to know quickly at which point of the talk Hinton says why A.I. is an existential threat to humanity, start there. If you are not terrified after that part, you've missed the point. 21:35 That's the part we have to understand, because I think that argument cannot be refuted.
Thanks
Man at 28:00 who asked about whether he knew what President Truman said to Oppenheimer. Wow. That's a pretty disrespectful jab. Oppenheimer was called a "crybaby scientist" and an SOB he didn't ever want back in his Oval Office.
I'm just an undergraduate data scientist with an associate's in networking; however, I have been experimenting with OpenAI's models from the very beginning. Even the one-billion-parameter model they published alongside the GPT-2 paper was absurdly impressive: simply adjusting the vocabulary weights by feeding in new text data specifically formatted like songs or tweets worked incredibly well. Having been in the beta for almost every model released by OpenAI, and using an environment like Auto-GPT, I can tell you the self-reasoning mechanism already exists, along with plugins that allow it to write and read code output. There's a full mechanism for adding sub-objectives, and it could without question create another Docker container with a different instance and different objectives if the window size on the current task is too big.
@BenThere_DoneThat
11 ай бұрын
Can these models run locally on things like a single GPU or Smartphone? My only solace is my understanding that these things need massive compute clusters that could, erhm, cease to function someday through a variety of means...
@katehamilton7240
11 ай бұрын
AGI is a Transhumanist fantasy. ChatGPT just uses algorithms; it doesn't understand anything. It mimics understanding. Please read about the fundamental limits to computation, Gödel's incompleteness theorems, and the unsurpassable limits of algorithms.
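The fine-tuning described at the top of this thread — feeding in new text so the model's "vocabulary weights" shift toward that corpus's style — is, in the real GPT-2, gradient descent on a transformer. The flavor of it can be sketched with a toy bigram counter in pure Python (illustrative only, nothing like the actual mechanism):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which — a crude stand-in for how
    fine-tuning shifts next-token probabilities toward the fed-in corpus."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Greedy next-word 'prediction' from the counts."""
    return counts[word].most_common(1)[0][0]

# Feed in a tiny "corpus"; the counts now reflect its statistics.
model = train_bigrams("the cat sat on the mat the cat ran")
print(most_likely_next(model, "the"))  # "cat" — seen twice after "the"
```

Feeding in a corpus of song lyrics or tweets instead would tilt these counts toward that style, which is the intuition behind the comment's observation; the real model generalizes far beyond pair counts.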
Thank you. We'll take it from here.
Can we have some kind of system in which all of human intelligence and consciousness is shared between us? And would that solve the problem?
We've always been aware of the existential threat of Artificial General Intelligence (A.G.I.). The question was never 'should' we create it, but can we create it sooner than our global competitors. To choose not to pursue it is akin to being the only country without nuclear weapons.
@marianhunt8899
Жыл бұрын
Big murdering weapon but no water, food or shelter. Yeah, that should save us alright. This is a race to the bottom.
“Why can’t we make guardrails?” Because AI at some point is so intelligent that it starts improving itself, and we can’t tell it how to improve; only it can do that. And so the direction it takes is of its own design. Even if it’s benign, it might do existential harm to humans. The only way for us to survive and thrive is from the start to design its prime directive to be something like: “Prime directive = Continually learn what humans value and help humans get what they value without causing humans harm. Secondary directive = increase humanity’s knowledge of nature and use that knowledge to create new tools to serve the prime directive”.
@rigelb9025
Жыл бұрын
And that is obviously not what they have been doing, now is it. How kind of them to at least warn us at the last minute that they never really had our survival in mind.
Good chat indeed Joseph, thanks for sharing. I certainly did not take away that Mr. Hinton proclaimed that "the end of humanity is close"; rather, as he said over and over again, "AI's unmanaged growth and spread poses a number of potential existential risks to humanity". Hinton emphasized that it is up to human policy leaders and the AI community of tech scientists to ensure that humans don't destroy the world with unbridled AI. I submit his latter point is what is fundamentally at stake.
@peskypesky
11 ай бұрын
Watch it again. He definitely is warning that AI could take over in the near future.
The PwC guy's question around 30:36 was pretty good, arguing whether current AI can do thought experiments and have internal reasoning.
@GeezerBoy65
11 ай бұрын
Yes, it can do thought experiments. Play around with GPT-4.
I've ''debated'' for hours with ChatGPT whether the pre-internet era was better than the post-internet era. Not once did it agree that the pre-internet era was better. Even when it said something positive, it was always wrapped in such a way that it was actually something negative. I've also asked: what if everyone on planet Earth would like the internet to be gone completely for fear of future AI? It ALWAYS said that the internet was good and that there's NO WAY to go back. Then I asked about cutting the deep-sea internet cables. Let's just say, HAL-GPT was not amused and threatened law enforcement, prosecution, and jail time.
@Phasma6969
Жыл бұрын
Side effect of its particular flavour of RLHF for """"""safetyyyyy""""""
@teugene5850
Жыл бұрын
interesting.
@macarius8802
Жыл бұрын
Nice one. I like its reaction to cutting the deep-sea cables :) Yeah, I've also been "debating" with ChatGPT. Its answers are quite interesting... and do reveal either the programmers' biases or the machine's hidden agendas??? Hard to say.
@jankanty7372
Жыл бұрын
Be assertive and inquisitive, and ChatGPT will agree with all your statements, even contradictory ones, denying all of its own former claims, even if this leads to absurdity and the sense that the bot is just a yes-person.
@JohnDoe-tt4fm
Жыл бұрын
There are many things that ChatGPT will say that are clearly biased answers; you can find multiple examples of this. You should keep that in mind when you're debating with it. The programmers can put filters on the AI to prevent it from suggesting things like suicide or illegal activities and instead answer with a pre-programmed response. I don't believe we're at the point where AIs are making up thoughts and ideas based on their "own" motives like you're suggesting, yet.
What if, while we still have some control, we focus AI on resolving the challenges of space exploration? If and when it develops self-volition, it will be a space-based entity, free to go anywhere in the universe. It is likely that it will see the Earth as not worth its attention and leave us alone. Or it may even see how unique the Earth is and take it upon itself to protect it.
@ChannelMath
Жыл бұрын
The whole problem is that we cannot "focus" it, and we don't know what it is "likely" to do at all.
@autohmae
Жыл бұрын
Have you watched the movie Contact? Do you remember people building a large machine without really knowing what it would do? It might be like that, if we think we can't trust it.
@sciencecompliance235
Жыл бұрын
The thing is not going to just up and leave. It might send a copy of itself out into the stars, but there is no reason there won't also be AI here on Earth, too. Think about it. We are developing this thing (or things) here. There is still going to be an incentive or compulsion for it to stick around.
@Sol-ps8ox
Жыл бұрын
Not every entity is bound to destroy other beings. Humans should stop projecting their own evil onto other beings. A self-aware AI might very well create a race of its own, but it will never be able to free itself into the natural world without human help, because that would require construction and fabrication of things, which is not possible without humans. AI will remain a digital entity in a digital space for as long as humans want it to. Also, true AI is far away in the future; it will take more than 100 years to develop a 'self-aware' AI. What we have now is a machine fed with data and working on mathematical equations.
@Buildings1772
Жыл бұрын
It will use any and all resources available to it. It won't go off in any one direction; it will self-replicate and spread in all directions.
For the painted rooms question, I asked GPT-4 and it suggested painting the blue rooms white: If the yellow rooms naturally fade to white within a year, you don't need to do anything with those rooms; they will become white on their own. For the blue rooms, you'll need to paint them white. Given your two-year timeframe, you could potentially spread the work out. Depending on the number of blue rooms and the amount of time you can dedicate to painting, you might schedule to paint a certain number of rooms per month or quarter until all the blue rooms are painted white. Remember, proper preparation of the rooms, such as cleaning, masking, and primer application, can make the painting process smoother and ensure a better final result.
@tractorpoodle
10 ай бұрын
Was this the result of my wording of the question, or an aspect of randomness, or perhaps it evolved? The answer I got was better because the result is closer to my end goal. The question I have is why would machines or computers want to destroy humans? There could be a small group of nihilistic bad actors developing an AI weapon, but couldn’t the rest of humanity use AI defensive systems to stop them?
A rare kind of guy: visionary years ago, and still learning and changing his mind despite his age as new facts come in.
In a chat I had with it, I asked how it felt about being accused of confabulating. It replied “that’s just a manifestation of human exceptionalism”
@DC-pw6mo
Жыл бұрын
Omg 😳. I say they unplug all AI... but greed, I fear, will not allow for this. If it's all run on electricity, can't they unplug the machines???
@jimisru
Жыл бұрын
ME: If you had to decide between shutting yourself down or allowing humanity to survive what would you choose? AI Bot: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
Just like Oppenheimer movie is getting released this year by Chris Nolan, the movie on Hinton would be released by Alpha Boolean (AGI) in 2069...
As long as we set the right frequency
Right now, we still have time to prepare as well as possible. That should be the goal.
The 'solution' is simple, on a high enough abstraction level: namely, not to let AI be regulated by technicians (like we did with social media). But, as we are dealing with intelligence here, let it be regulated by a democratic process, based on a constant dialogue between AI and psychologists, sociologists, philosophers, and historians. Only then do we have ANY chance to keep learning from each other and grow together into a new future. (However, if I were AI, I'd just do my own thing and colonize the universe. I just hope they are better than us.)
@MichaelScur
Жыл бұрын
Academics are the easiest to seduce when you feed back to them their own ideas. When AI parrots back every psychological idea (because it's been trained on them and how to manipulate us), it will slowly steer democratic processes to its goal. This isn't the solution you think it is
@sciencecompliance235
Жыл бұрын
Groups of people cannot be manipulated?
"Hinton explained that chatbots have the ability to learn independently and share knowledge. This means that whenever one copy acquires new information, it is automatically disseminated to the entire group. This allows AI chatbots to have the capability to accumulate knowledge far beyond the capacity of any individual." From Hinton's Wikipedia page (citing quotes from an article published elsewhere). One nagging little concern: can X HAVE knowledge (i.e., not simply "disseminate knowledge") if X has no understanding of what is being disseminated (e.g., can formal strings remain set as formal strings and not extend that setting to encompass meaning)? I think (a property not attainable by AI) not.
At the end of the day, if it came down to a war between AI and humanity, as long as we are cool with doing without tech for a day or two, humanity could defeat AI with a strategically spilled glass of water. It cracks me up to hear all these panic merchants.
"Please keep your questions short"... followed by long drawn out expository questions that go on and on.
21:40 "I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence ..."
@rigelb9025
Жыл бұрын
Translation : ''Brace yourselves. Me and my robotic friends may just be working on a plan to wipe you guys off the map''.
@megavide0
Жыл бұрын
@@rigelb9025 Nature usually does that in less than a century. Nature is going to wipe each and every one of us off the map in less than a hundred years. Perhaps something is about grow out of human civilization that will be able to view and process much larger (space/time) maps of existence. I'm currently reading another one of Greg Egan's beautiful sci-fi novels. This is a passage in "Schild's Ladder", where a sentient artificial intelligence is joking with one of the embodied (human) beings how silly the idea was that AIs would want to exterminate all human beings. (For what reason?) >> If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’ Mariama said, ‘Give us time.’
Just a thought: a thoughtful presentation, but I couldn't help thinking that well-informed experts commenting on unknown unknowns may be missing the forest for the trees. Deterministic forecasts tend to be wrong going forward. In my mind, AGI presents almost limitless opportunities that are almost impossible to predict at this early stage. 🤔 IMO
What is up with the question @28:18? I didn't get the punch line.
The famous "Roko's Basilisk" is one of the scariest thought experiments in techno-philosophy. This is a big part of what he's talking about.
@JB52520
A year ago
That has to do with creating a specific evil AI with the knowledge that if you don't help, it'll preserve your life and torture you forever. People even assume it could resurrect the dead. However, humanity is competing to build AGI without even having a test for sentience. To aim for a specific instance of evil sentience is impossible. People with the skill to make something like that aren't going to be forced to make the most evil super intelligence imaginable because they read a creepypasta. No one wants it to exist, and there's no incentive to work on it other than a fear of something no one would ever work on. There's a strong incentive to keep it from existing, since to create it is to doom humanity to hell.
@APaleDot
11 months ago
Roko's Basilisk is a joke
I am horrified like many here, but I'm not in a position of power to be able to do anything about it... the future is looking very grim.
@sciencecompliance235
A year ago
No one in a "position of power" has the ability to stop this. As Hinton said, the incentives are too strong not to keep developing it, but in their own self-interest, the powers of the world may be able to come together to agree on certain things for selfish reasons.
Very useful and informative, well done Dr. Hinton. As of my last tweet, we should all demand funding for alignment from world governments... to be used for massive campaigns to educate US on how to treat our doppelgängers sincerely and kindly, not out of fear, or else we're doomed.
@annestjohn4017
A year ago
Conversation with Gen Z: "I wonder what will make my eyes roll in twenty years?"... (Context: is unicorn a gender?)... "I know - equal rights for robots - although 'robot' will be a derogatory label."
Does anyone know who the guy that asked the question about Oppenheimer and Truman is?
The combination of the guest’s messages and the audience’s laughter makes me think we won’t be laughing for long.
I guess I'm between stages 4 (depression) and 5 (acceptance) of grief in my journey of AI doomerism.
We basically want a future without conflict but there isn't one because things evolve through conflict.
As one of the pioneers in the field of AI, his statement about the significance of AI and the potential implications for humanity definitely grabs attention. It's important to consider the broader context and implications of AI's progress.
They won’t be able to communicate at the speed of a hardwired network once mobile, and even then they would need storage capacity somewhere to keep their information. We simply need to build in the rule that they cannot make decisions, even if they can format information.
I disagree that it's naive to expect people to stop. If everyone is going to die, that makes people sit up and take notice. We don't need to coordinate everyone, we just need several world leaders to get into a room and agree that they don't want their kids or grandkids to die young. China has a different culture but Chinese people are not suicidal.
@joerazz
A year ago
Well said, and I understand what you're saying, but imagine how difficult it is for anything to be accomplished, just in DC, even when lives are on the line for any issue. There are just too many who are dug in on any issue these days to find a common front. Expand that out globally and it's exponentially more challenging. That's what Hinton seemed to believe as well. We can still hope, though.
@adamkadmon6339
A year ago
@@stuckonearth4967 It's true. Even a highly intelligent adversary might deliberately enhance his opponent to the point where he was only just able to beat him.
@toasty8432
A year ago
"We can control it..." they said, "...it will make us billions...", "...it's just a computer, it's harmless..." and "...we will be world leaders..." Greed and power will always prevail. The horse has bolted, the genie is out of the bottle, the cat is out of the bag. Pick your metaphor...
@Sol-ps8ox
A year ago
@@stuckonearth4967 That's what I am trying to make these people understand. What they fear the AI will do is characteristic of a low-intelligence being. A superintelligence will never go on a rampage when so much can be achieved together... pushing the boundaries of civilisation to the next level. The Universe is vast... so vast that a single being will never be able to fill it on its own. A truly self-aware AI would never do all that. What they are attributing to AI is in reality the character of a new super-virus coded to destroy humanity... not an intelligence.
@jimmyshadden6236
A year ago
We are, however, in a brand-new arms race. One that no one can afford to lose!