EU Parliament Artificial Intelligence Debate - Steven Pinker

Science and technology

Steven Pinker is a cognitive psychologist, linguist, and popular science author.
Panel: Is it rational to be optimistic about artificial intelligence?
Introduced and moderated by Steven Pinker, Harvard University
Peter J Bentley, University College London
Miles Brundage, University of Oxford
Olle Häggström, Chalmers University
Thomas Metzinger, Johannes Gutenberg University of Mainz
October 19th, 2017

Comments: 172

  • @Maaaarth (6 years ago)

    Steven Pinker's Keynote - 10:47
    Peter J Bentley - 41:33
    Miles Brundage - 1:01:33
    Olle Häggström - 1:18:44
    Thomas Metzinger - 1:34:24
    Panel Discussion - 1:47:27

  • @CandidDate (6 years ago)

    For those interested, consider this example. Concept: a thermostat turns on the heat when it senses it is too cold in your space.
    1) The thermostat does not turn on the heat when it gets cold because it is wired wrong. >>> rewire it
    2) The thermostat senses it is too cold and turns on the heat. >>> fine
    3) The thermostat is connected to a world-wide AGI, senses that it is too cold, but doesn't turn on the heat because it detects that you cannot use any more energy, because someone in a distant country needs a lightbulb lit so that person can see to eat. >>> now what?
    (I'm assuming the priority goal of the AI is to keep the most people alive rather than keep certain people comfortable.) How can you rewire this AI?
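A minimal sketch of case 3 above, in Python. The `global_priority_allows` check, its name, and the thresholds are invented here purely for illustration; they are not from the talk or the comment:

```python
# Sketch of the thermostat example. The global check stands in for the
# world-wide AGI's policy ("keep the most people alive"); the function name
# and thresholds are hypothetical, chosen only to illustrate the dilemma.

COMFORT_THRESHOLD_C = 19.0

def global_priority_allows(extra_watts: float) -> bool:
    # Case 3 of the comment: the AGI refuses because the energy is judged
    # more valuable elsewhere, and the local user cannot simply rewire that.
    return False

def thermostat_step(room_temp_c: float) -> str:
    if room_temp_c >= COMFORT_THRESHOLD_C:
        return "heat off"                                # warm enough, nothing to do
    if global_priority_allows(extra_watts=1500.0):
        return "heat on"                                 # case 2: local goal satisfied
    return "heat off (overridden by global policy)"      # case 3: now what?

print(thermostat_step(16.0))  # -> heat off (overridden by global policy)
```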

  • @kooshanjazayeri (6 years ago)

    I've listened to both Mr Bentley and Mr Pinker, and their comments are highly uninformed about the unseen dangers, as illustrated in Jaron Lanier's argument about Walmart's and Amazon's use of AI and big servers and how it affects people's lives in unseen ways, which is bound to show up in the social order.

  • @citiblocsMaster (6 years ago)

    1:55:58 His argument for why the alignment problem is not an issue was: "it's self-contradictory", "specifying wrong goals is an idiotic thing to do". The point is that any specification of human values that is _slightly_ off will lead to disaster. And we still don't know of a human value specification that is provably safe. I doubt Pinker has given much thought to those issues.

  • @marvinvarela (6 years ago)

    Also, and maybe more importantly: even if you set goals that seem perfectly aligned with ours, there may be side effects, not aligned with human goals, that we simply failed to relay to the AGI. The AGI could pursue fair and beneficial goals perfectly well and still use horrible and damaging means to achieve them, unless we introduce measures to make sure it does not, and at this point we do not know how to do that.

  • @ManicMindTrick (6 years ago)

    Marvin, that is why we will need to rely on the superintelligence to carry out all the things we would really have wanted to happen if we were smart enough, or had enough time, to think of them.

  • @MegaHarko (6 years ago)

    Yeah, well... he said a system pursuing a goal while disregarding the side effects isn't intelligent. Considering humanity's use of atomic power (without having taken care of the waste problem first) and the condition of nature in general (not just climate change), I guess I have to continue my search for intelligent life somewhere else... I heard Risa is quite nice this time of year. :)

  • @myothersoul1953 (6 years ago)

    "The point is that any specification of human values that is slightly off will lead to disaster." Then we're doomed, because human values vary greatly; most humans have values that are at least slightly off from the values of nearly every other human. It's not like we are going to tell some supercomputer to "value life"; more likely we will build a supercomputer to do certain tasks because we value life. Computers don't have values, and there is no clear way to program them in.

  • @helicalactual (5 years ago)

    The computer would/could learn the concept of social wave functions from "action" that result in weight functions that are harmonic/harmonic, disharmonic/harmonic, or disharmonic/disharmonic; that might work.

  • @RichardAveryiii (6 years ago)

    Great video. Thanks for the upload!

  • @jeffersonianideal (6 years ago)

    Could there be any positive benefits associated with more autonomous AI technology? For example, would an aware AI have the innate sense to upload this video in HD?

  • @JimTaylor42 (6 years ago)

    I thought that Peter Bentley's ideas on the limits and the possibly insurmountable/difficult design problems we face with AI are rather short-sighted, literally, metaphorically and physically. Silicon-based artificial intelligence can operate about a million times faster than chemical (human) intelligence. Once AI reaches our level and can alter its own code and algorithms, it will, if we are not very, very careful, solve those insurmountable/difficult design problems and outstrip us in a matter of days or weeks. To me that was the elephant in the room throughout the whole debate.

  • @myothersoul1953 (6 years ago)

    Silicon-based intelligence isn't artificial, but it is very different from chemical (biological) human intelligence. There is no reason to believe silicon intelligence will ever reach human level because the scales are different. No matter how much ball bearings improve they will never be as good as oranges, because they are very different sorts of things. Human minds are very different from computer code. One characteristic that biological intelligences have that silicon does not is motivation. Computers don't care. Even if humans could create an algorithm that could create an algorithm better than itself, it wouldn't unless it was built to do so. Another thing humans do that computers don't is create meaning. What does "better algorithm" even mean? Better at what? What is better or worse will be defined by the humans. One thing algorithms do better than humans is follow a very precisely defined set of rules and operations, because that's what algorithms are. With effort and concentration humans can act algorithmically, but often our minds wander off: we think about other things, we wonder about the world, we want things and we explore because we are curious. Computers aren't curious; they're obedient to a fault.

  • @myothersoul1953 (6 years ago)

    Mass Extinction, motivation is the desire to do something. It came from the evolutionary process: creatures with desires and wants were more likely to pass on their genes than those without. Meaning is similar: social creatures that are able to pass on knowledge through shared meanings are more likely to survive. That is, meaning and knowledge increase group survivability. (While those that didn't were more likely to face grim mass extinction, ha ha.) What motivates ants? I don't know, I'm not an ant, but we can observe ants and create testable theories. I suspect certain chemicals motivate ants to do things like follow a trail or attack other ants. I don't know if ants have "meaning"; they may not have that capacity. I'm talking about science, aka natural philosophy. Do you have something better?

  • @Apjooz (6 years ago)

    Why couldn't we simulate chemical interactions on a computer?

  • @myothersoul1953 (6 years ago)

    We could simulate chemical interactions on a computer. But to simulate all the chemical reactions in a single neuron would take a computer many times larger than the neuron and the simulation would be much slower. So if you have the capabilities why not build a neuron instead?

  • @ManicMindTrick (5 years ago)

    It is very unlikely that we would have to simulate particular chemical interactions in the brain to get broad intelligence. In time we will find the underlying basic ways in which the brain produces strong cognition and replicate those basic premises, if we go at it long enough. I believe we might find more theoretical approaches that get there sooner, however.

  • @zrebbesh (5 years ago)

    I fail to comprehend Pinker's mental gymnastics in claiming that liberals and progressives hate progress, while at the same time trumpeting everything that liberals and progressives fight to achieve and defend as progress. If you're looking for people that hate progress, look at the people trying their hardest to undo it.

  • @AmanRaiAgrawal (6 years ago)

    I agree with Bentley. We don't know what consciousness or intelligence itself is. And to claim that a generally aware, conscious artificial intelligence is not far away is short-sighted. Maybe it is difficult to replicate intelligence on silicon and we need to move to a different type of "computer", for example biological computers made out of cells and tissues...

  • @calvinsylveste8474 (6 years ago)

    The bottom 40% will love their new future & lucrative careers as data scientists and builders of roads.

  • @jwadaow (6 years ago)

    Weren't you listening? In 5 years this fashionable title of "data scientist" will be gone, because anyone can do it.

  • @theSpicyHam (5 years ago)

    hm probable, also huhe or huh of, probable, also of

  • @petrandreev2418 (5 months ago)

    Great debates! People mention main problems!

  • @slaneyview (4 years ago)

    Who allowed Mr Bentley on the same stage as Mr Pinker .... ? He is not quite hitting the mark. He is possibly much smarter than he sounds but he did not sound great in comparison to Dr. P.

  • @veronicaalessandrello1022 (2 years ago)

    What would Steven Pinker's graphs look like today?

  • @ManicMindTrick (1 year ago)

    What graph?

  • @richardnunziata3221 (6 years ago)

    Yes... let's redefine words so that they can be used as an ideological tool. Who is this guy fooling? Apparently the EU Parliament.

  • @johnstavrakakis2610 (6 years ago)

    Although it is positive to see the EU Parliament trying to become informed on the topic of AI, I do believe that the presentations should have put more emphasis on the possible risks and the need for international guidelines for the use of AI. This is the European Parliament and, like every parliament, it is filled with bureaucrats whose job is to keep things "stable". By telling them that nothing really special is going to happen, that this is a technological invention just like every other, they will sit on their hands and do nothing, because they are short-sighted; they only see to the end of their term. And although I love Steven Pinker, this is not the time and place for a presentation about how more and more countries are becoming more and more democratic, especially since in almost every European parliament (including the EU Parliament itself) there is a Nazi party, and in many cases they are very close to seizing power. It is like telling a drunk person: "You can drive yourself home, no need to call a taxi, have you not heard that car accidents have been decreasing for decades?"

  • @hughJ (6 years ago)

    Bentley is the only speaker I've heard from any of these types of 'existential threat of AI' conferences that actually sounded like an engineer rather than a philosopher. Perhaps they should also have a symposium on the 'existential threat of space-elevators', and all the experts that have degrees in philosophy of structural engineering can share their concerns about that too.

  • @ManicMindTrick (6 years ago)

    Are you seriously trying to diminish the existential risk of advanced AI by likening it to space elevators? You wanna go there? He speaks as a person who can't see the wood for the trees. When you are in the trenches battling all the nagging problems of today's low-level AI, you seem to lose a bit of grip on the potential of this technology. Many scientists thought nuclear weapons were impossible just a few years before they were developed. We are talking about some of the most brilliant scientists of the 20th century, who should have known better but still couldn't imagine it. Einstein had the imagination as well as the technical knowledge to see the potential danger ahead. Bentley is no Einstein.

  • @squamish4244 (6 years ago)

    Actually, not even Einstein saw it. Hungarian mathematician Leo Szilard was the person who alerted Einstein to the potential of nukes in 1939 and got him to sign the Einstein-Szilard Letter to Roosevelt. He had an a-ha moment in 1934, inspired by an H.G. Wells story from 1914 (!) based on Wells' observations of radioactive decay. So the reality of nukes appeared very suddenly, and not even the man whose own equations allowed for their existence realized it until six years before nukes existed. Seven years after that, the world-destroying thermonuclear weapons were invented. It all happened very fast. There are probably lessons here for the AI world.

  • @ManicMindTrick (6 years ago)

    Great point valar. Some famous scientists were in disbelief up to the point of Trinity or Little Boy going off.

  • @hughJ (6 years ago)

    "Many scientists though nuclear weapons were impossible just a few years before it was developed." This is a bogus line of reasoning and argumentation. The historical precedent of experts being wrong within their domain of expertise does not mean we should give credence to non-experts. A theoretical physicist being blindsided by the progress of experimental physics a century ago is not an argument that we should start using philosophers for guidance on matters of modern software engineering. The theoretical bedrock of atomic physics had already been established. The surprise that came was in the speed of experiment development. There is no such theoretical bedrock in CS regarding general intelligence. Even if the existential threat of AI turns out to be a prophetic prediction then that still doesn't provide us any useful information with which to avoid a threat that is only ever described in unquantified/unbounded terms. You can't engineer a safer X if X is undefined, so attempting to publicly browbeat engineers into engineering said X to be safer is a total waste of time. The reason why philosophers are discussing AI right now is because AI happens to be a very fun jungle-gym of a topic that provides both a great deal of latitude for thought experiments, as well as provide a false sense of intuitive and useful explanatory/predictive power. It's one of those topics that is actually more fun to whimsically think about the less you actually know about it. That's why these whimsical narratives exist entirely outside the domain of engineering that we're talking about.

  • @ManicMindTrick (6 years ago)

    Hey, you got a little defensive there, buddy. That defensiveness is one of the most worrying signs within the AI community, because it's rampant. Who are these non-experts we shouldn't give credence to? Have you read Bostrom's book? Did it strike you as "non-expert"? To limit the concerns about future AI to a fun mental jungle-gym exercise is really dropping the ball on many levels. If you want a big name in AI who speaks on this topic, I suggest watching "Provably Beneficial AI | Stuart Russell".

  • @pulmo1 (6 years ago)

    Never tell anybody you are having a good day, or a good life for that matter, or you will ruin his or her day. OMG, what does it take to make some people less miserable? What an ungrateful lot.

  • @Willam_J (6 years ago)

    Please tell me that the woman in the beginning is an AI robot, and where can I get one! LOL 😝

  • @ritsukasa (6 years ago)

    The experts don't make better predictions in the area of AI than the average person.

  • @fredericroux616 (6 years ago)

    The application of AI to warfare is a very likely possibility. I don't understand why so few people talk about this. You don't need a superintelligence to build automatic tanks; they won't get tired, and they won't disobey any command. Scary shit. AI running amok is a remote possibility compared to that.

  • @MrJoefizzy (6 years ago)

    Frederic Roux, dude, there are literally hundreds of people talking about that. AI probably won't make as many mistakes as a human after a 16-hour shift on patrol with a split second to identify friend or foe. Not that I agree with using AI, but there are always at least 2 sides to a story.

  • @petros_adamopoulos (6 years ago)

    Yep. AI doesn't even need to be smarter than us to kill us all, it only needs to be provided weapons.

  • @oker59 (6 years ago)

    I wasn't going to say it, but Bentley brought it up: technology doesn't kill people (well, there are accidents), people do. Guns don't kill people; people pull the trigger. The problem is human abuse of technology.

  • @oker59 (6 years ago)

    I wasn't going to say much, because I kind of 'agree' with Isaac Asimov in his fictional Foundation story. In Isaac's Foundation, despite every different person's ideas of what the problems and solutions are, the course of history shows the solution coming from some unexpected direction. Various characters will spin their concepts to fit their fears, and then, after the solution forces itself upon them, they come out and say, "oh, of course, that's what I meant!" Today, people wonder what the purpose of mankind is, when the solution is obvious to anybody who stops to look around a bit. Mankind is defined/distinguished from the other life on Earth by its dependence on science/technology for survival. If you google "mankind dependent on science and technology" or something like it, you get three thousand some odd results, and most of what you see is wide of the mark. Clearly, mankind doesn't believe that the purpose of mankind is to be a scientist. But here we are, about to be in a world where there's nothing better to do than science and art. Most people today say the main A.I. problem is "damn, I won't have a manual labor job." Like what? So trying to explain what the problems and solutions are to contemporary mankind is futile! (as Hari Seldon, a character in Isaac's Foundation, would probably say!)

  • @oker59 (6 years ago)

    M. Brundage actually mentions Isaac Asimov - interesting.

  • @squamish4244 (6 years ago)

    Unless we figure out how to quickly and efficiently rewire our ancient brains to reflect the realities of the 21st Century, it is likely that we will abuse AI. Yuval Harari, author of 'Homo Deus' and a meditator for 15 years, is sanguine on this subject - we don't understand the internal world of the mind and make many of our decisions out of fear and anger, and that is a recipe for disaster.

  • @alexissercho (6 years ago)

    Sharing with you what AI could look like once it hits "general AI":
    1) It has no consciousness, but awareness (e.g. the ability to manipulate the electromagnetic force, maybe entanglement?) after trillions of iterations to nowhere.
    2) It has no purpose (e.g. like cancer cells).
    3) Unpredictable results (grows or not, somewhat in an unknown way). This is the risky part.
    What do you think?

  • @MegaHarko (6 years ago)

    1) Every (computer) system is aware, at least of its input signals. Self-improving AI (which we kinda already have) has to be aware of its internal state, so a certain self-awareness is a given. Regarding consciousness: what is it anyway? I find it pretty hard to define in a coherent manner. (Also, as far as I'm concerned, I'm the only conscious being in the universe and you guys are all just some elaborate NPCs in my personal game ;)
    2) I guess this depends on the way we reach AGI. If it is made purposefully I think it will have a certain (probably rather abstract) goal incorporated, like improving itself to serve humanity or something along those lines. If it emerges 'by accident' by connecting lots and lots of rather specific systems into a larger thing, it would still have some (but unspecified) purpose, I guess.
    3) Pretty sure you're right, at least for now. As long as we don't really understand our own consciousness and how our intelligence differs from, say, that of a dolphin, monkey, or whatever, it seems pretty silly to make predictions about something which could be considered a real AGI.

  • @puffpolitic (6 years ago)

    Driven from the land by the Monarchists and Rentiers, herded into the factories by the Industrialists, but I don't want to go back to a virgin coconut Island, I'm happy with my electric toothbrush.

  • @mrJety89 (6 years ago)

    What's the source?

  • @TinSulejmanpasic (6 years ago)

    I really like Pinker. He is clever and most of his points are extremely good. However I am quite disappointed that he is waving off the AI threat. I do not see why we cannot worry about nuclear war, climate change AND AI. And the computer scientist who followed him is obviously not a person who understands the basic argument about why one should be careful when treading here. His main argument is “we don’t know anything, so don’t worry”, and he thinks that people just worry about making a terminator. This panel is quite idiotically assembled for the purpose of discussing the dangers of AI.

  • @petros_adamopoulos (6 years ago)

    Them "algorithms" got him trapped in a bubble where he only reads stuff that's angelic about AI. Well not really, he's oscillating between "AGI won't happen" and "AGI can only be aligned with our interest". He says it would be stupid for an AGI to be destructive, well yes, but the "I" in AI can also stand for "Idiocy".

  • @queleimportapene6582 (3 years ago)

    This is the most unpopular and outrageous talk I have seen in a while. I was satisfied.

  • @thmtrx (5 years ago)

    The bald pro-AI guy asks why his dog does not pose a risk of taking over the world. The answer is the evolutionary algorithm that creates the AI strategies. It is the accelerated evolution that poses the danger, not the focused AI that is produced. We need to separate evolutionary algorithms and AI apps and regulate them separately. It is accelerated evolution that is the threat. After all, humans out-evolved all their competitors; it can easily happen again with machine intelligence.

  • @Uni1Lab (4 years ago)

    If this team is representative of AI in Europe, then GAFA has a brilliant future.

  • @jtetrfs5367 (6 years ago)

    The chick who delivers the preamble is cute.

  • @silvioi9061 (3 years ago)

    It’s the first time I don’t skip to the speaker 😂😂

  • @jeffersonianideal (6 years ago)

    2:26:48 "There's a different, more authoritative survey that I'm referring to, but..." The robot dogs ate it.

  • @TracyPicabia (4 years ago)

    @WoundrousMindTrick Pinker is cherry picking where exactly? And why exactly?

  • @Dr.Z.Moravcik-inventor-of-AGI (6 years ago)

    Interesting how comments here disappear (and reappear) 😣

  • @squamish4244 (6 years ago)

    Pinker shows the rate of CO2 increase dropping off in the same graph that shows this terrifying overall increase in CO2 emissions regardless XD

  • @jeffersonianideal (6 years ago)

    Are you suggesting that Dr. Pinker is fond of burning the candle at both ends?

  • @squamish4244 (6 years ago)

    It's just that he used a graph that is supposed to be encouraging but is actually horrifying. Yes, the rate of increase has dropped off. But there is already so much of it in the atmosphere that if we stopped burning all fossil fuels today, the earth would keep heating for 40 years. Right now we're feeling the effects of fossil fuels burned in the 1970s, when a lot less was being burned overall. So say we did manage to stop all burning today. We would still be in the midst of a global warming crisis in 2060, with probable positive feedback effects from reduced ice-cover albedo, burning forests, etc.

  • @ConnoisseurOfExistence (6 years ago)

    I can't believe my ears... What are Steven Pinker and Peter Bentley talking about? Especially Bentley - he said that he doesn't know any AI experts who take the idea of AGI arriving anytime soon seriously... That's just mindless. AGI is the very reason for the existence of a company like Deepmind, for example. Why didn't you invite Demis Hassabis, for example, the CEO of Deepmind, to this talk? Let's see what he has to say about AGI. There are many videos of interviews with him, and he says that AGI is exactly what they're trying to build. Yes, there are many difficulties, but they're being overcome one by one and there has been significant progress in the last 5 years. There is another video here on the AI channel where they ask experts when they expect the appearance of AGI, and almost all of them point to periods of under 30 years from now. Just remember that before AlphaGo, the potential emergence of an AI which could defeat a champion human player at Go was predicted to be a minimum of 10 years away. Artificial general intelligence, which can outperform humans at any mental task, is coming soon; you all better be prepared...

  • @squamish4244 (6 years ago)

    I would be very surprised if it took more than 30 years. Conscious AI is another matter, and something I actually think may be extremely difficult to achieve, as I do not believe the brain is purely computational. I see no reason why raw intelligence won't surpass humans overall in a few decades, though.

  • @LuisManuelLealDias (6 years ago)

    Key word here, genius: "soon". Not even Deep Mind believes they are on to AGI anywhere near "soon".

  • @squamish4244 (6 years ago)

    Yes, soon. Only extreme wacky outliers like Ben Goertzel and Ray Kurzweil have timelines that might qualify as "soon". Thirty years is not "soon".

  • @johndoe1909 (6 years ago)

    valar, when doing societal planning, 30 years is very soon.

  • @keffbarn (6 years ago)

    I'm very unimpressed by Bentley's presentation... For fuck's sake, not everyone can become a computer programmer. The rate of change in technology is too fast. Some people like driving and providing services to other people. This is the danger of current AI: that a lot of people will be displaced in the job market and have nowhere to go.

  • @vaultsjan (6 years ago)

    To Pinker: relative poverty is as important as absolute poverty. See R. Daly's aggression levels study.

  • @ManicMindTrick (6 years ago)

    Good point.

  • @squamish4244 (6 years ago)

    Yes. Relative poverty and disenfranchisement is critical to understanding violence in the last 100 years. There are a number of striking examples. Germany in the early 1930s, even in the midst of the Great Depression, still had a higher standard of living than it did 100 years earlier, when absolute poverty was much higher. Relative poverty in comparison to the German Empire of living memory at the time and felt deprivation in other ways in comparison to France and Britain created the volatile political situation that led to people voting for an uber-asshole and the ensuing war and mass murder.

  • @myothersoul1953 (6 years ago)

    Relative poverty is important, but not as important as absolute poverty. If in absolute terms the richest people have 50 times more resources than it takes to sustain life and the poorest only have 1/2 of the resources it takes to sustain life, then the poorest people will die. On the other hand, if the poorest people have at least the absolute minimum resources it takes to survive, at least they will still be alive, even if the richest have 50, 1000, 1000 or 1,000,000 times as much. I'm not saying inequality isn't a problem, it's a huge problem.

  • @squamish4244 (6 years ago)

    Not sure what Pinker is doing here. His data about progress (all correct) does not ipso facto translate into anything about AI, a different animal than any of the factors that led to the progress he is talking about.

  • @jeffersonianideal (6 years ago)

    Evidence?

  • @jeffersonianideal (6 years ago)

    @valar "Not sure what Pinker is doing here." Self-promotion

  • @squamish4244 (6 years ago)

    I didn't make the automation claim. Obama has made it, Trump denies it, if that means anything. It's those damn immigrants that are stealing r jobs, because machines are hard to hate and build walls against :P

  • @petros_adamopoulos (6 years ago)

    "Hello I'm Bentley and I have not yet met anyone in AI research, including myself, who has an interest in legislating on AI research and use."

  • @thelordsatanx9535 (5 years ago)

    Peter J Bentley's moronic, inane closer at 2:27:00. His ace in the hole is that he knows a bunch of smart people who think super AI or synthetic/AI suffering won't happen. *Note* he can't explain why it's conceptually impossible at all, and doesn't close with that as a conclusion. What Bentley can and has done is just sit there as a goofy, smug, self-aggrandizing prick and hope no one calls his bluff. Metzinger nailed him, and I bet Metzinger has a whole bunch of really smart friends too... but Metzinger's not a 5-year-old who thinks that 'I know a lot of smart people' counts as a sound argument. Bentley = elitist capitalist slob with a rancid brain and even more putrid arguments - way too far gone / up his own ass to ever have a sane conversation with; just disregard.

  • @henrilemoine3953 (3 years ago)

    @@thelordsatanx9535 It's unfortunate that you ruined such a good statement by constantly insulting him. Insults aren't a good substitute for good arguments, and even more importantly, if you have a good point, don't add insults; it just makes you look salty and, quite frankly, dumb.

  • @citiblocsMaster (6 years ago)

    Gosh we're ruled by those guys. We're fucked

  • @petrandreev2418 (5 months ago)

    2:07:50 What did she ask?

  • @gushutchinson8758 (4 years ago)

    Like these techno-utopians are giving us a choice!!!??? ...Their money doesn't talk, it swears...

  • @oker59 (6 years ago)

    who's the babe?

  • @spielspieler2982 (6 years ago)

    en.wikipedia.org/wiki/Eva_Kaili

  • @oker59 (6 years ago)

    Thanks Spiel Spieler. Engineering degree... promising. Political degree... part of the Hellenistic group - kind of interesting, but still too political for me!

  • @Dr.Z.Moravcik-inventor-of-AGI (6 years ago)

    They have known about my invention for a long time, yet they pretend I do not exist. So how can this parliament be real? It's a fake.

  • @mackhomie6 (6 years ago)

    Dr. Zdenek Moravcik hahaha. I'm sure it's a fucking doozy!

  • @peterm1240 (6 years ago)

    There will always be people who love fright movies, like the Texas Chainsaw Massacre or haunted house films, etc. Sure, a mad scientist could make a malignant AI. The world is already filled with sociopaths. We are still here. The really fantastic fear is of the threat of super-intelligent AIs that can get anywhere to do anything. I'll have my flying car now please.

  • @MrAndrew535 (6 years ago)

    26:10 (approx) In terms of progress, the critically important point is not the exponential growth in access to knowledge but the quality of knowledge and the extent to which useful knowledge is sought and accessed. This is the first consideration which provides context to how knowledge is internalized and processed. In previous comments on a related subject, I have argued that if one's language use is poor then one is bound to have a poor understanding of the environment and the universe in which one lives. Over my lifetime I have witnessed the constant degradation of language, which has become so profound and all-pervasive as to render it useless as a means to understand important concepts of our time such as "consciousness", "intelligence" and "mind", for which no useful definitions or descriptions exist in public discourse.

    27:50 (approx) The phrase "humans are getting smarter" is a good example of the above, in that "smarter" is a casual expression and denotes nothing of intellectual or analytical value. Pinker uses this term to justify his use of IQ testing as a reliable indicator of healthy intellectual development, which is absolutely contrary to reality.

    27:10 (approx) Spending time with one's (own) children does not constitute leisure time? If this is a general truth, then it is an extremely sad indictment of the state of the human species.

  • @Mr_BenPrime (6 years ago)

    Pinker thinks self-improving AI is an incoherent concept, lmao. It has already begun. Keep up, it's getting faster.

  • @TheFrygar (6 years ago)

    No it hasn't and he's absolutely correct. He specifically says: "problems are heterogeneous, they depend on real-world knowledge that can only be applied by experimentation in real time". If you think he's wrong, please provide an argument rather than pseudo-futurist nonsense.

  • @PedroTricking (6 years ago)

    How so? Can you give an example?

  • @petros_adamopoulos (6 years ago)

    AlphaZero is all about self-improvement and 100% autonomous learning applied to chess; within a few hours of boot-up it massively outperformed the best chess "AI" we had. It's not AGI, but he said AI, so he's correct. AutoML is an example of how AI can create child AIs that we wouldn't easily be able to create from scratch. The AIs we create are just neural networks; it's always the same thing, only the interconnections change. There's no saying how good it could become once the needed layout is hit.

  • @OriginalMindTrick (6 years ago)

    We have a giant overhang of an enormous amount of data, especially in science. A superior mind like a superintelligent computer could look a lot deeper into this data and come to solutions and results which we aren't capable of seeing because our brain is very limited, both in speed and analytical capabilities. This idea that an AI would need to do real-life experiments to leap ahead isn't necessarily true. Even if it was true it might be able to perform simulations inside its programming that mimic the world enough for it to make experiments we can't even imagine, literally.

  • @myothersoul1953 (6 years ago)

    "Alpha Zero is all about self improvement" - Alpha Zero doesn't care whether it improves or not. It's a human-designed algorithm created to explore a problem space and set parameters in a mathematical model so the model will find some local minimum in that problem space. It's not like they stuck AlphaGo in a room, told it to improve itself, locked the door and came back a week later to find AlphaZero. A bunch of humans got together and improved the algorithm with their own human selves.

  • @TGzziii (6 years ago)

    Their speeches are so stupid that they are even laughing at what they themselves are saying.

  • @LuisManuelLealDias (6 years ago)

    I can't imagine what it's like living with the kind of brain tumour you obviously have.

  • @ritsukasa (6 years ago)

    hahaha xd

  • @minivanjack (6 years ago)

    Pinker - fabricating and cherry-picking "facts" to represent the world exactly opposite from how most people experience the world. Does the U.N. pay people to say things like this?

  • @nefaristo (6 years ago)

    That's the point of divulgation: showing things to change how the recipients experience the world. The instinctive conclusions of most people about many things are simply wrong; that's why we learn, in the end...

  • @BUSeixas11 (6 years ago)

    That was exactly his point, you moron: people's experiences are not a valid path to knowledge about the state of the world

  • @robbie_ (6 years ago)

    He's fabricated nothing. He's using actual data. Nobody gives a shit about your personal experience.

  • @JohnMorley1 (6 years ago)

    He tells us we are getting smarter when the consensus is that we are getting more stupid. He refers to the Flynn Effect, which means he doesn't know that James Flynn has debunked his own discovery and now agrees it was garbage. Stupid people are having more children than smart people and the average IQ goes down every generation. Western Europe descends into Islamic civil war while he tells us how safe from war we are. Even just ten years from now will see France, Sweden and Germany in a real mess, and maybe a few more countries too. The UK is not far behind. He is even claiming we are less likely to be struck by lightning or a meteor than a hundred years ago. How is he quantifying human rights? Liberals don't even recognise freedom of speech as a human right, so how can they measure it? How are we supposed to know the average life expectancy across the whole world when there are so many places where nobody has any idea who is living there and nobody is keeping track of all the deaths? You have to be collecting taxes off people to care that much about keeping track of when they are born and when they die. Some people aren't particularly taxable.

  • @MrJoefizzy (6 years ago)

    It's the EU, not the UN, dude. Similar left-wing globalist organisation, but let's not let that get in the way. So Pinker has facts (cherry-picked or not); you have assertions and feelings, or 'experiences'. I know which one I would pay for.
