How Not To Destroy the World With AI - Stuart Russell

Science and technology

Stuart Russell, Professor of Computer Science, UC Berkeley
About Talk:
It is reasonable to expect that artificial intelligence (AI) capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Alan Turing and others have suggested? Will we lose control over our future? Or will AI complement and augment human intelligence in beneficial ways? It turns out that both views are correct, but they are talking about completely different forms of AI. To achieve the positive outcome, a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. Russell will argue that this is possible as well as necessary. The new approach to AI opens up many avenues for research and brings into sharp focus several questions at the foundations of moral philosophy.
About Speaker:
Stuart Russell, OBE, is a professor of computer science at the University of California, Berkeley, and an honorary fellow of Wadham College at the University of Oxford. He is a leading researcher in artificial intelligence and the author, with Peter Norvig, of “Artificial Intelligence: A Modern Approach,” the standard text in the field. He has been active in arms control for nuclear and autonomous weapons. His latest book, “Human Compatible,” addresses the long-term impact of AI on humanity.
About the Series:
The CITRIS Research Exchange and Berkeley Artificial Intelligence Research Lab (BAIR) present a distinguished speaker series exploring the recent breakthroughs of AI, its broader societal implications and its future potential. Each seminar takes place on Wednesdays from noon to 1:00 p.m. in the Banatao Auditorium at Sutardja Dai Hall on the UC Berkeley campus and will be livestreamed on KZread. All talks are free and open to the public.
Support CITRIS as we develop technology solutions for challenges around the world: wildfires, the health of an aging population, the future of a workforce augmented by artificial intelligence, and more. In all we do, we prioritize diversity, equity, and inclusion across each of our research initiatives.
give.berkeley.edu/fund/FH5885000

Comments: 30

  • @waakdfms2576
    @waakdfms2576 A year ago

    This was fantastic. I'm so grateful Stuart is part of this conversation -- we really need him to stay actively involved to help us find our way through his informed, sensible, well-grounded, intelligent, and knowledgeable approach. I'm so glad he signed the "open letter" petition. Thank you for posting this lecture.

  • @MegawattKS
    @MegawattKS A year ago

    This is one of the deepest yet well-rounded discussions I have run across while trying to understand the technology and where it's leading us. Thanks for both the technical content and the multiple levels at which you look at this problem - and for offering a hopeful path forward.

  • @erikals
    @erikals A year ago

    56:00 The summary and questions were perhaps the best part.

  • @tellitasitis
    @tellitasitis A year ago

    The elephant in the room is control of AI. Which country, company, or individual will have the most powerful AI? Will they compete with each other, and to what purpose? There will be no holding back of AI. In fact, there is a race on right now to develop it faster than any competitor, be it country, company, or person.

  • @JoshuaBarretto
    @JoshuaBarretto A year ago

    I think this is the single most insightful and inspiring talk I've seen on the subject. Really gets to the heart of the problem we face and has 'sparks of a solution' to the cliff-like problem we're rapidly driving toward.

  • @Slaci-vl2io
    @Slaci-vl2io A year ago

    I liked this conference very much. Host more of these and the world will be a much better place.

  • @timothyclemson
    @timothyclemson A year ago

    Many thanks. More power to Stuart!

  • @nowithinkyouknowyourewrong8675
    @nowithinkyouknowyourewrong8675 A year ago

    Starts at 5:00

  • @chillingFriend
    @chillingFriend A year ago

    This was brilliant, thanks for the upload!

  • @IngvildCasanas-fr2wd
    @IngvildCasanas-fr2wd A year ago

    Great insight on the responsible development of AI! 😊👍 We need more people like Stuart Russell! #grateful #AIethics

  • @thomasfreund-programandoha961
    @thomasfreund-programandoha961 A year ago

    Thank you for this amazing talk!

  • @claireoneill3955
    @claireoneill3955 A year ago

    So glad I’m majoring in data science

  • @gregniemeyer5616
    @gregniemeyer5616 A year ago

    Common and indexical goals need not be exclusive. Everyone at the coffee shop has an indexical goal of getting coffee, and yet a common goal can emerge by which people wait in line to get their coffee when it is their turn. Of course, we could all storm the shop, but that would at best work once, and then the whole infrastructure would be destroyed. Just as we can layer goals across different time frames with different degrees of precision, we can also layer indexical and common goals. Can machines do the same?

  • @RealRavi
    @RealRavi A year ago

    This guy is a legend

  • @boringmanager9559
    @boringmanager9559 A year ago

    This man just gave me hope 🙂

  • @yifucharleschen
    @yifucharleschen A year ago

    Very informative talk :)

  • @christat5336
    @christat5336 A year ago

    Great

  • @sagesingh
    @sagesingh A year ago

    Question: so does this mean that the 60 Minutes episode with Google saying their AI is scary good and they can't even explain why it's so smart (learning a new language other than English)... does all that have no weight, and is it all hype to self-promote their platform?

  • @Lovin_It
    @Lovin_It A year ago

    12:59 I thought it was predicted to take 10-20 years; if 100 is true, the prediction must have been made earlier. I rather doubt that high a number within the last 10 years. Anyone know?

  • @stupidas9466
    @stupidas9466 A year ago

    Having a moratorium on AI research for a limited time is needed, but... doesn't that just mean that potential "good actors" will be behind potential "bad actors" for exactly as long as the moratorium lasts? I'm sure "evil geniuses" and/or corrupt regimes won't halt a thing, and I don't see any way around it.

  • @TheFartoholic
    @TheFartoholic A year ago

    Bad actors mostly ride on the progress of better-resourced good actors.

  • @ralphhebgen7067
    @ralphhebgen7067 A year ago

    Actually, it is turning out that centralist regimes like China are banning LLMs for the precise reason that they cannot control them. It's almost like a natural hedge. For the moment, at least.

  • @nickwilliams8302
    @nickwilliams8302 A year ago

    And this is of course one of the problems. As we approach being able to build general AI, it's likely to turn into a race. You don't even really need "bad actors" if you have systemic pressures that encourage people to ignore safety concerns in order to "win".

  • @boringmanager9559
    @boringmanager9559 A year ago

    Somehow this argument doesn't work for atomic research, so what's the difference here? Hiring hundreds or thousands of engineers, attracting hundreds of millions in investment, and buying tons of equipment to do what is forbidden at the international level is not something I could pull off from my garage. Yes, North Korea and Iran still develop nuclear systems, but since the work is classified it's harder for them to attract talent, and there are legal grounds for punishing those who help them. I think this argument makes things much worse rather than better. Saying "if we don't blow up the world, someone else is gonna do it, so it's gotta be us" is insanity, and yet Eric Schmidt and other suits like him (purely business people with little knowledge about technology and even less respect towards it) keep repeating this 💩

  • @kyneticist
    @kyneticist A year ago

    The good actors are leagues ahead. Also, as time goes on, the "good" is less apparent as they increasingly pursue their own interests. There are far too many people with stars in their eyes. Human nature (generally) drives people to pursue and exercise their own best interests ahead of others'. The "problem of alignment" needs to be as much about those people as about AI itself.

  • @RichardWatson1
    @RichardWatson1 A year ago

    Great talk. If the AI remains unsure of what our preferences are, surely the shortest path is for it to simply influence our preferences until it can predict them perfectly? I'm not superhuman, so I assume the AI will come up with a better plan than this.
