Safe and Accountable
Hosts Beth Coleman and Rahul Krishnan navigate the challenging terrain of AI safety and governance. In this episode, they are joined by University of Toronto experts Gillian Hadfield and Roger Grosse as they explore critical questions about AI’s risks, regulatory challenges and how to align the technology with human values.
Hosts
Beth Coleman is an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology (www.utm.utoronto.ca/iccit/) and the Faculty of Information. She is also a research lead on AI policy and praxis at the Schwartz Reisman Institute for Technology and Society (srinstitute.utoronto.ca/). Coleman authored Reality Was Whatever Happened: Octavia Butler AI and Other Possible Worlds (k-verlag.org/books/beth-colem...) using art and generative AI.
Rahul Krishnan is an assistant professor in U of T’s department of computer science in the Faculty of Arts & Science (www.artsci.utoronto.ca/) and department of laboratory medicine and pathobiology in the Temerty Faculty of Medicine (temertymedicine.utoronto.ca/). He is a Canada CIFAR Chair at the Vector Institute, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and a faculty member at the Temerty Centre for AI Research and Education in Medicine (T-CAIREM tcairem.utoronto.ca/).
Guests
Gillian Hadfield is a professor of law and strategic management in the Faculty of Law (www.law.utoronto.ca/) at U of T and is the inaugural Schwartz Reisman Chair in Technology and Society. She holds a CIFAR AI Chair at the Vector Institute for AI and served as a senior policy adviser to OpenAI from 2018 to 2023.
Roger Grosse is an associate professor of computer science in the Faculty of Arts & Science and a founding member of the Vector Institute (vectorinstitute.ai/). He is a faculty affiliate at the Schwartz Reisman Institute for Technology and Society and was part of the technical staff on the alignment team at Anthropic, an AI safety and research company based in San Francisco.
00:00 Intro
02:37 Mitigating catastrophic outcomes
04:38 What is an influence function?
05:09 How do the models work?
06:41 "I'll do whatever you ask, just please don't shut me down"
09:48 Is super-intelligence inevitable?
11:49 Regulation of AI
15:02 Registry for AI companies
21:47 Conclusions