The Era of Global Risk panel
(21 March 2023, CSER Panel, University of Cambridge)
This event launched an ambitious new book, The Era of Global Risk, as part of the Cambridge Festival 2023. The volume curates 11 specially commissioned essays that give a comprehensive and accessible overview of the emerging science of existential risk studies. It is edited by SJ Beard, Clarissa Rios Rojas, and Catherine Richards of the Centre for the Study of Existential Risk (CSER), with Professor Lord Martin Rees.
The launch event was a panel discussion involving three of the book's contributing authors, with SJ Beard acting as chair and Martin Rees providing introductory remarks. The panellists were: Lara Mani, CSER's lead researcher on communications and public engagement, whose chapter explores the risks from asteroids and volcanoes; Kayla Lucero-Matteucci, a PhD student and CSER Research Affiliate, who has written about the military applications of AI and nuclear winter; and John Burden, a Research Associate at CSER, who has written about tracing the development of our understanding of existential risk from AI, from its initial speculative origins to its current form as a fully fledged science.
The panel offered an engaging, insightful, and hopeful discussion of the multiple overlapping challenges facing humanity in the 21st century. While acknowledging that we are currently living through an era of global risk, the discussion also showed that, with foresight and courage, humanity has the power to bring that era to a close and move on to a future of existential hope.
Speakers
SJ Beard (Chair)
Martin Rees
Lara Mani
Kayla Lucero-Matteucci
John Burden
The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse. For more information, please visit our website:
www.cser.ac.uk

Comments: 4

  • @MarneeMadsen (1 year ago)

    I love the interdisciplinary approach... It is so necessary. The silos that developed in scientific academia and research have, I think, contributed to us facing near-term extinction. So many climate scientists, for example, are atmospheric experts or physicists and do not understand habitats and ecosystems... and have utterly underestimated or outright ignored the fragility and interdependence of habitats. Anthropocentrism is also killing us all... humans and all living things. Thank you all for your work.

  • @tedhoward2606 (1 year ago)

    Some really good stuff in this presentation; yet to me it fails to recognise the greatest class of risk: the human tendency to oversimplify the irreducibly complex. One of the greatest risks in this class comes from oversimplifying our understanding of evolution: only seeing the competitive aspects, and failing to recognise that at every new level of complexity, it is cooperation that allows for the emergence and survival of that level of complexity. And cooperation in this sense is deeply complex, as it demands effective strategies for cheat detection and mitigation if it is to survive long term (which demands recursive exploration of strategy space - all levels).

    The deeply evolved tendencies to simplify start at subconscious levels, so our perceptions are simplified before they reach our subconscious model of reality, and that subconscious model in turn informs what we consciously perceive as "reality". So we are all, necessarily, doubly isolated from whatever "reality" actually, "objectively", is. The more awareness we bring to that, the greater our ability to mitigate the risks encapsulated in it.

    I am clear, beyond any shadow of remaining reasonable doubt, that the only survivable classes of strategy demand fundamental cooperation and respect for diversity; and that has to be non-naïve cooperation, which acknowledges the possibility of cheating at any and all levels, has active strategies to detect and mitigate such cheating, and necessarily involves search across novel strategic territories (all classes, all domains, eternally). So deeply dimensional uncertainties are necessarily present, demanding the highest levels of personal integrity.

    To me, it is clear that multiple classes of risk demand that we build a very large technological infrastructure off planet, and that only really makes sense if the vast bulk of the mass involved comes from the moon, via linear accelerators. And the risks in the power of such technology demand global political and cultural cooperation, which demands fundamental reform of economic and legal systems, which in turn involves very complex risk mitigations against the dangers of centralised power.

    So there is nothing even remotely simple in that, but there is a deep confidence that, whatever the depths of complexity present, cooperation has to be fundamental to the survival of every instance, level, and class of agent present in the milieu. And, of course, AI and AGI are present in that milieu. Any AGI that fails to embody all the levels of strategic risk hinted at in this post is necessarily self-terminating (and probably takes us with it).

    So - lots to do. And it is urgent, and immediate. And there is cause for cautious optimism, even as there is also great urgency at every level for individual responsibility and individual action.
