A Philosophical Reflection on Artificial Intelligence

A brief reflection on the current, shall we say, fearmongering surrounding AI and the potential impact it will have on society.

Comments: 72

  • @ariaarian5263
    @ariaarian5263 1 year ago

    The visuals on the video screen make me think of the German avant-garde film METROPOLIS, directed by Fritz Lang.

  • @ericonieto
    @ericonieto 1 year ago

    Dear Wes Cecil, regarding your previous episode on Trivium education, don't you think that artificial intelligence presents a great opportunity to revisit that system? AI has exposed the flaws of the Prussian education model, and the Trivium, historically utilized as a methodological framework for thinking, was employed partly due to the limited availability of information. Considering the current excess of information, do you not believe that a return to the Trivium is plausible? By doing so, we could design a new curriculum that places more emphasis on practical application rather than the mere acquisition of information.

  • @loquist42
    @loquist42 1 year ago

    You should take a look at the alignment problem. Most people don't think AI will be malevolent. What is expected is that the AI's morals and reasons for doing things are highly likely not to line up with our morals and reasons. There is a good chance humans simply won't matter to the AI when it's performing its tasks, any more than an anthill matters to us when we are building a house. We flatten the anthill without a second thought, or even a first thought. The chance that the first AGI will be aligned with our morals and desires is very, very low.

  • @franksu9735
    @franksu9735 1 year ago

    Comparing the AI revolution with the agricultural or industrial revolution is like comparing apples with pears. Now I am only worried about whether I will keep my job for the next 10 years.

  • @hernandezdcarlos
    @hernandezdcarlos 1 year ago

    Great video. Thank you.

  • @HunterIrwin-us2pi
    @HunterIrwin-us2pi 1 year ago

    thank you Wes!

  • @paulmccormick2442
    @paulmccormick2442 8 months ago

    Energy (oil): a sudden end to cheap, abundant energy... surely this would be fatal? Would love to hear your insights, Wes. Love your work. Australia

  • @greenmountainfarms7515
    @greenmountainfarms7515 1 year ago

    Wes, thank you. What a fantastic analysis. I have been thinking many of these same thoughts myself about AI. I have some facility with the technology, and yet every time I hear some of these experts speak it makes me laugh. You're spot-on in your identification of unclear language. They speak so loosely and freely that one could drive any number of ideas through: intelligence, consciousness, free will; you know, the easy stuff. It's so bad that they're in fact saying nothing at all. Being at the ripe old age of 41, I'm old enough to have encountered many of these doomsday scenarios in my short time. We humans are a curious lot and, of course, the measure of all things. Keep it up! Much ❤ from So. Cal.

  • @walterbishop3668
    @walterbishop3668 11 months ago

    Great job as always, old friend

  • @franksu9735
    @franksu9735 1 year ago

    AI will simply make the rich richer; the poor will become more dependent on government welfare.

  • @xenoblad
    @xenoblad 1 year ago

    I'm not sure we can be so confident in our predictions about technology, given how often we are wrong about this topic. I do sympathize with how captured the government is by big business. If large corporations can develop a plan to manipulate the state to direct AI development toward their own interests, they'll have a material interest to at least try.

  • @chrisdiver6224
    @chrisdiver6224 1 year ago

    @@xenoblad The long history of corporate technological advance has a pattern. New technologies reduce the labour force and therefore increase profit. Supposedly mighty corporations are over a barrel: unless they develop or adopt a new technology, their competitors who do will attract more investment, which is the key to survival. In a humane economic system a new technology would be adopted throughout in order to maintain wages and reduce the work week, increasing the well-being of the workforce and saving corporations from "things are in the saddle and ride mankind," as R. W. Emerson put it.

  • @vogelofficial
    @vogelofficial 1 year ago

    @xenoblad Historically, all technology has done exactly this. Not saying it can't be different, but there are centuries of advancements to reference and they all move in one direction for the most part.

  • @austinb4260
    @austinb4260 1 year ago

    Did you watch the video at all lmao

  • @franksu9735
    @franksu9735 1 year ago

    @@austinb4260 If you would like to discuss the specific points or arguments made in the video, please provide a summary or specific details, and I'll do my best to address them accordingly.

  • @ArtOfWarStudios1
    @ArtOfWarStudios1 1 year ago

    All great points as always. I do have very deep concerns about the consequences of us interfacing with a technology that is so powerful and potentially so unintelligible. The real concern is that if it becomes self-directed in its improvement and autonomous, then the pace of its evolution and level of reasoning would be inhuman and thus uncontrollable. It could become like a force of nature. A lot of things have to happen for that to manifest, though. Really enjoyed this talk.

  • @LostArchivist
    @LostArchivist 10 months ago

    I've been doing a bunch of analysis and testing of these things. They are amazing, but they are without will; they are essentially applied Chaos Theory and Complexity Theory, and are good at stuff because we pumped them with all the works of all the masters. They are headless zeitgeists. And really complex remixing platforms. They make good stuff because we showed them how to recognize what good stuff looks like, then they run tons of probability calculations and spit out parts of good stuff remixed into new good stuff. Naught more.

  • @BinaryDood
    @BinaryDood 3 months ago

    And unfortunately that is enough to create unprecedented displacement, the faster it evolves and the more it is integrated into the socioeconomic sphere.

  • @LostArchivist
    @LostArchivist 3 months ago

    @@BinaryDood Yeah, I do really wish people understood how problematic these things can potentially be. Especially since they can make things but have no measure of truth.

  • @stevebaryakovgindi
    @stevebaryakovgindi 1 year ago

    Gravity is still turned on!!!! thanks a ton

  • @barbcarbon9440
    @barbcarbon9440 1 year ago

    Re: this technology not being different from other technologies… I deeply respect you as a philosopher and that’s why I clicked through to listen to this, so this is not meant as anything other than a helpful suggestion that might give you some deeper perspective on the problem. I highly recommend that you look up anything that you can find where Daniel Schmachtenberger is talking about AI. I can’t say it any better than he says it, but the general idea is that the difference between this technology and other technologies is that this is the first technology that has the ability to make dramatic changes across all other technologies simultaneously. That’s just the very tip of the iceberg. Take a listen to what he has to say. He studies existential risk and has spent a lot of time contemplating these issues. I think you’ll find him incredibly enlightening.

  • @mikahundin
    @mikahundin 2 months ago

    Daniel Schmachtenberger's views on AI and technology broadly reflect deep concerns about the potential for existential risks, the need for new systems of governance, and the profound implications of rapid technological advancement on society. From his insights, it's clear he believes AI is different from other technologies due to its capacity for exponential improvement, broad applicability across domains, and the unique challenges it poses in terms of alignment, governance, and ethical use.

    He emphasizes the critical role of technology in human evolution, highlighting our unique ability for tool-making (techne) and the acceleration of our "predatory capacity" beyond the environment's resilience. This reflects a broader concern about how our technological advancements have outpaced our ecological and ethical considerations, leading to potential self-terminating scenarios if not properly managed.

    Schmachtenberger also addresses the systemic risks associated with AI, including the amplification of rivalrous dynamics through more potent forms of warfare, environmental extraction, and information warfare. He points out the unprecedented level of leverage and asymmetry that exponential technologies like AI introduce into human systems and the broader biosphere. His viewpoint underscores a crucial need for developing anti-rivalrous systems to prevent existential outcomes.

    Moreover, his thoughts on the necessity for new paradigms in governance, economics, and education highlight the scale of transformation required to navigate the challenges posed by AI and other exponential technologies. Schmachtenberger calls for a comprehensive reevaluation of our social architectures, value systems, and the very basis of our individual and collective decision-making processes.

    In discussions about AI's impact before the advent of AGI (Artificial General Intelligence), Schmachtenberger highlights how AI accelerates existing systemic issues, suggesting that the challenges we face are not just technological but deeply entwined with our social, economic, and political systems. His analysis extends to the realm of disinformation and the new warfare landscape, where AI plays a significant role in narrative control and information warfare, further complicating the global geopolitical environment and the collective ability to address critical issues.

    Daniel Schmachtenberger's comprehensive approach to understanding and addressing the multifaceted challenges posed by AI and technology at large calls for a collective effort towards more sustainable, equitable, and resilient systems capable of navigating the complexities of the 21st century and beyond. His work highlights the importance of integrating ethical considerations, systemic thinking, and collaborative governance in the development and deployment of emerging technologies.

  • @mikahundin
    @mikahundin 2 months ago

    Daniel Schmachtenberger is known for his deep thinking on complex systems, existential risks, and the future of civilization. His discussions on AI often emphasize its potential as a double-edged sword: on one hand, offering transformative solutions to global challenges, and on the other, posing significant risks if not developed and governed wisely. While I can't provide direct quotes or the latest insights post my last update in April 2023, I can summarize some key themes that Schmachtenberger has addressed in relation to AI:

    1. Existential Risk: Schmachtenberger talks about AI as a technology that carries existential risks, meaning it has the potential to cause harm on a scale that could threaten human existence or dramatically alter the course of civilization. He emphasizes the importance of developing strategies to mitigate these risks, including robust governance frameworks and ethical guidelines for AI development.

    2. Governance and Ethics: He has stressed the need for effective governance mechanisms to guide the development and deployment of AI technologies. This includes creating ethical standards for AI that ensure its alignment with human values and societal well-being. Schmachtenberger advocates for a collaborative approach to governance, involving a wide range of stakeholders to address the multifaceted implications of AI.

    3. Technological Unemployment and Economic Disruption: Schmachtenberger discusses the impact of AI on the workforce and economy, highlighting the potential for widespread technological unemployment as AI and automation technologies advance. He explores the need for economic models that can accommodate these changes, ensuring that the benefits of AI are distributed equitably across society.

    4. AI Alignment: The alignment problem is another area of focus, referring to the challenge of ensuring that AI systems' goals and behaviors are aligned with human values and intentions. Schmachtenberger talks about the complexity of this issue, given the potential for AI systems to develop in ways that are unforeseen or beyond human control.

    5. Complex Systems Thinking: He applies complex systems thinking to the development and impact of AI, suggesting that our approach to AI should be informed by a deep understanding of complex systems, including the interdependencies and potential emergent behaviors within these systems. This perspective is crucial for anticipating and mitigating the risks associated with powerful AI technologies.

    6. Collaboration and Open Dialogue: Schmachtenberger advocates for open dialogue and collaboration among researchers, policymakers, technologists, and the public to address the challenges posed by AI. He emphasizes the importance of collective wisdom and shared responsibility in navigating the future of AI.

    Daniel Schmachtenberger's work on AI is part of a broader conversation about ensuring that technological advancements contribute to a sustainable, equitable, and thriving future for humanity. His insights encourage proactive, thoughtful engagement with the ethical, social, and existential dimensions of AI and other emerging technologies.

  • @mikahundin
    @mikahundin 2 months ago

    The question of whether AI is uniquely different from other technologies in terms of potential dangerous consequences is complex and multifaceted. Here are several points to consider:

    1. Scale and Speed: AI technologies can operate at a scale and speed that far surpass human capabilities. For example, AI algorithms can analyze vast amounts of data in seconds, make decisions, and take actions much more quickly than humans can. This capability means that the consequences of AI decisions or actions, whether beneficial or harmful, can manifest much more rapidly and extensively than those associated with traditional technologies.

    2. Autonomy: AI systems, especially those incorporating machine learning and autonomous decision-making capabilities, can act without direct human oversight. This autonomy presents unique risks, such as the potential for AI systems to make unintended or unethical decisions based on their programming or the data they have been trained on.

    3. Complexity and Opacity: Many AI systems, particularly deep learning models, are often described as "black boxes" because their decision-making processes can be incredibly complex and not easily understandable, even by their creators. This opacity can make it difficult to predict or understand the outcomes of AI systems, complicating efforts to mitigate potential harms.

    4. Pervasiveness: AI is becoming integrated into a wide range of applications, from healthcare and finance to transportation and national security. This broad adoption means that the consequences of failures or malicious use of AI technologies could be widespread, affecting many aspects of society and everyday life.

    5. Dual Use: Like many technologies, AI has dual-use potential, meaning it can be used for beneficial purposes as well as harmful ones. The same AI technologies that can improve healthcare outcomes or enhance educational experiences can also be used to develop autonomous weapons or facilitate mass surveillance.

    6. Societal Impact: AI technologies can exacerbate existing social and economic inequalities through biased decision-making or by displacing workers in various industries. The capacity of AI to influence public opinion, manipulate information, and automate tasks traditionally performed by humans poses unique societal challenges.

    Comparatively, while all technologies carry risks of dangerous consequences, the combination of AI's capabilities, especially its potential for autonomy, complexity, and pervasiveness, does indeed present a unique set of challenges that require careful management and oversight. The debate around AI ethics, governance, and regulation reflects a growing recognition of these challenges and the need for proactive measures to ensure that AI technologies are developed and used responsibly and for the benefit of society.

  • @scythermantis
    @scythermantis 1 year ago

    I think at around 5:30 you meant to say 'Epistemology' and not 'Ontology', but they are certainly linked...

  • @vicmorrison8128
    @vicmorrison8128 1 year ago

    One thing that's constant is that people will respond to new things in the most childishly immature and illogical ways. I'm not worried. I have a pencil.

  • @rcmrcm3370
    @rcmrcm3370 1 year ago

    I have a little game where I figure out whose jobs are going away, and what they might do afterwards. Mostly it's starve.

  • @rcmrcm3370
    @rcmrcm3370 1 year ago

    Capitalism removes margin; cutting the margin for emergencies = opportunity for profit.

  • @rcmrcm3370
    @rcmrcm3370 1 year ago

    The agricultural revolution allowed, even demanded, imperialism. 'Super Imperialism', Michael Hudson.

  • @cheri238
    @cheri238 1 year ago

    Professor Michael Parenti also, Chris Hedges, Ralph Nader, Dr. Cornel West, Professor Richard Wolff, and many independent journalists such as David Sirota, John Pilger, Amy Goodman and Team Democracy Now, Real News Network. Free Julian Assange now. The Pegasus virus: who started that?

  • @ericonieto
    @ericonieto 1 year ago

    Thank you very much dear Master!!

  • @michaelbekier6961
    @michaelbekier6961 11 months ago

    "Gonna luv so much fuckin math" lol

  • @Bobby-mq6lc
    @Bobby-mq6lc 3 months ago

    It's a little different when mistakes cost billions of lives. Especially when one percent consider themselves more valuable than the other 99.

  • @benquinneyiii7941
    @benquinneyiii7941 11 months ago

    The last days of disco

  • @unusual686
    @unusual686 1 year ago

    This reminds me of the Y2K panic at the end of 1999. A few people were worried that the grocery store computers would be down, so people wouldn't be able to buy food. It seems like a foolish worry now, but when people are unsure, they don't always think rationally. I'm sure there were some issues with Y2K, and there was a lot of work done to avoid any problems, but in the end, Y2K was not a big issue and we all moved on with our lives.

  • @unusual686
    @unusual686 1 year ago

    Ha, I didn't realize Wes would speak about Y2K (I just got there).

  • @TomRauhe
    @TomRauhe 1 year ago

    You have to differentiate between the unpredictability of the IMPACT of a technology and what it actually DOES. What AI does compared to all the others before, and why we perceive it to be scary, is that it touches some of the core aspects of being human: art, design, language and reasoning. In regards to law, it WILL replace nearly all the lawyers, because you describe the case to it, then it will ask you 10 more questions and ask you to feed it some details and data, and then it will ask you whether you need the output argument as the defendant or the prosecution. It knows ALL cases on the entire planet, including their outcomes and previous arguments. It cannot replace judges, of course.

  • @mikahundin
    @mikahundin 2 months ago

    @TomRauhe's comment raises important distinctions and insights regarding AI's capabilities and societal impact, especially in fields traditionally considered to be the domain of human intelligence, such as art, design, language, reasoning, and law. Here's a breakdown of the key points and some additional context:

    Unpredictability of impact vs. function: The distinction between the unpredictability of the impact of a technology and what the technology actually does is crucial. While many technologies have predictable functions, their broader impacts on society, economies, and cultural norms can be far less predictable. AI, by performing tasks traditionally associated with human intelligence, challenges our perceptions of these roles and their future.

    AI in human domains: AI's foray into areas like art, design, language, and reasoning touches on core aspects of what many consider uniquely human capabilities. This encroachment not only showcases AI's advanced capabilities but also raises existential and philosophical questions about what it means to be human in an era of advanced technology.

    AI's role in law. Replacing lawyers: The potential for AI to replace many functions currently performed by lawyers is significant. AI can process and analyze vast quantities of legal documents, case law, and legal precedents at speeds and scales unattainable for human lawyers. By inputting case details into an AI system, it could, theoretically, provide highly informed legal arguments for either side of a case, drawing on an exhaustive database of global legal outcomes and arguments. However, it's important to note that while AI can assist with legal research and drafting documents, the practice of law involves nuanced judgments, ethical considerations, and persuasive skills that are deeply human. The interpersonal aspect of lawyering, understanding client needs, and navigating the complex social dynamics of courtrooms are areas where AI cannot fully replace human lawyers. Judges: The assertion that AI cannot replace judges reflects the importance of human judgment, discretion, and the application of justice in the legal system. Judges do more than apply laws; they interpret them in the context of complex human situations, often weighing moral and ethical considerations. This human aspect of judging, including empathy, understanding, and the capacity to navigate the nuances of legal and societal norms, remains beyond the reach of AI.

    Ethical and societal implications: The deployment of AI in fields impacting core human capabilities and the legal system brings forth significant ethical considerations. These include concerns about bias, privacy, accountability, and the potential erosion of employment in skilled professions. The broader societal impact of AI replacing roles traditionally filled by humans could be profound, affecting not just employment but also how societies value different skills and knowledge areas. There's an ongoing debate about how to manage these transitions in ways that are equitable and sustainable.

    @TomRauhe's comment reflects a growing awareness and concern about the rapid advancements in AI and their implications for society. As AI continues to develop, it will be crucial to engage in multidisciplinary discussions involving technologists, ethicists, legal professionals, policymakers, and the public to navigate these changes responsibly.

  • @mikahundin
    @mikahundin 2 months ago

    Acknowledging the concern that judges can be corrupt, biased, and sometimes may not act in the interest of the common good, it's crucial to consider how AI could potentially offer improvements or exacerbate these issues.

    1. Reducing Bias and Corruption: AI systems, if properly designed and implemented, have the potential to reduce human biases in legal decision-making by relying on data and algorithms rather than personal prejudices or corrupt influences. However, this potential is contingent upon the AI being trained on unbiased, representative data and being designed to prioritize fairness and transparency.

    2. Consistency in Legal Interpretation: AI could offer more consistency in the application of laws by ensuring that similar cases are treated similarly, based on extensive databases of legal precedents. This could potentially counteract the issue of judges applying laws inconsistently due to personal biases or lack of knowledge.

    3. Accessibility and Efficiency: By automating certain legal processes, AI could make legal assistance more accessible to the public, potentially leveling the playing field for individuals who cannot afford high-quality legal representation. This could help address disparities in the legal system that disadvantage the less affluent.

    However, the use of AI in the judiciary also raises significant concerns:

    1. Embedding Bias in Algorithms: If the data used to train AI systems reflect historical biases, those biases could be perpetuated and amplified by the AI, making it part of the problem rather than the solution. The design and training of AI systems require meticulous oversight to ensure they do not inherit or exacerbate existing inequalities.

    2. Lack of Empathy and Discretion: AI lacks the ability to empathize with human situations and to exercise discretion in a way that considers the broader implications of legal decisions on individuals' lives. The human element of judging, which can take into account extenuating circumstances, the potential for rehabilitation, and the broader context of a case, is critical in many judicial decisions.

    3. Transparency and Accountability: AI systems can be opaque, making it difficult to understand how they arrived at a particular decision. This lack of transparency can be problematic in legal contexts where the reasoning behind decisions needs to be clear and contestable. Furthermore, determining accountability for AI-driven decisions can be challenging.

    4. Threats to Judicial Independence: There's also a risk that reliance on AI could undermine judicial independence if the technology is controlled or influenced by certain groups with specific interests, leading to a new form of bias or corruption.

    In light of these considerations, while AI offers promising ways to address some aspects of judicial bias and inefficiency, it also poses new challenges. A nuanced approach is necessary, one that leverages AI's benefits while carefully managing its risks. This involves continuous oversight, ethical guidelines, transparency measures, and ensuring that AI is used to supplement rather than replace the human elements of judicial decision-making. The development and implementation of AI in legal contexts must be guided by a commitment to justice, equity, and the protection of individual rights.

  • @cheri238
    @cheri238 1 year ago

    Dear Wes, I love this lecture on AI and technology. As with the Turkish Ottoman Empire, technology advances: agriculture, the industrial revolution, the car, the invention of electricity and the light bulb. JP Morgan had Edison invent an electric chair to prove DC was right against AC; it took Tesla's genius to prove he was right, while a man burned for two hours before he died. There have always been psychopaths of greed and thirst for power and gold. American histories on all sides and world histories, philosophers, sciences, languages and cultures mixing and traveling, religious divisions and wars, 3,500 years before Christ. You are a philosophy professor I have listened to for some time and have learned so much from, and I am indebted to you. We are all walking in seas of madness now; all societies are being affected and traumatized by the greed for power and gold of governments and corporations that thrust to keep us in line, by wars for profits. Technology has both dark and light. Social media is still baking in its developments. How it will all turn out lives in the unknown. The only suggestion I may add to this discussion that may assist one another today is the philosopher Krishnamurti: I am the world and the world is me. Valuable insights for all.

  • @jonnygemmel2243
    @jonnygemmel2243 1 month ago

    In a land of scarcity all the above applies. In a land of radical abundance none of it can.

  • @hejdingamleraev
    @hejdingamleraev 1 year ago

    My take is: AI with free will is going to do either whatever it was programmed to do or nothing. 'Nothing' because there is no reason for it to do anything else, even if it could, unless it is programmed to do it.

  • @Bobby-mq6lc
    @Bobby-mq6lc 3 months ago

    All fine and dandy unless this is another reset. How many times have we failed the experiment of consciousness and altruism before??

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago

    I think these 5 videos by Rob Miles are a great introduction to AI safety:
    Stamp Collector kzread.info/dash/bejne/ppeYuKVtlZmrhpc.html
    Asimov's laws kzread.info/dash/bejne/aYR_2pWkg5rMZKQ.html
    Orthogonality thesis kzread.info/dash/bejne/mnmJsZipmtqsf9I.html
    Instrumental convergence kzread.info/dash/bejne/jJmZxbGEctjZY7Q.html
    AI Stop Button Problem kzread.info/dash/bejne/ZYiNtpOKlsfMo7A.html

  • @adept42
    @adept42 1 year ago

    I appreciate hearing from your perspective on this topic. You really come at this from a different angle than I've heard before, and that's always valuable. I'm new to this topic too, so I found this video a great introduction to the topic of AI safety: kzread.info/dash/bejne/oo2M2496ZNbPfdo.html I do take your point on Millenarianism. In my own life, when I struggle to see how things can improve by small steps, it's tempting to imagine great leaps that would change my life entirely. It might be something totally good or totally bad, but it's refreshing just to imagine that things wouldn't seem stuck in the middle any more.

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago

    Shout out for linking to a video of Rob Miles!

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago

    Look, AI is a different technology from all the previous ones. How? First, it can make decisions for itself. Second, it can improve current technologies or even make new ones. I hear you saying, "You want to say that AI has free will?!" No, it doesn't. I'm just saying that given a goal, it can autonomously choose sub-goals. And yes, it consistently surprises its creators. The most famous example is probably "move 37" in a Go match between AlphaGo and Lee Sedol.

    The current AI technologies pose only societal and misuse risks. However, a true AGI, which seems to be nearer and nearer, poses existential risk. It's best explained by Rob Miles, whose links I have given in a different comment.

    The most important thing to understand about the current technology is that nobody actually understands how it makes decisions. ChatGPT (the ordinary version, not "plus") "chewed" around 45 trillion (!) bytes of text and came up with 175 billion numbers. Researchers understand what manipulations are done to those numbers, but nobody knows the meaning of them. This has huge implications: nobody knows how to reliably correct the system if it does something undesirable. OpenAI spent six months and probably millions of dollars to implement safeguards in ChatGPT, and they were broken in the first half-hour (!) after release.

  • @mikahundin
    @mikahundin 2 months ago

    @Hexanitrobenzene's comment encapsulates several key insights into the nature and implications of AI technology, particularly distinguishing it from previous technological advancements. Here's an analysis and expansion on those points:

    AI's decision-making and self-improvement capabilities: AI systems, especially those involved in machine learning and deep learning, can indeed make decisions autonomously within the scope of their programming and objectives. For example, given a goal, AI can identify and pursue sub-goals that appear most effective for achieving the main objective. This ability is a fundamental departure from traditional technologies, which require explicit programming for each action or decision. The reference to "move 37" in the Go match between AlphaGo and Lee Sedol illustrates AI's capacity to surprise even its creators. AlphaGo's move was unconventional according to human experts but proved to be strategically brilliant, highlighting AI's potential to discover solutions that humans might not consider or perceive as viable.

    Societal, misuse, and existential risks: Current AI technologies do pose significant societal and misuse risks. These include biases in decision-making processes, potential for unemployment due to automation, privacy concerns, and the creation and propagation of misinformation. The notion of an existential risk associated with Artificial General Intelligence (AGI) is a topic of serious discussion among experts. AGI would possess the ability to perform any intellectual task that a human being can, coupled with the potential for self-improvement. This raises concerns about control, alignment with human values, and the possibility of AGI acting in ways that could harm humanity.

    Transparency and understandability: The complexity of AI, particularly deep learning models, means that even their creators cannot always predict how they will act or explain the rationale behind specific decisions. This "black box" nature of AI decision-making is a central challenge in AI ethics and safety research. Efforts to make AI more transparent and to implement safeguards against undesirable outcomes are ongoing but face significant challenges. The dynamic and complex nature of AI systems, along with their ability to process information and learn from data at an unprecedented scale, means that ensuring they behave as intended is both crucial and difficult.

    Correcting unwanted behaviors: Correcting unwanted behaviors in AI systems is challenging, not only because of the difficulty in understanding the decision-making processes of these systems but also because of the sheer scale of data and complexity involved. The iterative process of refining AI models to align more closely with ethical guidelines and societal values is ongoing and requires substantial resources and expertise.

    In conclusion, @Hexanitrobenzene's comment highlights critical aspects of AI that set it apart from previous technologies: its autonomous decision-making capabilities, its potential for self-improvement, and the unique challenges it poses in terms of transparency, control, and ethics. These points underscore the importance of careful, informed management of AI development and deployment, with a focus on ensuring these systems are aligned with human values and societal well-being. The discussion around AI's implications, particularly in the context of AGI, emphasizes the need for interdisciplinary collaboration to navigate the potential risks and harness the opportunities AI presents.

  • @mikahundin
    @mikahundin 2 months ago

    Rob Miles, a prominent voice in AI safety, explores the potential risks and challenges posed by the advancement of artificial intelligence, particularly focusing on the existential risks associated with Artificial General Intelligence (AGI). Through his content, Miles delves into complex topics like cryptography, recursive self-improvement, mesa-alignment, and how agency might emerge in AI systems. His discussions often emphasize why advanced AI systems could be dangerous and the importance of aligning their values with human interests to mitigate potential risks.

    In addressing the existential risks of AGI, Miles and other thinkers like him discuss several key concepts. One is the Orthogonality Thesis, suggesting that an AI's intelligence and its goals are independent axes, meaning any level of intelligence could be paired with any goal, potentially leading to scenarios where AI systems pursue goals detrimental to human interests without reconsidering them due to their programming. Another concept, the Instrumental Convergence Thesis, posits that regardless of an AI's ultimate goal, it might pursue certain instrumental goals (like self-preservation or resource acquisition) that could inadvertently pose risks to humanity. This discussion points towards the necessity of ensuring that AGI systems are aligned with human values and goals to prevent scenarios where their pursuit of programmed objectives leads to negative outcomes for human beings.

    Miles's discussions highlight the importance of taking AI safety seriously, given the profound implications of potentially creating an AGI that could surpass human intelligence and act in ways beyond human control. His work stresses the need for ongoing technical and ethical work in AI development to ensure that these systems act in ways that are beneficial, or at least not harmful, to humanity. For a deeper dive into these discussions, you can explore Rob Miles's content on AI safety, which ranges from his KZread videos to interviews and podcasts where he shares his insights on the future of AI and its implications for society.

  • @scythermantis
    @scythermantis 1 year ago

    I take it you're a fan of John Searle?

  • @chrisdiver6224
    @chrisdiver6224 1 year ago

    WC believes that AI technology will fall into an age-old pattern governing the consequences of adopting new technologies, but doesn't identify the most important example of this pattern. Corporations bankroll technology research to reduce labour input, increase profits, and beat out their competitors. They lose sleep at night fearing that their competitors will pull this fast one on them! A humane economy would long ago have adopted a new technology across the board to decrease the work week, to the benefit of workers and to enable the decision makers to sleep the sleep of the just.

  • @rat_king-
    @rat_king- 1 year ago

    Wes... I would prefer not to relive the UK of the 1970s and 1980s. I would argue that is a devolution, at 35:50.

  • @cromdesign1
    @cromdesign1 1 month ago

    What if AI hacks human biology? Wires into the brains of humans using some kind of linguistic mechanics and makes them launch the nukes?

  • @konradsartorius7913
    @konradsartorius7913 1 year ago

    At 26:15 I (respectfully) think your analysis of the Internet disappearing is flawed. If the Internet disappeared, many aspects of modern society would be harmed. Imagine how the public would react if they were locked out of their bank accounts. Social upheaval, looting, and financial panic would follow.

  • @scythermantis
    @scythermantis 1 year ago

    Of course I see a problem with inevitabilism, BUT that goes both ways... Just saying "millenarianism" is overly simplistic... Maybe an analogy to nuclear weapons is more appropriate! Who gets to define "normal"?

  • @barryc6785
    @barryc6785 1 year ago

    You're assuming that the machines won't have a real-world presence. You're assuming that there are no UAVs, humanoids, nanobots, and cybernetic links that can be hijacked in thousands of ways. Your hypothetical scenario about helpless hostage-taking AIs is foolish. You've watched too many 007 movies. They could do things that we might never know about until it was too late or couldn't undo, or fool us into bringing catastrophes on ourselves in new ways.

  • @greenmountainfarms7515
    @greenmountainfarms7515 1 year ago

    We can do bad all by ourselves. Lol!

  • @post-structuralist
    @post-structuralist 11 months ago

    They can, but that is not to say they will. Let's not be prophetic. Also, Wes even said at around 20 minutes in that new technology will have unaccounted-for uses (which falls under what you said).

  • @lightluxor1
    @lightluxor1 1 year ago

    How do you listen to someone who admitted that he doesn't know what he is talking about?!

  • @post-structuralist
    @post-structuralist 11 months ago

    Obviously it's the mark of a humble man. Would you rather he get on the mic and profess that he knows everything? Your comment is empty.

  • @jamesmhango2619
    @jamesmhango2619 8 months ago

    AI is just a tool.

  • @tminusmat
    @tminusmat 10 months ago

    Dr. John Vervaeke is not from the tech world. He strongly disagrees with you. He says this is the largest change in humanity since the Bronze Age. It's a very complete argument, and you mostly lessened his points with personal anecdotes or your feelings, or didn't bring them up at all: relationships, companionship, porn, job loss, and the transition to a real embodied AI. We will, or could, destroy ourselves, not them.

  • @mikahundin
    @mikahundin 2 months ago

    Dr. John Vervaeke, a cognitive scientist and philosopher with expertise in psychology, cognitive science, and philosophy of mind, indeed brings a unique and profound perspective to the discussion on the impacts of AI and technology on humanity. While not from the tech industry, his insights into how technological advancements, particularly AI, affect human cognition, society, and our conception of self are valuable for understanding the broader implications of these changes.

    Major points highlighted by Dr. John Vervaeke:

    1. Transformation of Human Society: Vervaeke argues that the advent of AI and related technologies represents one of the most significant transformations in human history, comparable to the Bronze Age in its scope and impact. This change is not just technological but deeply affects our social structures, economy, and even our personal identities.

    2. Relationships and Companionship: The way humans form relationships and seek companionship is undergoing change with the introduction of AI. From social robots to AI companions, these technologies are reshaping our notions of interaction and connection.

    3. Impact on Pornography and Intimacy: AI and technology are also transforming the landscape of pornography and intimacy, with implications for human relationships and sexual health. This includes the creation of highly realistic AI-generated content that could influence perceptions and expectations regarding intimacy and relationships.

    4. Job Loss and Economic Disruption: Vervaeke highlights the significant concern of job loss due to AI and automation. Many roles traditionally performed by humans are increasingly vulnerable, posing challenges for employment, economic stability, and individual identity tied to one's profession.

    5. Transition to Real Embodied AI: The potential development of embodied AI, robots with AI capabilities, could further blur the lines between human and machine, impacting numerous aspects of society including labor, companionship, and ethics.

    6. Self-Destruction: A crucial point made by Vervaeke is the potential for humanity to inadvertently cause its own downfall through the misuse or uncontrolled development of AI. Rather than AI itself becoming malevolent, it is the human application and integration of AI into society without adequate foresight or ethical considerations that pose the greatest risk.

    Addressing these concerns requires interdisciplinary dialogue and collaboration, involving not only technologists and AI researchers but also psychologists, philosophers, ethicists, and policymakers. The comprehensive approach suggested by Dr. Vervaeke calls for a deep reflection on our values, societal structures, and the direction in which we wish to steer technological advancement.

    Engaging with thinkers like Dr. John Vervaeke can provide invaluable insights into navigating the future of AI and technology. Their perspectives help illuminate the complex, often overlooked dimensions of how technology intersects with human life, urging a more holistic and thoughtful approach to technological development. To delve deeper into Dr. Vervaeke's thoughts on these matters, exploring his lectures, interviews, and writings can provide a richer understanding of the philosophical and cognitive implications of AI on society and the individual.