May 14, 2025

Apocalypse AI: CloudWalk's AI meetup

Join the circle on May 21 at CloudWalk's HQ in São Paulo

CW's AI Team

Research and Development

“And behold, a pale horse: and the name of him that sat on it was Death, and Hell followed with him.” (Rev. 6:8)

As we ponder the words of Revelation, perhaps our beasts of silicon are the new four horsemen.

We invite you to dream or dread the end of times together: the Apocalypse AI.

On May 21st at 7 PM, join us at CloudWalk's office in São Paulo for an informal gathering: no lectures, no specialists, just people in a circle contemplating the fascinating, fearful, and fantastic possibilities of an AI-driven apocalypse. Secure your place here.

We might ask: will it be the machines that perish—starved of data, dried of compute—or will it be us, undone by mass unemployment, environmental collapse, rogue weapons systems, and social orders strained past repair? Will the AI Apocalypse mean the end of artificial intelligence or the end of human agency? Or perhaps the end of everything, as the ancient battle between good and evil is reloaded in silicon form. Is this our Final Judgment, coded not in scripture, but in source?

Below you'll find five intriguing texts, followed by a few interviews and talks, to spark inspiration. Bring your own thoughts, dreams, and fears; all visions of the future are welcome:

Superintelligence: Paths, Dangers, Strategies – Nick Bostrom (book, 2014)

Seminal work by Oxford philosopher Nick Bostrom, exploring scenarios of artificial superintelligence and existential risk. Bostrom argues that an AI vastly surpassing human intelligence could become uncontrollable and even "take over" the planet, eliminating humanity in pursuit of misaligned objectives (wired.com). The influential bestseller mainstreamed the previously fringe idea that advanced AI could turn against humanity and "delete" us (wired.com). Superintelligence consolidated concerns that AI might be humanity's "last invention," advocating for alignment research to prevent a potential technological apocalypse. Multiple interviews with Nick Bostrom are also available on YouTube, such as this one.

“Pausing AI Developments Isn’t Enough. We Need to Shut it All Down” – Eliezer Yudkowsky (March 29, 2023)

Opinion piece published in TIME, where Yudkowsky, a pioneering AI safety researcher, urgently advocates for completely halting advanced AI development. He argues that creating superhuman intelligence under current conditions would likely result in human extinction—“the most likely result of building a superhumanly smart AI… is that literally everyone on Earth will die” (time.com). Yudkowsky calls the situation a global emergency, criticizing even a moderate open letter that called only for a six-month pause. This alarmist essay has become a prominent manifesto for those fearing an “AI apocalypse,” demanding drastic measures to prevent it.

“The ‘Don’t Look Up’ Thinking That Could Doom Us With AI” – Max Tegmark (April 25, 2023)

Opinion essay by physicist Max Tegmark in TIME, comparing societal complacency toward the AI threat to the film Don’t Look Up. Tegmark cites a survey indicating half of AI researchers believe there's at least a 10% chance AI could cause human extinction (time.com). Nevertheless, he notes, the predominant societal reaction is denial or ridicule of imminent danger, rather than preventive action (time.com). Tegmark—co-founder of the Future of Life Institute—warns that ignoring scientific warnings about uncontrolled AI may “doom us,” urging the adoption of safety principles and voluntary halts in the development of superintelligent AI.

“Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk” – Interview with Yann LeCun (February 13, 2024)

In this TIME interview, Yann LeCun, Meta's chief AI scientist and Turing Award winner, takes a skeptical stance on the AI apocalypse. LeCun calls the idea that superintelligent AI would inherently want to dominate or exterminate humanity "preposterous" (time.com). He argues that intelligence does not imply an ambition for power ("The first fallacy is that because a system is intelligent, it wants to take control. That's just completely false" time.com), asserting that we can program beneficial goals into AIs. LeCun contends that real AI risks are exaggerated and comparable to other manageable dangers, and believes there will be ample time to implement safeguards gradually as AI advances, thus countering apocalyptic views.

“The Supposed Existential Threat of AI to Humanity” – Oren Etzioni & Rich Chen (March 29, 2024)

Opinion article on Medium challenging catastrophic predictions and providing a skeptical voice in the debate. Authored by former Allen Institute for AI CEO Oren Etzioni and researcher Rich Chen, the essay argues that speculation about extinction from AI diverts attention from more concrete and immediate challenges, such as job losses and misinformation (medium.com). The authors dismantle two major "doomer" arguments, exposing logical flaws such as the misuse of "infinite utility" to justify extreme fears. For instance, they contrast Bostrom's apocalyptic "black ball" scenario with the idea of a "white ball," in which advanced AI actually serves as salvation from other threats to humanity (medium.com). Ultimately, the authors argue that pursuing responsible AI advances offers greater benefits than paralysis born of fear of an improbable apocalypse.

Full Interview: "Godfather of AI" Shares Predictions and Warnings

In this comprehensive interview, Geoffrey Hinton, often referred to as the "Godfather of AI," discusses his decision to leave Google and his concerns about the rapid advancement of artificial intelligence. Hinton delves into the potential risks associated with AI development, including the possibility of AI surpassing human intelligence and the ethical implications that come with it. He emphasizes the need for global collaboration to ensure AI technologies are developed responsibly and safely.

AI and the Future of Humanity – Yuval Noah Harari at the Frontiers Forum

In this thought-provoking lecture, historian and philosopher Yuval Noah Harari explores the profound implications of artificial intelligence on the future of humanity. Harari delves into how AI could reshape societies, economies, and our very understanding of what it means to be human. He emphasizes the urgent need for global cooperation to ensure that technological advancements align with human values and ethical considerations.

2027 AGI, China/US Super-Intelligence Race, & The Return of History

In this wide-ranging conversation, Dwarkesh Patel engages with Leopold Aschenbrenner to explore the trajectory toward Artificial General Intelligence (AGI) by 2027. They delve into the scaling laws driving AI advances, the geopolitical dynamics between China and the U.S. in the race for superintelligence, and the historical patterns that may be reemerging in this new era of technological evolution.

Join the circle. Dream with us, or perhaps awaken to a new understanding.