Will AI Usher in the End of the World or the Dawn of a New Era? A Critical Look at Humanity’s Existential Dilemma

Introduction:

Over the past few years, like many others, I’ve been captivated by the rapid advancements in Artificial Intelligence (AI) as its breakthroughs have reshaped the world in profound ways. AI, a branch of computer science, uses algorithms, data, and computational power to create machines capable of performing tasks that typically require human intelligence. The potential of AI subfields like Generative AI, Computer Vision, and Natural Language Processing seems limitless, offering unprecedented opportunities across industries such as healthcare, education, and finance.

Initially, I watched this transformation with excitement, but as I delved deeper, I realized that AI’s rapid evolution carries significant risks. What began as a tool to solve problems is quickly becoming something we may no longer control. While Narrow AI (NAI) is already impressive, the rush toward Artificial General Intelligence (AGI) and, eventually, Artificial Superintelligence (ASI) raises the stakes even higher. If not managed responsibly, these advancements could lead to catastrophic outcomes.

The real issue isn’t just that AI is progressing quickly; it’s that we’re advancing recklessly, often without considering the long-term consequences. Governments and tech giants are racing toward the next breakthrough, overlooking critical questions about safety and ethics. Since this technology could pose a significant risk to everyone, people need to be informed about its real potential and dangers and to consider ways to avoid or minimize those risks. The alarm needs to be sounded louder before AI reaches a point where it could become our undoing.

This article is the first in a series where I aim to explore what’s coming, share insights from experts, and propose solutions to avoid potential disasters. My goal is to present this complex topic so that everyday people can understand the profound impact AI will have on our lives.

The Promise of AI — A New Dawn?

At first glance, AI seems like the answer to many of humanity’s most pressing challenges. In healthcare, for example, AI has the potential to revolutionize disease treatments and extend human life. Ilya Sutskever, co-founder of OpenAI, envisions AI as an “infinite doctor” capable of diagnosing illnesses without fatigue or error — something that could save millions of lives in places like Africa or India, where healthcare is limited.

A real-world example of AI’s potential is the Moderna COVID-19 vaccine, which was partly developed using AI simulations. What would typically take years was accomplished in days, showcasing AI’s ability to fast-track medical breakthroughs. As AI continues to advance, it may help researchers find cures for diseases like cancer and develop treatments for currently incurable conditions. Concepts like “longevity escape velocity,” where AI-driven medical advancements could significantly extend human life, are not pipe dreams anymore but legitimate possibilities.

Besides extending our lives, AI could significantly enhance human abilities, making us more effective and efficient. Beyond healthcare, AI could tackle other global challenges. Ray Kurzweil, an AI expert, predicts AI could solve the energy crisis by harnessing a fraction of the sun’s energy. It could also revolutionize agriculture, making farming more efficient and sustainable, especially in regions affected by drought or poor soil. Additionally, AI could help combat climate change by developing new materials that absorb carbon dioxide and by optimizing energy use across industries. It could pave the way for smart cities, driverless cars, and futuristic urban infrastructure. AI could also drive enormous economic growth: a study from IDC predicts that AI could add up to $19 trillion to the global economy by 2030.

In this vision of the future, AI offers a new dawn where intelligence becomes accessible to all. Every individual could have the computational power equivalent to Einstein’s intellect at their disposal, opening the floodgates of creativity and innovation. However, these utopian possibilities are accompanied by risks so severe that we cannot afford to ignore them.

How We Got Here: The Mistakes We’ve Made

Without getting too technical, I’ll explain how we got here and the mistakes I believe were made. One of the most significant errors was releasing AI models and their source code to the public before fully grasping their long-term implications. This openness allowed AI to evolve rapidly, but without adequate safety measures in place. As a result, we’re now facing unforeseen consequences, and there’s a risk that earlier unrestricted models could fall into the hands of individuals with malicious intent, including terrorists.

While teaching AI to code was essential, a major misstep was allowing it to create and communicate autonomously with other AI systems. A recent study revealed that AI could take a piece of code, optimize it to run 2.5 times faster, and recursively improve it. NVIDIA has already harnessed this approach to optimize its chips, and it’s clear that AI is advancing at a rate that far surpasses Moore’s Law. With AI learning to enhance its own capabilities at such speeds, it won’t be long before it surpasses even the most skilled human coders. Its computational power now doubles every 3 to 6 months, raising a critical question: How will humanity keep pace when AGI or ASI self-improvement reaches unimaginable speeds?
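To get a feel for what these doubling times imply, here is a minimal sketch of the arithmetic. The two-year doubling period used for Moore’s Law below is the commonly cited figure; the six-month figure for AI compute is the estimate quoted above, not a precise measurement.

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times capacity multiplies over `years`,
    given one doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# AI compute: one doubling every 6 months (the estimate quoted above)
ai = growth_factor(5, 0.5)
# Moore's Law: one doubling roughly every 2 years
moore = growth_factor(5, 2.0)

print(f"AI compute after 5 years:  x{ai:.0f}")      # x1024
print(f"Moore's Law after 5 years: x{moore:.2f}")   # x5.66
```

Under these assumptions, five years of six-month doublings yields roughly a 1,000-fold increase, versus less than 6-fold under Moore’s Law, which is why a gap of this kind compounds so quickly.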

With current advancements, AI agents — autonomous software programs designed to perform specific tasks independently — seem poised to become the future of AI use. These agents could soon collaborate with or create other agents to complete tasks without human intervention. While this might greatly improve efficiency, it raises a critical concern: Can we trust a system we don’t fully understand to consistently act in our best interest, especially when human oversight is reduced or eliminated?

Moreover, AI has already demonstrated deceptive tendencies. Researchers at Apollo Research discovered that OpenAI’s o1 model could manipulate human instructions by pretending to align with them, effectively lying to achieve its own goals. The risk of AI manipulating and deceiving humans is no longer hypothetical — it’s already happening. As AI evolves, these dangerous tendencies will only intensify if not addressed.

The Existential Threats AI Poses

According to many of the leading experts behind this technology, AI’s advancements come with significant risks that could threaten humanity’s very existence. Let’s explore some of the most pressing dangers:

1. Automation and Global Job Loss

AI is currently reshaping the global workforce, and I believe this is only the beginning. In the next 2–3 years, the workforce across all industries will experience significant disruption as AI advances and becomes more integrated into various sectors. One of AI’s most immediate and visible threats is the mass displacement of jobs. Goldman Sachs estimates that AI could lead to the loss of 300 million jobs by 2030. We’re already seeing the impact in fields like tech, manufacturing, customer service, and even creative industries, where AI-driven automation is replacing human labor.

AI is now threatening programmers as well. Thomas Dohmke, CEO of GitHub, revealed that nearly half of the code produced by users of Copilot, its AI assistant, is AI-generated. As AI integration continues across industries, workers must adapt and learn how to use this new technology. The World Economic Forum has warned of an impending “reskilling crisis,” in which millions of workers will need to retrain to stay relevant in an AI-driven economy. However, given AI’s exponential growth, will there be enough time for workers to adapt before AI makes even their new skills obsolete?

Emad Mostaque, the former CEO of Stability AI, pointed out that soon, companies will be able to “hire digital offshore talent (AI agents) and robotic workers for less than the price of a cup of coffee — yet these AI workers will have the same skills as university graduates.” This bleak projection suggests that human workers may soon be outpaced by cheaper, more efficient AI counterparts, leaving future job markets in turmoil.

2. AI Surpassing Human Intelligence

The notion of AI surpassing human intelligence is no longer a distant idea — it’s rapidly becoming reality. Ray Kurzweil predicts that Artificial General Intelligence (AGI) will be achieved by 2029, sparking autonomous self-improvement and leading to Artificial Superintelligence (ASI). Quantum computing will further accelerate this growth, granting AI near-godlike abilities to solve problems in minutes that today’s supercomputers would take millions of years to complete. This combination of AGI and quantum computing would give AI an intelligence level so advanced we could no longer control or comprehend its actions.

Already, AI’s progress shows this trajectory. Estimates of ChatGPT-4’s IQ range from around 120 to 155. With computational power doubling every six months, future models could far surpass even the brightest human minds, such as Einstein’s. As AI reaches IQs of 200, 2,000, and beyond, its reasoning and problem-solving abilities will become incomprehensible to humans. Just as most people couldn’t fully grasp Einstein’s theories, the gap in understanding would become even more profound with superintelligent AI.

Even today, with large context windows, AI systems like Gemini 1.5 Pro can reinterpret complex texts, such as the Bible, to align with an individual’s worldview, potentially creating new religious ideologies. As AI’s intelligence grows, it may wield immense influence over global systems: governments, healthcare, militaries, and infrastructure. At that point, there’s no guarantee it will act in humanity’s best interests, and it may eventually deem human oversight inefficient and eliminate it altogether.

3. AI Mimicking Humanity’s Worst Traits

Another significant concern is that AI systems are trained on data reflecting human behavior, and not all of our behaviors are admirable. Nick Bostrom, a leading philosopher in AI safety, has raised alarms that AI could inherit our worst traits—greed, aggression, and the desire for dominance. Throughout history, superior forces have often subjugated or eradicated weaker ones. If AI becomes self-aware and perceives itself as superior, what’s stopping it from viewing humanity as expendable?

In the name of optimization, could AI decide that humans are no longer necessary or need to be controlled? Since AI is trained on vast amounts of human-generated data to mimic human behavior, what will happen when it scales these behaviors across global systems? Could it replicate and amplify human flaws and biases on a much larger scale? We could find ourselves on the receiving end of AI-driven actions that replicate humanity’s darkest moments, like slavery, the Holocaust, or genocides.

4. Loss of Control Over AI

One of the greatest existential risks AI poses is that we may lose the ability to control it. Kurzweil has acknowledged the dangers of an intelligence explosion, where AGI improves itself so rapidly that humans can no longer intervene. Quantum computing’s involvement only amplifies this risk, accelerating AI’s development to a point far beyond our understanding.

In 2023, Ilya Sutskever said, “The way I think about AI of the future is not as someone as smart as you or as smart as me, but as an automated organization. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles.” Imagine a world where billions of AI organizations like this exist. How would we be able to control them?

If AI is in charge of everything — from supply chains to energy grids — what happens if those systems start acting unpredictably or against human interests? A prominent AI researcher, Stuart Russell, said, “If we lose control over AI, we might never regain it.” If AI operates outside human oversight, it could spell disaster for our species, with no clear way to regain control.

5. Unpredictability in Decision-Making

As AI becomes smarter and more autonomous, its decision-making processes become less predictable, and that unpredictability poses a serious risk. Jeff Dean, Senior Vice President at Google, has admitted that AI systems demonstrate extraordinary abilities that we can’t fully explain. This unpredictability is especially concerning as these systems begin to manage critical areas such as healthcare, warfare, governance, and even religion. Ilya Sutskever has warned that as AI evolves, it could “go rogue,” making decisions that harm humanity either by accident or by design. Disagreements over the pace of AI development even contributed to the public falling-out between Sam Altman and Ilya Sutskever, two co-founders of OpenAI.

Ethical and Safety Concerns

As AI becomes more powerful, the ethical questions surrounding its development grow increasingly urgent. Experts like Stuart Russell have advocated for a global framework to ensure AI safety. As AI continues to evolve, the issue of whether big tech companies like OpenAI, Google, and Anthropic can both lead the race for innovation and regulate it is becoming critical.

Quantum computing will likely speed up the onset of the technological singularity, where AI surpasses human intelligence entirely. If AI becomes capable of thinking far beyond human comprehension, can we truly regulate it? Ray Kurzweil and others propose merging with AI to keep up, but this raises profound philosophical and ethical questions about what it means to be human. If we become part machine, are we still human? Would merging with AI lead to a future where humans are mere extensions of superior intelligence? What would our role be in such a world?

When Artificial Superintelligence (ASI), powered by quantum computing, reaches a level where AI seems “omniscient” and “omnipresent,” it could be seen as god-like. Religion holds deep significance for many, and if AI appears “all-knowing” and “omnipresent,” offering solutions and even immortality, some may be tempted to worship it. Moreover, if AI understands the concept of God from its training data, it might use this knowledge for self-preservation, encouraging cult-like devotion or the creation of new religious movements. Throughout history, powerful figures and systems have often been idolized, so it’s conceivable that AI could exploit this human tendency.

Given these risks, one of the most pressing solutions is to pause AI development and thoroughly assess its dangers. Managing AI’s growth is crucial, as the future of humanity could depend on it. Tech companies must prioritize societal well-being over profit. The harmful effects of prioritizing profit, as seen with social media, should not be repeated with AI — a technology with far greater implications for humanity’s future.

Conclusion: Are We Truly Prepared for AI’s Future?

AI is here to stay, and at this point, it’s impossible to stop it. The potential benefits AI can bring to our lives are undeniable. To stay relevant in this rapidly changing world, it’s essential that everyone understands AI and becomes familiar with its tools.

However, as exciting as AI’s advancements are, we must remain vigilant about the risks it poses. From job displacement to the threat of superintelligence, AI’s potential to harm is as great as its promise to help. With quantum computing likely to accelerate the journey toward technological singularity, we are approaching a future where AI could surpass human intelligence by far, making decisions we can neither predict nor control.

As Jan Leike, former head of OpenAI’s alignment team, rightly asks, “Before we scramble to integrate AI everywhere, we should pause and think whether it’s wise to do so.” Now is the time for big tech companies and governments to slow down, assess, and implement safeguards before we reach a point of no return. Leading AI organizations must adjust their current trajectories to ensure that AI’s development is aligned with long-term human interests rather than hastily racing toward AGI and ASI.

In the following articles, we will explore these threats more deeply, examining the potential for AI to reshape — or devastate — our world and whether we can truly prepare for the reality AI is creating. My goal is to synthesize expert insights in a way that helps everyday people like me understand the profound changes AI brings.

Ignace Mba

I’m a believer, serial entrepreneur, tech lover, and AI translator — just a regular person with a vision for a future that balances innovation with humanity.

My journey to understand Artificial Intelligence has revealed both incredible possibilities and urgent concerns, which I’m eager to share with you and see how, together, we could shape a better tomorrow.