I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks

If uncontrolled artificial general intelligence—or “God-like” AI—is looming on the horizon, we are now about halfway there. Every day, the clock ticks closer to a potential doomsday scenario.

That’s why I introduced the AI Safety Clock last month. My goal is simple: I want to make clear that the dangers of uncontrolled AGI are real and present. The Clock’s current reading—29 minutes to midnight—is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks. While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must stay alert and engaged.

This is not alarmism; it’s based on hard data. The AI Safety Clock tracks three essential factors: the growing sophistication of AI technologies, their increasing autonomy, and their integration with physical systems. 
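
For readers who want a concrete sense of how three factor ratings could roll up into a single reading, here is a purely illustrative sketch in Python. The Clock’s actual methodology is not described in this piece, so the factor scores, weights, and 60-minute dial below are invented assumptions for illustration, not the real calculation.

```python
# Purely illustrative: NOT the AI Safety Clock's actual methodology.
# Each factor gets a hypothetical score between 0.0 (no risk) and 1.0 (maximum risk).
FACTORS = {
    "sophistication": 0.6,        # invented rating of AI capability
    "autonomy": 0.4,              # invented rating of independent operation
    "physical_integration": 0.3,  # invented rating of coupling to physical systems
}

# Invented weights; for this example they simply need to sum to 1.0.
WEIGHTS = {"sophistication": 0.4, "autonomy": 0.3, "physical_integration": 0.3}

def minutes_to_midnight(factors: dict, weights: dict, dial: int = 60) -> int:
    """Map a weighted risk score in [0, 1] onto a 60-minute dial."""
    risk = sum(weights[name] * score for name, score in factors.items())
    return round(dial * (1 - risk))

print(minutes_to_midnight(FACTORS, WEIGHTS))  # prints 33 with these invented numbers
```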

We are seeing remarkable strides across these three factors. The biggest are happening in machine learning and neural networks, with AI now outperforming humans in specific areas like image and speech recognition, mastering complex games like Go, and even passing tests such as business school exams and Amazon coding interviews.

Read More: Nobody Knows How to Safety-Test AI

Despite these advances, most AI systems today still depend on human direction, as noted by the Stanford Institute for Human-Centered Artificial Intelligence. They are built to perform narrowly defined tasks, guided by the data and instructions we provide.

That said, some AI systems are already showing signs of limited independence. Autonomous vehicles make real-time decisions about navigation and safety, while recommendation algorithms on platforms like YouTube and Amazon suggest content and products without human intervention. But we’re not at the point of full autonomy—there are still major hurdles, from ensuring safety and ethical oversight to dealing with the unpredictability of AI systems in unstructured environments.

At this moment, AI remains largely under human control. It hasn’t yet fully integrated into the critical systems that keep our world running—energy grids, financial markets, or military weapons—in a way that allows it to operate autonomously. But make no mistake, we are heading in that direction. AI-driven technologies are already making gains, particularly in the military with systems like autonomous drones, and in civilian sectors, where AI helps optimize energy consumption and assists with financial trading.

Once AI gains access to more critical infrastructure, the risks multiply. Imagine AI deciding to cut off a city’s power supply, manipulate financial markets, or deploy military weapons, all with limited or no human oversight. It’s a future we cannot afford to let materialize.

But it’s not just the doomsday scenarios we should fear. The darker side of AI’s capabilities is already making itself known. AI-powered misinformation campaigns are distorting public discourse and destabilizing democracies. A notorious example is the 2016 U.S. presidential election, during which Russia’s Internet Research Agency used automated bots on social media platforms to spread divisive and misleading content.

Deepfakes are also becoming a serious problem. In 2022, we saw a chilling example when a deepfake video of Ukrainian President Volodymyr Zelensky emerged, falsely portraying him calling for surrender during the Russian invasion. The aim was clear: to erode morale and sow confusion. These threats are not theoretical—they are happening right now, and if we don’t act, they will only become more sophisticated and harder to stop.

While AI advances at lightning speed, regulation has lagged behind. That is especially true in the U.S., where efforts to implement AI safety laws have been fragmented at best. Regulation has often been left to the states, leading to a patchwork of laws with varying effectiveness. There’s no cohesive national framework to govern AI development and deployment. California Governor Gavin Newsom’s recent decision to veto an AI safety bill, fearing it would hinder innovation and push tech companies elsewhere, only highlights how far behind policy is.

Read More: Regulating AI Is Easier Than You Think

We need a coordinated, global approach to AI regulation—an international body to monitor AGI development, similar to the International Atomic Energy Agency for nuclear technology. AI, much like nuclear power, is a borderless technology. If even one country develops AGI without the proper safeguards, the consequences could ripple across the world. We cannot let gaps in regulation expose the entire planet to catastrophic risks. This is where international cooperation becomes crucial. Without global agreements that set clear boundaries and ensure the safe development of AI, we risk an arms race toward disaster.

At the same time, we can’t turn a blind eye to the responsibilities of companies like Google, Microsoft, and OpenAI—firms at the forefront of AI development. Increasingly, there are concerns that the race for dominance in AI, driven by intense competition and commercial pressures, could overshadow the long-term risks. OpenAI has recently made headlines by shifting toward a for-profit structure.

Artificial intelligence pioneer Geoffrey Hinton’s warning about the race between Google and Microsoft was clear: “I don’t think they should scale this up more until they have understood whether they can control it.”

Part of the solution lies in building fail-safes into AI systems: “kill-switches,” or backdoors that would allow humans to intervene if an AI system starts behaving unpredictably. The California AI safety bill that Newsom vetoed included provisions for this kind of safeguard. Such mechanisms need to be built into AI from the start, not added as an afterthought.
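
As a rough illustration of what such a fail-safe might look like at the software level, here is a minimal Python sketch of a human-operated kill switch wrapped around a hypothetical autonomous loop. The agent class and its methods are invented for the example; real safeguards would need to be far more robust than this.

```python
import threading

class KillSwitch:
    """A flag a human operator can trip to halt an autonomous system."""
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        """Called by a human operator to stop the system."""
        self._tripped.set()

    def is_tripped(self) -> bool:
        return self._tripped.is_set()

class DemoAgent:
    """Toy stand-in for an autonomous system; invented for illustration only."""
    def __init__(self, steps: int = 3):
        self.steps = steps

    def propose_action(self):
        if self.steps == 0:
            return None
        self.steps -= 1
        return "act"

    def apply(self, action):
        print("applying:", action)

def run_agent(agent, kill_switch: KillSwitch):
    """Run the agent's loop, checking the kill switch before every action."""
    while not kill_switch.is_tripped():
        action = agent.propose_action()
        if action is None:
            break
        agent.apply(action)

switch = KillSwitch()
run_agent(DemoAgent(), switch)  # an operator could call switch.trip() at any time
```

The design point is simply that the check happens before every action the system takes, rather than being bolted on after deployment.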

There’s no denying the risks are real. We are on the brink of sharing our planet with machines that could match or even surpass human intelligence—whether that happens in one year or ten. But we are not helpless. The opportunity to guide AI development in the right direction is still very much within our grasp. We can secure a future where AI is a force for good.

But the clock is ticking.
