Regulating AI Is Easier Than You Think

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. Because AI is a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template for how to do just that.

In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy while slowing nuclear weapons proliferation by controlling access to the raw materials—namely weapons-grade uranium and plutonium—that underpin them. The risk has been managed through international institutions such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world’s electricity, and only nine countries possess nuclear weapons.

Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips that are needed to train the world’s most advanced AI models. Business leaders and even the U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.

The most advanced AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers, where they churn through data for months to train the most capable AI models. These advanced chips are difficult to produce, the supply chain is tightly controlled, and large numbers of them are needed to train AI models.

Governments can establish a regulatory regime in which only authorized computing providers are able to acquire large numbers of advanced chips for their data centers, and only licensed, trusted AI companies are able to access the computing power needed to train the most capable—and most dangerous—AI models.

This may seem like a tall order. But only a handful of nations are needed to put this governance regime in place. The specialized computer chips used to train the most advanced AI models are made only in Taiwan. They depend on critical technology from three countries—Japan, the Netherlands, and the U.S. In some cases, a single company holds a monopoly on a key element of the chip production supply chain: the Dutch company ASML is the world’s only producer of the extreme ultraviolet lithography machines used to make the most cutting-edge chips.

Governments are already taking steps to govern these high-tech chips. The U.S., Japan, and the Netherlands have placed export controls on their chip-making equipment, restricting its sale to China. And the U.S. government has prohibited the sale of the most advanced chips—which are made using U.S. technology—to China. It has also proposed requirements for cloud computing providers to know who their foreign customers are and to report when a foreign customer is training a large AI model that could be used for cyberattacks. And it has begun debating—but not yet put in place—restrictions on the most powerful trained AI models and how widely they can be shared. While some of these restrictions are about geopolitical competition with China, the same tools can be used to govern chips to prevent adversary nations, terrorists, or criminals from using the most powerful AI systems.

The U.S. can work with other nations to build on this foundation and put in place a structure that governs computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, model training, and the trained models that result from this production cycle.

Japan, the Netherlands, and the U.S. can help lead the creation of a global governance framework that permits these highly specialized chips to be sold only to countries that have established regulatory regimes for governing computing hardware. This would include tracking chips and keeping account of them, knowing who is using them, and ensuring that AI training and deployment are safe and secure.

But global governance of computing hardware can do more than simply keep AI out of the hands of bad actors—it can empower innovators around the world by bridging the divide between computing haves and have-nots. Because the computing requirements to train the most advanced AI models are so intense, the industry is moving toward an oligopoly. That kind of concentration of power is not good for society or for business.

Some AI companies have in turn begun publicly releasing their models. This is great for scientific innovation, and it helps level the playing field with Big Tech. But once the AI model is open source, it can be modified by anyone. Guardrails can be quickly stripped away.

The U.S. government has fortunately begun piloting national cloud computing resources as a public good for academics, small businesses, and startups. Powerful AI models could be made accessible through the national cloud, allowing trusted researchers and companies to use them without releasing the models on the internet to everyone, where they could be abused.  

Countries could even come together to build an international resource for global scientific cooperation on AI. Today, 23 nations participate in CERN, the international physics laboratory that operates the world’s most advanced particle accelerator. Nations should do the same for AI, creating a global computing resource that empowers scientists around the world to collaborate on AI safety.

AI’s potential is enormous. But to unlock AI’s benefits, society will also have to manage its risks. By controlling the physical inputs to AI, nations can securely govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.

HBO Sets Premiere Date for Hollywood Satire Series ‘The Franchise’ (TV News Roundup)

HBO has announced that its original Hollywood satire series “The Franchise” debuts on its linear channel and Max Oct. 6 at 10 p.m. ET. The premiere episode of “The Franchise” is directed by Oscar winner Sam Mendes. According to the official logline, “‘The Franchise’ follows the crew of an unloved franchise movie fighting for their place […]

How We Chose the TIME100 Most Influential People in AI 2024

As we were finishing this year’s TIME100 AI, I had two conversations, with two very different TIME100 AI honorees, that made clear the stakes of this technological transformation. Sundar Pichai, who joined Google in 2004 and became CEO of the world’s fourth most valuable company nine years ago, told me that introducing the company’s billions of users to artificial intelligence through Google’s products amounts to “one of the biggest improvements we’ve done in 20 years.” Speaking that same day, Meredith Whittaker, a former Google employee and critic of the company who, as the president of Signal, has become one of the world’s most influential advocates for privacy, expressed alarm that so much of the AI revolution depends on the infrastructure and decisions of only a handful of big players in tech.

Our purpose in creating the TIME100 AI is to put leaders like Pichai and Whittaker in dialogue and to open up their views to TIME’s readers. That is why we are excited to share with you the second edition of the TIME100 AI. We built this program in the spirit of the TIME100, the world’s most influential community. TIME’s knowledgeable editors and correspondents, led by Emma Barker and Ayesha Javed, interviewed their sources and consulted members of last year’s list to find the best new additions to our community of AI leaders. Ninety-one of the members of the 2024 list were not on last year’s, an indication of just how quickly this field is changing. They span dozens of companies, regions, and perspectives, including 15-year-old Francesca Mani, who advocates across the U.S. for protections for victims of deepfakes, and 77-year-old Andrew Yao, one of China’s most prominent computer scientists, who called last fall for an international regulatory body for AI.


Just two months after we launched last year’s list, we witnessed one of the most dramatic recent events in the business world, a moment that drew the world’s attention to the individuals leading AI. In November 2023, OpenAI’s board shocked the industry by firing CEO Sam Altman amid questions about his integrity. After his subsequent return to the company, Altman was recognized as TIME’s 2023 CEO of the Year. Since then, several top safety leaders have left OpenAI, raising concern over the lab’s—and the industry’s—pace of development. OpenAI has promised to refocus on increased caution, installing a new safety committee, which it has said will assess the company’s approach. Safety concerns animate many of the individuals recognized in this issue.

If the world of AI was dominated by the emergence of startup labs like OpenAI, Anthropic, and their competitors in 2023, this year, as critics and champions alike have noted, we’ve seen the outsize influence of a small number of tech giants. Without them, upstart AI companies would not have the funding and computing power—known as compute—they need to fuel their rapid growth.

This year’s list offers examples of the possibilities for AI when it moves out of the lab and into the world. Innovators including Zack Dvey-Aharon at AEYE Health and Figure’s Brett Adcock are showing the real-world potential for AI to improve how we live and work. Many industries, including media companies like TIME, are now partnering with leading AI companies to explore new business models and opportunities. The consequences of those moves will likely determine who appears on next year’s list.

Since launching the TIME100 AI last September, we’ve been able to gather members of this group together in San Francisco and Dubai. We look forward to convening this group again in San Francisco and London later this fall as we continue to grow this community.
