Has AI Progress Really Slowed Down?

For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter as long as they kept getting bigger. This wasn’t merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements, regardless of whether the system was designed to recognize images, recognize speech, or generate language. Noticing the same trend, OpenAI in 2020 coined the term “scaling laws,” which has since become a touchstone of the industry.
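
The “mathematically predictable improvements” refer to empirical power-law fits: plotted on log-log axes, a model’s error falls roughly along a straight line as compute and data grow. As a rough illustration only (the constants below are placeholders, not coefficients from Baidu’s or OpenAI’s papers), the shape of such a curve can be sketched in a few lines of Python:

```python
# A minimal sketch of a scaling-law curve: loss is assumed to fall off as a
# power of training compute, L(C) = a * C**(-b) + L_min. All constants are
# illustrative placeholders, not values from any published paper.
a, b, irreducible_loss = 5.0, 0.05, 1.7

def predicted_loss(compute_flops: float) -> float:
    """Predicted training loss for a given compute budget, in FLOPs."""
    return a * compute_flops ** (-b) + irreducible_loss

for flops in (1e21, 1e23, 1e25):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.2f}")
```

Each additional order of magnitude of compute buys a smaller absolute improvement, which is why returns can feel diminishing even while the underlying law continues to hold.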

This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today’s articulate chatbots.

But now, that bigger-is-better gospel is being called into question. 

Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same “intelligence improvements.” 

What are tech companies saying?

Still, many leading AI companies seem confident that progress is marching full steam ahead. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said “we haven’t seen any signs of deviations from scaling laws.” OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google’s Gemini model took GPT-4o’s top spot on a popular AI-performance leaderboard, the company’s CEO, Sundar Pichai, posted to X, saying “more to come.”

Read more: The Researcher Trying to Glimpse the Future of AI

Recent releases paint a somewhat mixed picture. Anthropic has updated its medium-sized model, Sonnet, twice since its release in March, making it more capable than the company’s largest model, Opus, which has not received such updates. In June, the company said Opus would be updated “later this year,” but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company’s larger Gemini Ultra model has yet to receive an update. OpenAI’s recently released o1-preview model outperforms GPT-4o on several benchmarks, but falls short on others. o1-preview was reportedly called “GPT-4o with reasoning” internally, suggesting the underlying model is similar in scale to GPT-4.

Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, “we’ve failed deeply as a company,” Amodei said last week, offering a glimpse of the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever’s new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.

“They had these things they thought were mathematical laws, and they’re making predictions relative to those mathematical laws, and the systems are not meeting them,” says Gary Marcus, a leading voice on AI and author of several books, including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally “hit a wall,” something he has warned could happen since 2022. “I didn’t know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck,” he says.

Have we run out of data?

A slowdown could be a reflection of the limits of current deep learning techniques, or simply that “there’s not enough fresh data anymore,” Marcus says. It’s a hypothesis that has gained ground among some of those following AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to how people are more likely to misinterpret your intentions over text message than in person as an example of the limitations of text data. “I think it’s like that with language models,” she says.

The lack of data is particularly acute in certain domains like reasoning and mathematics, where we “just don’t have that much high quality data,” says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn’t mean scaling is likely to stop—just that scaling alone might be insufficient. “At every order of magnitude scale up, different innovations have to be found,” he says, noting that it does not mean AI progress will slow overall. 

Read more: Is AI About to Run Out of Data? The History of Oil Says No

It’s not the first time critics have pronounced scaling dead. “At every stage of scaling, there are always arguments,” Amodei said last week. “The latest one we have today is, ‘We’re going to run out of data, or the data isn’t high quality enough, or models can’t reason.’ … I’ve seen the story happen enough times to really believe that probably the scaling is going to continue,” he said. Reflecting on OpenAI’s early days on Y Combinator’s podcast, company CEO Sam Altman partially credited the company’s success to a “religious level of belief” in scaling, a concept he says was considered “heretical” at the time. In response to a recent post on X from Marcus saying his predictions of diminishing returns were right, Altman posted, “there is no wall.”

There could be another reason we are hearing reports of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with a sense that expectations had become extremely high. “They expected AI was going to be able to already write a PhD thesis,” he says. “Maybe it feels a bit… anticlimactic.”

A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. “We tend to forget that GPT-3 to GPT-4 was like 100x scale in compute,” Sevilla says. “If you want to do something like 100 times bigger than GPT-4, you’re gonna need up to a million GPUs.” That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk’s 100,000-GPU supercomputer in Memphis—the largest of its kind—which was reportedly built from start to finish in three months.
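
Sevilla’s figure is a back-of-envelope extrapolation rather than a precise forecast. A sketch of that kind of arithmetic might look like the following, where every constant is an assumption chosen for illustration (GPT-4’s training compute, per-GPU throughput, and run length are not disclosed figures):

```python
# Back-of-envelope estimate: how many GPUs would a training run 100x the size
# of GPT-4 require? Every constant below is an assumption for illustration.
assumed_gpt4_flops = 2e25         # assumed order-of-magnitude training compute
scale_factor = 100                # "100 times bigger than GPT-4"
sustained_flops_per_gpu = 5e14    # assumed effective throughput per GPU
run_seconds = 90 * 24 * 3600      # assumed ~3-month training run

target_flops = assumed_gpt4_flops * scale_factor
gpus_needed = target_flops / (sustained_flops_per_gpu * run_seconds)
print(f"~{gpus_needed:,.0f} GPUs")  # lands in the hundreds of thousands
```

Under these assumptions the answer comes out in the hundreds of thousands of GPUs; with slower chips or a shorter run it climbs toward the million Sevilla describes.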

In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI’s o1-preview, which outperforms previous models on reasoning problems by being allowed more time to think, has been heralded as one such example. “This is something we already knew was possible,” Sevilla says, gesturing to an Epoch AI report published in July 2023.

Read more: Elon Musk’s New AI Data Center Raises Alarms Over Pollution

Policy and geopolitical implications

Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall Street. The perceived speed of technological advancement following GPT-4’s release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It’s an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.

Much of the U.S.’s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden’s sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government about models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, on the assumption that it would target future, larger models. The same assumption underpins export restrictions (restrictions on the sale of AI chips and technologies to certain countries) designed to limit China’s access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may do less to slow China’s AI progress.

“The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. “stalls at the frontier,” he says, we could see a national push to drive breakthroughs in AI. He says a slip in the U.S.’s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.

Whether we’re seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. “It’s unclear to me that a few months is a substantial enough reference point,” Singer says. “You could hit a plateau and then hit extremely rapid gains.”

What Donald Trump’s Win Means For AI

When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different.

AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, “It is possible that we will have superintelligence in a few thousand days,” but also noted that “it may take longer.” Meanwhile, Meta CEO Mark Zuckerberg sees the arrival of these systems as more of a gradual process rather than a single moment.

Either way, such advances could have far-reaching implications for national security, the economy, and the global balance of power.

Read More: When Might AI Outsmart Us? It Depends Who You Ask

Trump’s own pronouncements on AI have fluctuated between awe and apprehension. In a June interview on Logan Paul’s Impaulsive podcast, he described AI as a “superpower” and called its capabilities “alarming.” And like many in Washington, he views the technology through the lens of competition with China, which he sees as the “primary threat” in the race to build advanced AI.

Yet even his closest allies are divided on how to govern the technology: Musk has long voiced concerns about AI’s existential risks, while J.D. Vance, Trump’s Vice President, sees such warnings from industry as a ploy to usher in regulations that would “entrench the tech incumbents.” These divisions among Trump’s confidants hint at the competing pressures that will shape AI policy during Trump’s second term.

Undoing Biden’s AI legacy

Trump’s first major AI policy move will likely be to repeal President Joe Biden’s Executive Order on AI. The sweeping order, signed in October 2023, sought to address threats the technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services.

Trump promised to repeal the Executive Order on the campaign trail in December 2023, and this position was reaffirmed in the Republican Party platform in July, which criticized the executive order for hindering innovation and imposing “radical leftwing ideas” on the technology’s development.

Read more: Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried

Sections of the Executive Order which focus on racial discrimination or inequality are “not as much Trump’s style,” says Dan Hendrycks, executive and research director of the Center for AI Safety. While experts have criticized any rollback of bias protections, Hendrycks says the Trump Administration may preserve other aspects of Biden’s approach. “I think there’s stuff in [the Executive Order] that’s very bipartisan, and then there’s some other stuff that’s more specifically Democrat-flavored,” Hendrycks says.

“It would not surprise me if a Trump executive order on AI maintained or even expanded on some of the core national security provisions within the Biden Executive Order, building on what the Department of Homeland Security has done for evaluating cybersecurity, biological, and radiological risks associated with AI,” says Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.

The fate of the U.S. AI Safety Institute (AISI), an institution created last November by the Biden Administration to lead the government’s efforts on AI safety, also remains uncertain. In August, the AISI signed agreements with OpenAI and Anthropic to formally collaborate on AI safety research, and on the testing and evaluation of new models. “Almost certainly, the AI Safety Institute is viewed as an inhibitor to innovation, which doesn’t necessarily align with the rest of what appears to be Trump’s tech and AI agenda,” says Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. But Hammond says that while some fringe voices would move to shutter the institute, “most Republicans are supportive of the AISI. They see it as an extension of our leadership in AI.”

Read more: What Trump’s Win Means for Crypto

Congress is already working on protecting the AISI. In October, a broad coalition of companies, universities, and civil society groups—including OpenAI, Lockheed Martin, Carnegie Mellon University, and the nonprofit Encode Justice—signed a letter calling on key figures in Congress to urgently establish a legislative basis for the AISI. Efforts are underway in both the Senate and the House of Representatives, and both reportedly have “pretty wide bipartisan support,” says Hamza Chaudhry, U.S. policy specialist at the nonprofit Future of Life Institute.

America-first AI and the race against China

Trump’s previous comments suggest that maintaining the U.S.’s lead in AI development will be a key focus for his Administration. “We have to be at the forefront,” he said on the Impaulsive podcast in June. “We have to take the lead over China.” Trump also framed environmental concerns as potential obstacles, arguing they could “hold us back” in what he views as the race against China.

Trump’s AI policy could include rolling back regulations to accelerate infrastructure development, says Dean Ball, a research fellow at George Mason University. “There’s the data centers that are going to have to be built. The energy to power those data centers is going to be immense. I think even bigger than that: chip production,” he says. “We’re going to need a lot more chips.” And while Trump’s campaign has at times attacked the CHIPS Act, which provides incentives for chipmakers manufacturing in the U.S., some analysts believe he is unlikely to repeal it.

Read more: What Donald Trump’s Win Means for the Economy

Chip export restrictions are likely to remain a key lever in U.S. AI policy. Building on measures he initiated during his first term—which were later expanded by Biden—Trump may well strengthen controls that curb China’s access to advanced semiconductors. “It’s fair to say that the Biden Administration has been pretty tough on China, but I’m sure Trump wants to be seen as tougher,” McBride says. It is “quite likely” that Trump’s White House will “double down” on export controls in an effort to close gaps that have allowed China to access chips, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. “The overwhelming majority of people on both sides think that the export controls are important,” he says.

The rise of open-source AI presents new challenges. China has shown it can leverage U.S. systems, as demonstrated when Chinese researchers reportedly adapted an earlier version of Meta’s Llama model for military applications. That’s created a policy divide. “You’ve got people in the GOP that are really in favor of open-source,” Ball says. “And then you have people who are ‘China hawks’ and really want to forbid open-source at the frontier of AI.”

“My sense is that because a Trump platform has so much conviction in the importance and value of open-source I’d be surprised to see a movement towards restriction,” Singer says.

Despite his tough talk, Trump’s deal-making impulses could shape his policy towards China. “I think people misunderstand Trump as a China hawk. He doesn’t hate China,” Hammond says, describing Trump’s “transactional” view of international relations. In 2018, Trump lifted restrictions on Chinese technology company ZTE in exchange for a $1.3 billion fine and increased oversight. Singer sees similar possibilities for AI negotiations, particularly if Trump accepts concerns held by many experts about AI’s more extreme risks, such as the chance that humanity may lose control over future systems.

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

Trump’s coalition is divided over AI

Debates over how to govern AI reveal deep divisions within Trump’s coalition of supporters. Leading figures, including Vance, favor looser regulations of the technology. Vance has dismissed AI risk as an industry ploy to usher in new regulations that would “make it actually harder for new entrants to create the innovation that’s going to power the next generation of American growth.”

Silicon Valley billionaire Peter Thiel, who served on Trump’s 2016 transition team, recently cautioned against movements to regulate AI. Speaking at the Cambridge Union in May, he said any government with the authority to govern the technology would have a “global totalitarian character.” Marc Andreessen, the co-founder of prominent venture capital firm Andreessen Horowitz, gave $2.5 million to a pro-Trump super political action committee, and an additional $844,600 to Trump’s campaign and the Republican Party.

Yet a more safety-focused perspective has found other supporters in Trump’s orbit. Hammond, who advised on the AI policy committee for Project 2025, a proposed policy agenda led by right-wing think tank the Heritage Foundation and not officially endorsed by the Trump campaign, says that “within the people advising that project, [there was a] very clear focus on artificial general intelligence and catastrophic risks from AI.”

Musk, who has emerged as a prominent Trump campaign ally through both his donations and his promotion of Trump on his platform X (formerly Twitter), has long been concerned that AI could pose an existential threat to humanity. Recently, Musk said he believes there’s a 10% to 20% chance that AI “goes bad.” In August, Musk posted on X supporting the now-vetoed California AI safety bill that would have put guardrails on AI developers. Hendrycks, whose organization co-sponsored the California bill, and who serves as safety adviser at xAI, Musk’s AI company, says “If Elon is making suggestions on AI stuff, then I expect it to go well.” However, “there’s a lot of basic appointments and groundwork to do, which makes it a little harder to predict,” he says.

Trump has acknowledged some of the national security risks of AI. In June, he said he feared deepfakes of a U.S. President threatening a nuclear strike could prompt another state to respond, sparking a nuclear war. He also gestured to the idea that an AI system could “go rogue” and overpower humanity, but took care to distinguish this position from his personal view. However, for Trump, competition with China appears to remain the primary concern.

Read more: Trump Worries AI Deepfakes Could Trigger Nuclear War

But these priorities aren’t necessarily at odds, and AI safety regulation does not inherently entail ceding ground to China, Hendrycks says. He notes that safeguards against malicious use require minimal investment from developers. “You have to hire one person to spend, like, a month or two on engineering, and then you get your jailbreaking safeguards,” he says. But with these competing voices shaping Trump’s agenda, the direction of AI policy in his second term remains uncertain.

“In terms of which viewpoint President Trump and his team side towards, I think that is an open question, and that’s just something we’ll have to see,” says Chaudhry. “Now is a pivotal moment.”

How AI Is Being Used to Respond to Natural Disasters in Cities

The number of people living in urban areas has tripled in the last 50 years, meaning that when a major natural disaster such as an earthquake strikes a city, more lives are in danger. Meanwhile, the strength and frequency of extreme weather events have increased—a trend set to continue as the climate warms. That is spurring efforts around the world to develop a new generation of earthquake monitoring and climate forecasting systems to make detecting and responding to disasters quicker, cheaper, and more accurate than ever.

On Nov. 6, at the Barcelona Supercomputing Center in Spain, the Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management.

The initiative builds on nearly four years of groundwork laid by the International Telecommunication Union, the World Meteorological Organization (WMO), and the U.N. Environment Programme, which in early 2021 collectively convened a focus group to begin developing best practices for AI use in disaster management. These include enhancing data collection, improving forecasting, and streamlining communications.

Read more: Cities Are on the Front Line of the ‘Climate-Health Crisis.’ A New Report Provides a Framework for Tackling Its Effects

“What I find exciting is, for one type of hazard, there are so many different ways that AI can be applied, and this creates a lot of opportunities,” says Monique Kuglitsch, who chaired the focus group. Take hurricanes, for example: In 2023, researchers showed AI could help policymakers identify the best places to put traffic sensors to detect road blockages after tropical storms in Tallahassee, Fla. And in October, meteorologists used AI weather forecasting models to accurately predict that Hurricane Milton would make landfall near Siesta Key, Fla. AI is also being used to alert members of the public more efficiently. Last year, the National Weather Service announced a partnership with AI translation company Lilt to help deliver forecasts in Spanish and simplified Chinese, which it says can reduce the time to translate a hurricane warning from an hour to 10 minutes.

Besides helping communities prepare for disasters, AI is also being used to coordinate response efforts. Following both Hurricane Milton and Hurricane Ian, the non-profit GiveDirectly used Google’s machine learning models to analyze pre- and post-disaster satellite images to identify the worst-affected areas and prioritize cash grants accordingly. Last year, AI analysis of aerial images was deployed to aid response efforts in cities like Quelimane, Mozambique, after Cyclone Freddy, and Adıyaman, Turkey, after a 7.8-magnitude earthquake.

Read more: How Meteorologists Are Using AI to Forecast Hurricane Milton and Other Storms

Operating early warning systems is primarily a governmental responsibility, but AI climate modeling—and, to a lesser extent, earthquake detection—has become a burgeoning private industry. Start-up SeismicAI says it’s working with the civil protection agencies in the Mexican states of Guerrero and Jalisco to deploy an AI-enhanced network of sensors that would detect earthquakes in real time. Tech giants Google, Nvidia, and Huawei are partnering with European forecasters and say their AI-driven models can generate accurate medium-term forecasts thousands of times more quickly than traditional models, while being less computationally intensive. And in September, IBM partnered with NASA to release a general-purpose open-source model that can be used for various climate-modeling use cases and runs on a desktop.

AI advances

While machine learning techniques have been incorporated into weather forecasting models for many years, recent advances have allowed many new models to be built using AI from the ground up, improving the accuracy and speed of forecasting. Traditional models, which rely on complex physics-based equations to simulate interactions between water and air in the atmosphere and require supercomputers to run, can take hours to generate a single forecast. In contrast, AI weather models learn to spot patterns by training on decades of climate data, most of which was collected via satellites and ground-based sensors and shared through intergovernmental collaboration.

Both AI and physics-based forecasts work by dividing the world into a three-dimensional grid of boxes and then determining variables like temperature and wind speed for each box. But because AI models are more computationally efficient, they can create much finer-grained grids. For example, the European Centre for Medium-Range Weather Forecasts’ highest-resolution model breaks the world into 5.5-mile boxes, whereas forecasting startup Atmo offers models finer than one square mile. This bump in resolution can allow for more efficient allocation of resources during extreme weather events, which is particularly important for cities, says Johan Mathe, co-founder and CTO of the company, which earlier this year inked deals with the Philippines and the island nation of Tuvalu.
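
The computational cost of that resolution bump is easy to see: the number of surface grid boxes grows with the inverse square of the box edge length. A small sketch using the box sizes quoted above and an approximate figure for Earth’s surface area (and ignoring vertical levels and time steps, which multiply the cost further):

```python
import math

# How many boxes does it take to tile Earth's surface at a given resolution?
# Box sizes come from the text; Earth's surface area is approximate.
EARTH_SURFACE_SQ_MILES = 196.9e6

def surface_boxes(edge_miles: float) -> int:
    """Approximate number of grid boxes needed to cover Earth's surface."""
    return math.ceil(EARTH_SURFACE_SQ_MILES / edge_miles ** 2)

print(f"5.5-mile boxes: ~{surface_boxes(5.5):,}")  # roughly 6.5 million boxes
print(f"1-mile boxes:   ~{surface_boxes(1.0):,}")  # roughly 30x as many
```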

Limitations

AI-driven models are typically only as good as the data they are trained on, which can be a limiting factor in some places. “When you’re in a really high stakes situation, like a disaster, you need to be able to rely on the model output,” says Kuglitsch. Poorer regions—often on the front lines of climate-related disasters—typically have fewer and worse-maintained weather sensors, for example, creating gaps in meteorological data. AI systems trained on this skewed data can be less accurate in the places most vulnerable to disasters. And unlike physics-based models, which follow set rules, AI models increasingly operate as sophisticated ‘black boxes’ as they grow more complex, with the path from input to output becoming less transparent. The U.N. initiative’s focus is on developing guidelines for using AI responsibly. Kuglitsch says standards could, for example, encourage developers to disclose a model’s limitations or ensure systems work across regional boundaries.

The initiative will test its recommendations in the field by collaborating with the Mediterranean and pan-European forecast and Early Warning System Against natural hazards (MedEWSa), a project that spun out of the focus group. “We’re going to be applying the best practices from the focus group and getting a feedback loop going, to figure out which of the best practices are easiest to follow,” Kuglitsch says. One MedEWSa pilot project will explore using machine learning to predict the occurrence of wildfires in an area around Athens, Greece. Another will use AI to improve flooding and landslide warnings in the area surrounding Tbilisi, Georgia.

Read more: How the Cement Industry Is Creating Carbon-Negative Building Materials

Meanwhile, private companies like Tomorrow.io are seeking to plug these gaps by collecting their own data. The AI weather forecasting start-up has launched satellites with radar and other meteorological sensors to collect data from regions that lack ground-based sensors, which it combines with historical data to train its models. Tomorrow.io’s technology is being used by New England cities, including Boston, to help city officials decide when to salt the roads ahead of snowfall. It’s also used by Uber and Delta Airlines.

Another U.N. initiative, the Systematic Observations Financing Facility (SOFF), also aims to close the weather data gap by providing financing and technical assistance in poorer countries. Johan Stander, director of services for the WMO, one of SOFF’s partners, says the WMO is working with private AI developers including Google and Microsoft, but stresses the importance of not handing off too much responsibility to AI systems.

“You can’t go to a machine and say, ‘OK, you were wrong. Answer me, what’s going on?’ You still need somebody to take that ownership,” he says. He sees private companies’ role as “supporting the national met services, instead of trying to take them over.”

What Teenagers Really Think About AI

American teenagers believe addressing the potential risks of artificial intelligence should be a top priority for lawmakers, according to a new poll that provides the first in-depth look into young people’s concerns about the technology.

The poll, carried out by youth-led advocacy group the Center for Youth and AI and polling organization YouGov, and shared exclusively with TIME, reveals a level of concern that rivals long-standing issues like social inequality and climate change.

The poll of 1,017 U.S. teens aged 13 to 18 was carried out in late July and early August, and found that 80% of respondents believed it was “extremely” or “somewhat” important for lawmakers to address the risks posed by AI, falling just below healthcare access and affordability in terms of issues they said were a top priority. That surpassed social inequality (78%) and climate change (77%).

Although the sample size is fairly small, it gives an insight into how young people are thinking about technology, which has often been embedded in their lives from an early age. “I think our generation has a unique perspective,” says Saheb Gulati, 17, who co-founded the Center for Youth and AI with Jason Hausenloy, 19. “That’s not in spite of our age, but specifically because of it.” Because today’s teens have grown up using digital technology, Gulati says, they have confronted questions of its societal impacts more than older generations.

Read More: 5 Steps Parents Should Take to Help Kids Use AI Safely

There has been more research about how young people are using AI, for example to help, or cheat, with schoolwork, says Rachel Hanebutt, an assistant professor at Georgetown University’s Thrive Center who helped advise on the poll’s analysis. But “some of those can feel a little superficial and not as focused on what teens and young people think about AI and its role in their future, which I think is where this brings a lot of value.”

The findings show that nearly half of the respondents use ChatGPT or similar tools several times per week, aligning with another recent poll that suggests teens have embraced AI faster than their parents. But being early-adopters hasn’t translated into “full-throated optimism,” Hausenloy says.

Teens are at the heart of many debates over artificial intelligence, from the impact of social media algorithms to deepfake nudes. This week it emerged that a mother is suing Character.ai and Google after her son allegedly became obsessed with the chatbot before dying by suicide. Yet “ages 13 to 18 are not always represented in full political polls,” says Hanebutt. This research gives adults a better understanding of “what teens and young people think about AI and its role in their future,” rather than just how they’re using it, she says. She notes the need for future polling that explores how teenagers expect lawmakers to act on the issue.

Read More: Column: How AI-Powered Tech Can Harm Children

While the poll didn’t ask about specific policies, it does offer insight into the AI risks of concern to the greatest number of teens, with immediate threats topping the list. AI-generated misinformation worried the largest proportion of respondents at 59%, closely followed by deepfakes at 58%. However, the poll reveals that many young people are also concerned about the technology’s longer term trajectory, with 47% saying they are concerned about the potential for advanced autonomous AI to escape human control. Nearly two-thirds said they consider the implications of AI when planning their career.

Hausenloy says that the poll is just the first step in the Center for Youth and AI’s ambitions to ensure young people are “represented, prepared and protected” when it comes to AI.

The poll suggests that, despite concerns in other areas, young people are generally supportive of AI-generated creative works. More than half of respondents (57%) were in favor of AI-generated art, film, and music, while only 26% opposed it. Less than a third of teens were concerned about AI copyright violations.

On the question of befriending AI, respondents were divided, with 46% saying AI companionship is acceptable compared with 44% saying it’s unacceptable. On the other hand, most teens (68%) opposed romantic relationships with AI, compared to only 24% who find them acceptable.

Read more: AI-Human Romances Are Flourishing—And This Is Just the Beginning

“This is the first and most comprehensive view on youth attitudes on AI I have ever seen,” says Sneha Revanur, founder and president of Encode Justice, a youth-led, AI-focused civil-society group, which helped advise on the survey questions. Revanur was the youngest participant at a White House roundtable about AI back in July 2023, and more recently the youngest to participate in the 2024 World Economic Forum in Davos.

In the past, she says, Encode Justice spoke on behalf of its generation without hard numbers to back it up, but “we’ll be coming into future meetings with policymakers armed with this data, and armed with the fact that we do actually have a fair amount of young people who are thinking about these risks.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

She points to the California Senate Bill 1047—which would have required AI companies to implement safety measures to protect the public from potential harms from their technology—as a case where public concerns about the technology were overlooked. “In California, we just saw Governor Gavin Newsom veto a sweeping AI safety bill that was supported by a broad coalition, including our organization, Anthropic, Elon Musk, actors in Hollywood and labor unions,” Revanur says. “That was the first time that we saw this splintering in the narrative that the public doesn’t care about AI policy. And I think that this poll is actually just one more crack in that narrative.”

Why Sam Altman Is Leaving OpenAI’s Safety Committee

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.

As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. The committee also said it would explore more opportunities to collaborate with external organizations, like those used to evaluate o1, OpenAI’s recently released series of reasoning models, for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned—Altman was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team, resigned. The team, which focused on ensuring that AI systems remain under human control even if they surpass human-level intelligence, was disbanded following their departure. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The same month, OpenAI came under fire for asking departing employees to sign agreements that prevented them from criticizing the company or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)

Exclusive: Renowned Experts Pen Support for California’s Landmark AI Safety Bill

On August 7, a group of renowned professors co-authored a letter urging key lawmakers to support a California AI bill as it enters the final stages of the state’s legislative process. In a letter shared exclusively with TIME, Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell argue that the next generation of AI systems pose “severe risks” if “developed without sufficient care and oversight,” and describe the bill as the “bare minimum for effective regulation of this technology.”

The bill, titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by Senator Scott Wiener in February of this year. It requires AI companies training large-scale models to conduct rigorous safety testing for potentially dangerous capabilities and implement comprehensive safety measures to mitigate risks.

“There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” the four experts write.

The letter is addressed to the respective leaders of the legislative bodies the bill must pass through if it is to become law: Mike McGuire, the president pro tempore of California’s senate, where the bill passed in May; Robert Rivas, speaker of the state assembly, where the bill will face a vote later this month; and state Governor Gavin Newsom, who—if the bill passes in the assembly—must sign or veto the proposed legislation by the end of September.

With Congress gridlocked and Republicans pledging to reverse Biden’s AI executive order if elected in November, California—the world’s fifth-largest economy and home to many of the world’s leading AI developers—plays what the authors see as an “indispensable role” in regulating AI. If passed, the bill would apply to all companies operating in the state.

While polls suggest the bill is supported by a majority of Californians, it has been subject to harsh opposition from industry groups and tech investors, who claim it would stifle innovation, harm the open-source community, and “let China take the lead on AI development.” Venture capital firm Andreessen Horowitz has been particularly critical of the bill, setting up a website that urges citizens to write to the legislature in opposition. Others, such as startup incubator Y Combinator, Meta’s Chief AI Scientist Yann LeCun, and Stanford professor Fei-Fei Li (whose new $1 billion startup has received funding from Andreessen Horowitz) have also been vocal in their opposition.

The pushback has centered on provisions in the bill that would compel developers to provide reasonable assurances that an AI model will not pose an unreasonable risk of causing “critical harms,” such as aiding in the creation of weapons of mass destruction or causing severe damage to critical infrastructure. The bill would only apply to systems that both cost over $100 million to train and are trained using an amount of computing power above a specified threshold. These dual requirements mean the bill would likely only affect the largest AI developers. “No currently existing system would be classified,” Lennart Heim, a researcher at the RAND Corporation’s Technology and Security Policy Center, told TIME in June.
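
The “dual requirements” amount to a simple conjunction: a model is covered only if it crosses both the cost bar and the compute bar. A sketch of that logic, with the compute threshold left as a placeholder since the specific figure is not quoted here:

```python
# Dual-trigger coverage test as described above: a model falls under the bill
# only if BOTH thresholds are exceeded. COMPUTE_THRESHOLD_FLOPS is a
# placeholder for illustration, not the figure written into the bill.
COST_THRESHOLD_USD = 100_000_000
COMPUTE_THRESHOLD_FLOPS = 1e26  # placeholder value

def is_covered(training_cost_usd: float, training_flops: float) -> bool:
    """Return True if a model would be subject to the bill's requirements."""
    return (training_cost_usd > COST_THRESHOLD_USD
            and training_flops > COMPUTE_THRESHOLD_FLOPS)

# A hypothetical model that is expensive to train but falls below the compute
# bar would be exempt.
print(is_covered(training_cost_usd=150e6, training_flops=5e25))  # False
```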

“As some of the experts who understand these systems most, we can say confidently that these risks are probable and significant enough to make safety testing and common-sense precautions necessary,” the authors of the letter write. Bengio and Hinton, who have previously supported the bill, are both winners of the Turing Award and are often referred to as “godfathers of AI,” alongside Yann LeCun. Russell has written a textbook—Artificial Intelligence: A Modern Approach—that is widely considered the standard textbook on AI. And Lessig, a Professor of Law at Harvard, is broadly regarded as a founding figure of Internet law and a pioneer of the free culture movement, having founded Creative Commons and authored influential books on copyright and technology law. In addition to the risks noted above, they cite risks posed by autonomous AI agents that could act without human oversight among their concerns.

Read More: Yoshua Bengio Is on the 2024 TIME100 List

“I worry that technology companies will not solve these significant risks on their own while locked in their race for market share and profit maximization. That’s why we need some rules for those who are at the frontier of this race,” Bengio told TIME over email.  

The letter rejects the notion that the bill would hamper innovation, stating that as written, the bill only applies to the largest AI models; that large AI developers have already made voluntary commitments to undertake many of the safety measures outlined in the bill; and that similar regulations in Europe and China are in fact more restrictive than SB 1047. It also praises the bill for its “robust whistleblower protections” for AI lab employees who report safety concerns, which are increasingly seen as necessary given reports of reckless behavior on the part of some labs.

In an interview with Vox last month, Senator Wiener noted that the bill has already been amended in response to criticism from the open-source community. The current version exempts original developers from shutdown requirements once a model is no longer in their control, and limits their liability when others make significant modifications to their models, effectively treating significantly modified versions as new models. Despite this, some critics believe the bill would require open-source models to have a “kill switch.”

“Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation,” the letter says, noting that the bill does not have a licensing regime or require companies to receive permission from a government agency before training a model, and relies on self-assessments of risk. The authors further write: “It would be a historic mistake to strike out the basic measures of this bill.” 

Over email, Lessig adds: “Governor Newsom will have the opportunity to cement California as a national first-mover in regulating AI. Legislation in California would meet an urgent need. With a critical mass of the top AI firms based in California, there is no better place to take an early lead on regulating this emerging technology.”

What We Know About the New U.K. Government’s Approach to AI

When the U.K. hosted the world’s first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would “tip the balance in favor of humanity.” At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to share their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence. It was part of the Sunak-led Conservative government’s effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world’s first AI Safety Institute—a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up their own similar institutes, the U.K. institute boasts 10 times the funding of its American counterpart. 

Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his new government. His approach to AI has been described as potentially tougher than Sunak’s.  

Starmer appointed Peter Kyle as science and technology minister, giving the lawmaker oversight of the U.K.’s AI policy at a crucial moment, as governments around the world grapple with how to foster innovation and regulate the rapidly developing technology. Following the election result, Kyle told the BBC that “unlocking the benefits of artificial intelligence is personal,” saying the advanced medical scans now being developed could have helped detect his late mother’s lung cancer before it became fatal.

Alongside the potential benefits of AI, the Labour government will need to balance concerns from the public. An August poll of over 4,000 members of the British public conducted by the Centre for Data Ethics and Innovation found that 45% of respondents believed AI taking people’s jobs represented one of the biggest risks posed by the technology; 34% believed the loss of human creativity and problem-solving was one of the greatest risks.

Here’s what we know so far about Labour’s approach to artificial intelligence.

Regulating AI

One of the key issues for the Labour government to tackle will likely be how to regulate AI companies and AI-generated content. Under the previous Conservative-led administration, the Department for Science, Innovation and Technology (DSIT) held off on implementing rules, saying in a 2024 policy paper on AI regulation that “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI.” Labour has signaled a different approach, promising in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models,” suggesting a greater willingness to intervene in the rapidly evolving technology’s development.

Read More: U.S., U.K. Announce Partnership to Safety Test AI Models

Labour has also pledged to ban sexually explicit deepfakes. Unlike proposed legislation in the U.S., which would allow victims to sue those who create non-consensual deepfakes, Labour has considered a proposal by Labour Together, a think tank with close ties to the current Labour Party, to impose restrictions on developers by outlawing so-called nudification tools.

While AI developers have made agreements to share information with the AI Safety Institute on a voluntary basis, Kyle said in a February interview with the BBC that Labour would make that information-sharing agreement a “statutory code.”

Read More: To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

“We would compel, by law, those test data results to be released to the government,” Kyle said in the interview.

Timing regulation is a careful balancing act, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute.

“The art form is to be right on time with law. That means not too early, not too late,” she says. “The last thing that you want is a hastily thrown together policy that stifles innovation and does not protect human rights.”

Wachter says that striking the right balance on regulation will require the government to be in “constant conversation” with stakeholders, such as those within the tech industry, to ensure it has an inside view of what is happening at the cutting edge of AI development when formulating policy.

Kirsty Innes, director of technology policy at Labour Together, points to the U.K. Online Safety Act, which was signed into law last October, as a cautionary tale of regulation failing to keep pace with technology. The law, which aims to protect children from harmful content online, took six years from the initial proposal to finally being signed into law.

“During [those 6 years] people’s experiences online transformed radically. It doesn’t make sense for that to be your main way of responding to changes in society brought by technology,” she says. “You’ve got to be much quicker about it now.”

Read More: The 3 Most Important AI Policy Milestones of 2023

There may be lessons for the U.K. to learn from the E.U. AI Act, Europe’s comprehensive regulatory framework passed in March, which will come into force on August 1 and become fully applicable to AI developers in 2026. Innes says that mimicking the E.U. is not Labour’s endgame. The European law outlines a tiered risk classification for AI use cases, banning systems deemed to pose unacceptable risks, such as social scoring systems, while placing obligations on providers of high-risk applications like those used for critical infrastructure. Systems said to pose limited or minimal risk face fewer requirements. Additionally, it sets out rules for “general-purpose AI,” systems with a wide range of uses, like those underpinning chatbots such as OpenAI’s ChatGPT. General-purpose systems trained on large amounts of computing power—such as GPT-4—are said to pose “systemic risk,” and their developers will be required to perform risk assessments as well as track and report serious incidents.

“I think there is an opportunity for the U.K. to tread a nuanced middle ground somewhere between a very hands-off U.S. approach and a very regulatory heavy E.U. approach,” says Innes.

Read More: There’s an AI Lobbying Frenzy in Washington. Big Tech Is Dominating

In a bid to occupy that middle ground, Labour has pledged to create what it calls the Regulatory Innovation Office, a new government body that will aim to accelerate regulatory decisions.

“Part of the idea of the Regulatory Innovation Office is to help regulators develop the capacity that they need a bit quicker and to give them the kind of stimulus and the nudge to be more agile,” says Innes.

A ‘pro-innovation’ approach

In addition to helping the government respond more quickly to the fast-moving technology, Labour says the “pro-innovation” regulatory body will speed up approvals to help new technologies get licensed faster. The party said in its manifesto that it would implement AI into healthcare to “transform the speed and accuracy of diagnostic services, saving potentially thousands of lives.”

Healthcare is just one area where Kyle hopes to use AI. On July 8, he announced the revamp of the DSIT, which will bring on AI experts to explore ways to improve public services.

Meanwhile, former Labour Prime Minister Tony Blair has encouraged the new government to embrace AI to improve the country’s welfare system. A July 9 report by his think tank, the Tony Blair Institute for Global Change, concluded AI could save the U.K. Department for Work and Pensions more than $1 billion annually.

Blair has emphasized AI’s importance. “Leave aside the geopolitics, and war, and America and China, and all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other,” Blair said, speaking on the Dwarkesh Podcast in June.

Read more: How a New U.N. Advisory Group Wants to Transform AI Governance

Modernizing public services is part of Labour’s wider strategy to leverage AI to grow the U.K. tech sector. Other measures include making it easier to set up data centers in the U.K., creating a national data library to bring existing research programs together, and offering decade-long research and development funding cycles to support universities and start-ups.

Speaking to business and tech leaders in London last March, Kyle said he wanted to support “the next 10 DeepMinds to start up and scale up here within the U.K.” 

Workers’ rights

Artificial intelligence-powered tools can be used to monitor worker performance, such as grading call center-employees on how closely they stick to the script. Labour has committed to ensuring that new surveillance technologies won’t find their way into the workplace without consultation with workers. The party has also promised to “protect good jobs” but, beyond committing to engage with workers, has offered few details on how. 

Read More: As Employers Embrace AI, Workers Fret—and Seek Input

“That might sound broad brush, but actually a big failure of the last government’s approach was that the voice of the workforce was excluded from discussions,” says Nicola Smith, head of rights at the Trades Union Congress, a federation of trade unions.

While Starmer’s new government has a number of urgent matters to prioritize, from setting out its legislative plan for year one to dealing with overcrowded prisons, the way it handles AI could have far-reaching implications.

“I’m constantly saying to my own party, the Labour Party [that] ‘you’ve got to focus on this technology revolution. It’s not an afterthought,’” Blair said on the Dwarkesh Podcast in June. “It’s the single biggest thing that’s happening in the world today.”

Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried


On July 8, Republicans adopted a new party platform ahead of a possible second term for former President Donald Trump. Buried among the updated policy positions on abortion, immigration, and crime, the document contains a provision that has some artificial intelligence experts worried: it vows to scrap President Joe Biden’s executive order on AI.


“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the platform reads.

Biden’s executive order on AI, signed last October, sought to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services. It requires developers of the most powerful AI systems to share their safety test results with the U.S. government and calls on federal agencies to develop guidelines for the responsible use of AI in domains such as criminal justice and federal benefits programs.

Read More: Why Biden’s AI Executive Order Only Goes So Far

Carl Szabo, vice president of industry group NetChoice, which counts Google, Meta, and Amazon among its members, welcomes the possibility of the executive order’s repeal, saying, “It would be good for Americans and innovators.”

“Rather than enforcing existing rules that can be applied to AI tech, Biden’s Executive Order merely forces bureaucrats to create new, complex burdens on small businesses and innovators trying to enter the marketplace. Over-regulating like this risks derailing AI’s incredible potential for progress and ceding America’s technological edge to competitors like China,” said Szabo in a statement.

However, recent polling shared exclusively with TIME indicates that Americans on both sides of the political aisle are skeptical that the U.S. should avoid regulating AI in an effort to outcompete China. According to the poll conducted in late June by the AI Policy Institute (AIPI), 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”

Dan Hendrycks, director of the Center for AI Safety, says, “AI safety and risks to national security are bipartisan issues. Poll after poll shows Democrats and Republicans want AI safety legislation.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

The proposal to remove the guardrails put in place by Biden’s executive order runs counter to the public’s broad support for a measured approach to AI, and it has prompted concern among experts. Amba Kak, co-executive director of the AI Now Institute and former senior advisor on AI at the Federal Trade Commission, says Biden’s order was “one of the biggest achievements in the last decade in AI policy,” and that scrapping the order would “feel like going back to ground zero.” Kak says that Trump’s pledge to support AI development rooted in “human flourishing” is a subtle but pernicious departure from more established frameworks like human rights and civil liberties.

Ami Fields-Meyer, a former White House senior policy advisor on AI who worked on Biden’s executive order, says, “I think the Trump message on AI is, ‘You’re on your own,’” referring to how repealing the executive order would end provisions aimed at protecting people from bias or unfair decision-making from AI.

NetChoice and a number of think tanks and tech lobbyists have railed against the executive order since its introduction, arguing it could stifle innovation. In December, venture capitalist and prominent AI investor Ben Horowitz criticized efforts to regulate “math, FLOPs and R&D,” alluding to the compute thresholds set by Biden’s executive order. Horowitz said his firm would “support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.”

While Trump has previously accused tech companies like Google, Amazon, and Twitter of working against him, in June, speaking on Logan Paul’s podcast, Trump said that the “tech guys” in California gave him $12 million for his campaign. “They gave me a lot of money. They’ve never been into doing that,” Trump said.

The Trump campaign did not respond to a request for comment.

Even if Trump is re-elected and does repeal Biden’s executive order, some changes wouldn’t be felt right away. Most of the leading AI companies agreed to voluntarily share safety testing information with governments at an international summit on AI in Seoul last May, meaning that removing the requirements to share information under the executive order may not have an immediate effect on national security. But Fields-Meyer says, “If the Trump campaign believes that the rigorous national security safeguards proposed in the executive order are radical liberal ideas, that should be concerning to every American.”

Fields-Meyer says the back and forth over the executive order underscores the importance of passing federal legislation on AI, which “would bring a lot more stability to AI policy.” There are currently over 80 bills relating to AI in Congress, but it seems unlikely any of them will become law in the near future.

Sandra Wachter, a professor of technology regulation at the Oxford Internet Institute, says Biden’s executive order was “a seminal step towards ensuring ethical AI and is very much on par with global developments in the UK, the EU, Canada, South Korea, Japan, Singapore and the rest of the world.” She says she worries it will be repealed before it has had a chance to have a lasting impact. “It would be a very big loss and a big missed opportunity if the framework was to be scrapped and AI governance to be reduced to a partisan issue,” she says. “This is not a political problem, this is a human problem—and a global one at that.”

Correction, July 11

The original version of this story misidentified a group that has spoken out against Biden’s executive order. It is NetChoice, not TechNet.

Meta Has Been Ordered to Stop Mining Brazilian Personal Data to Train Its AI


Brazil’s national data protection authority has ordered Meta to halt the use of data originating from the country to train its AI models.

Meta’s current privacy policy enables the company to use data from its platforms, including Facebook, Instagram, and WhatsApp, to train its artificial intelligence models. However, that practice will no longer be permitted in Brazil after the country’s national data protection authority gave the company five days to change its policy on Tuesday.


Brazil said the company will need to confirm it has stopped using the data or face a daily non-compliance fine of 50,000 Brazilian reais (almost $9,000), citing “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects.”

Meta said it was “disappointed” with the Brazilian authority’s decision, saying it was a “step backward for innovation.”

“AI training is not unique to our services, and we’re more transparent than many of our industry counterparts who have been using public content to train their models and products,” the company tells TIME Wednesday, following the Brazilian authority’s decision.

The decision follows a report published in June by Human Rights Watch, which found that a popular dataset of images scraped from online sources and used to train image models, made by the German nonprofit LAION, contained identifiable images of Brazilian children, which the report says places them at risk of deepfakes or other forms of exploitation. Human Rights Watch says it found 170 photos of children from at least 10 Brazilian states by reviewing less than 0.0001 percent of the images in the dataset.

Brazil is one of Meta’s biggest markets, with over 112 million Facebook users alone. In June at a conference in the South American country, Meta unveiled new AI tools for businesses on its WhatsApp platform.

The Brazilian authority said users were not sufficiently warned about the changes, and that the process for opting out was “not very intuitive.” Meta says its approach complies with local privacy laws, and that it will continue to address the Brazilian authority’s questions.

Brazil’s decision to stop Meta from feeding users’ data into its AI models follows similar pushback in Europe. Last month, Meta delayed the launch of its AI services and paused plans to train its models on E.U. and U.K. data after receiving a complaint from the Irish privacy regulator. Meta is expected to push ahead with training in the U.S., which lacks federal online privacy protections.

Read more: Meta Faces Norwegian Complaint Over Plans to Train AI on User Images and Posts

This is not the first time Meta has found itself at odds with Brazilian authorities. In February, the company was barred from using its name in Brazil due to confusion with another company. Meta successfully overturned the decision in March.
