
Has AI Progress Really Slowed Down?

For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter if only they found ways to continue making them bigger. This wasn’t merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements—regardless of whether the system was designed to recognize images or speech, or to generate language. Noticing the same trend, in 2020, OpenAI coined the term “scaling laws,” which has since become a touchstone of the industry.
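
As a rough illustration of what “mathematically predictable” means here, the scaling-law papers describe a model’s error falling along a smooth power-law curve as training compute grows. A simplified sketch of that kind of relationship (an illustrative form, not any lab’s exact published fit) is:

L(C) \approx L_{\infty} + a \, C^{-b}

where C is training compute, L(C) is the model’s loss, L_{\infty} is an irreducible floor, and a and b are empirically fitted constants. The practical upshot is that each multiplicative increase in compute was expected to buy a predictable proportional reduction in the remaining loss—which is what made ever-larger training runs look like a safe bet.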

This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today’s articulate chatbots.

But now, that bigger-is-better gospel is being called into question. 

Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same “intelligence improvements.” 

What are tech companies saying?

Still, many leading AI companies seem confident that progress is marching ahead at full steam. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said “we haven’t seen any signs of deviations from scaling laws.” OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google’s Gemini model took GPT-4o’s top spot on a popular AI-performance leaderboard, the company’s CEO, Sundar Pichai, posted to X, saying “more to come.”

Read more: The Researcher Trying to Glimpse the Future of AI

Recent releases paint a somewhat mixed picture. Anthropic has updated its medium-sized model, Sonnet, twice since its release in March, making it more capable than the company’s largest model, Opus, which has not received such updates. In June, the company said Opus would be updated “later this year,” but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company’s larger Gemini Ultra model has yet to receive an update. OpenAI’s recently released o1-preview model outperforms GPT-4o in several benchmarks, but in others it falls short. o1-preview was reportedly called “GPT-4o with reasoning” internally, suggesting the underlying model is similar in scale to GPT-4.

Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, “we’ve failed deeply as a company,” Amodei said last week, offering a glimpse at the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever’s new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.

“They had these things they thought were mathematical laws and they’re making predictions relative to those mathematical laws and the systems are not meeting them,” says Gary Marcus, a leading voice on AI and author of several books, including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally “hit a wall”—something he has warned could happen since 2022. “I didn’t know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck,” he says.

Have we run out of data?

A slowdown could be a reflection of the limits of current deep learning techniques, or simply that “there’s not enough fresh data anymore,” Marcus says. It’s a hypothesis that has gained ground among some following AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to how people are more likely to misinterpret your intentions over text messaging, as opposed to in person, as an example of text data’s limitations. “I think it’s like that with language models,” she says. 

The lack of data is particularly acute in certain domains like reasoning and mathematics, where we “just don’t have that much high quality data,” says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn’t mean scaling is likely to stop—just that scaling alone might be insufficient. “At every order of magnitude scale up, different innovations have to be found,” he says, noting that it does not mean AI progress will slow overall. 

Read more: Is AI About to Run Out of Data? The History of Oil Says No

It’s not the first time critics have pronounced scaling dead. “At every stage of scaling, there are always arguments,” Amodei said last week. “The latest one we have today is, ‘we’re going to run out of data, or the data isn’t high quality enough, or models can’t reason.’ … I’ve seen the story happen enough times to really believe that probably the scaling is going to continue,” he said. Reflecting on OpenAI’s early days on Y Combinator’s podcast, company CEO Sam Altman partially credited the company’s success to a “religious level of belief” in scaling—a concept he says was considered “heretical” at the time. In response to a recent post on X from Marcus saying his predictions of diminishing returns were right, Altman posted saying “there is no wall.”

There could be another reason we are hearing echoes of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with a sense that people had extremely high expectations. “They expected AI was going to be able to already write a PhD thesis,” he says. “Maybe it feels a bit… anti-climactic.”

A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. “We tend to forget that GPT-3 from GPT-4 was like 100x scale in compute,” Sevilla says. “If you want to do something like 100 times bigger than GPT-4, you’re gonna need up to a million GPUs.” That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk’s 100,000 GPU supercomputer in Memphis—the largest of its kind—which was reportedly built from start to finish in three months.

In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI’s o1-preview has been heralded as one such example: it outperforms previous models on reasoning problems by being allowed more time to think. “This is something we already knew was possible,” Sevilla says, gesturing to an Epoch AI report published in July 2023.

Read more: Elon Musk’s New AI Data Center Raises Alarms Over Pollution

Policy and geopolitical implications

Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall Street. The perceived speed of technological advancement following GPT-4’s release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It’s an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.

Much of the U.S.’s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden’s sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government regarding models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, under the assumption that it would target future, larger models. This same assumption underpins export restrictions (restrictions on the sale of AI chips and technologies to certain countries) designed to limit China’s access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may have a smaller impact on slowing China’s AI progress.

“The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. “stalls at the frontier,” he says, we could see a national push to drive breakthroughs in AI. He says a slip in the U.S.’s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.

Whether we’re seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. “It’s unclear to me that a few months is a substantial enough reference point,” Singer says. “You could hit a plateau and then hit extremely rapid gains.”

U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns

“AI is a technology like no other in human history,” U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. “Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn’t the smart thing to do.”

Raimondo’s remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations, as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.

Raimondo suggested participants keep two principles in mind: “We can’t release models that are going to endanger people,” she said. “Second, let’s make sure AI is serving people, not the other way around.”

Read More: How Commerce Secretary Gina Raimondo Became America’s Point Woman on AI

The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and the U.S. governments announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI Summit in Seoul, Raimondo had announced the creation of the network.

In a joint statement, the members of the International Network of AI Safety Institutes—which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore—laid out their mission: “to be a forum that brings together technical expertise from around the world,” “…to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community,” and “…to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development.”

In the lead-up to the convening, the U.S. AISI, which serves as the network’s inaugural chair, also announced a new government taskforce focused on the technology’s national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aim to “identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology,” with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.

The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not get to “write the rules of the road.” Earlier Wednesday, Chinese lab DeepSeek announced a new “reasoning” model thought to be the first to rival OpenAI’s own reasoning model, o1, which the company says is “designed to spend more time thinking” before it responds.

On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a “Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” which the commission defined as “systems as good as or better than human capabilities across all cognitive domains” that “would surpass the sharpest human minds at every task.”

Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday’s event, Anthropic CEO Dario Amodei—who believes AGI-like systems could arrive as soon as 2026—cited “loss of control” risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting “we also need to be really careful about how we do it.”

Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model—the upgraded version of Anthropic’s Claude 3.5 Sonnet. The evaluation focused on assessing the model’s biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be “routinely circumvented,” which they noted is “consistent with prior research on the vulnerability of other AI systems’ safeguards.”

The San Francisco convening set out three priority topics that stand to “urgently benefit from international collaboration”: managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.

While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, “to accelerate the design and implementation of frontier AI safety frameworks.” And in February, France will host its “AI Action Summit,” following the Summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate.

Raimondo on Wednesday emphasized the importance of integrating safety with innovation when it comes to something as rapidly advancing and as powerful as AI. “It has the potential to replace the human mind,” she said. “Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation. We need that virtuous cycle.”

What Trump’s Win Means for Crypto

This election cycle, the crypto industry poured over $100 million into races across the country, hoping to assert crypto’s relevancy as a voter issue and usher pro-crypto candidates into office. On Wednesday morning, almost all of the industry’s wishes came true. Republican candidate Donald Trump, who has lavished praise upon Bitcoin this year, won handily against his Democratic opponent Kamala Harris. And crypto PACs scored major wins in House and Senate races—most notably in Ohio, where Republican Bernie Moreno defeated crypto skeptic Sherrod Brown. 

As Trump’s numbers ascended on Tuesday night, Bitcoin hit a new record high, topping $75,000. Crypto-related stocks, including Robinhood Markets and MicroStrategy, also leapt upward. Enthusiasts now believe that Trump’s Administration will strip back regulation of the crypto industry, and that a favorable Congress will pass legislation that gives the industry more room to grow. 

“This is a huge victory for crypto,” Kristin Smith, the CEO of the Blockchain Association, a D.C.-based lobbying group, tells TIME. “I think we’ve really turned a corner, and we’ve got the right folks in place to get the policy settled once and for all.”

Trump’s crypto embrace

Many crypto fans supported Trump over Harris for several reasons. Trump spoke glowingly about crypto this year on the campaign trail, despite having expressed skepticism about it for years. At the Bitcoin conference in Nashville in July, Trump floated the idea of establishing a federal Bitcoin reserve, and stressed the importance of bringing more Bitcoin mining operations to the U.S.

Read More: Inside the Health Crisis of a Texas Bitcoin Town

Perhaps most importantly, Trump vowed to oust Gary Gensler, the chair of the Securities and Exchange Commission (SEC), who has brought many lawsuits against crypto projects for allegedly violating securities laws. Gensler is a widely reviled figure in the crypto industry, with many accusing him of stifling innovation. Gensler, conversely, argued that it was his job to protect consumers from the massive crypto collapses that unfolded in 2022, including Terra Luna and FTX.

Gensler’s term isn’t up until 2026, but some analysts expect him to resign once Trump takes office, as previous SEC chairs have done after the President who appointed them lost their election. A change to SEC leadership could allow many more crypto products to enter mainstream financial markets. For the past few years, the SEC had been hesitant to approve crypto ETFs: investment vehicles that allow people to bet on crypto without actually holding it. But a judge forced Gensler’s hand, bringing Bitcoin ETFs onto the market in January. Now, under a friendlier SEC, ETFs based on smaller cryptocurrencies like Solana and XRP may be next.

Many crypto enthusiasts are also excited by Trump’s alliance with Elon Musk, who has long championed cryptocurrencies on social media. On election night, Dogecoin, Musk’s preferred meme coin, spiked 25% to 21 cents.

Impact in the Senate

Crypto enthusiasts are also cheering the results in the Senate, which was the focus of most of the industry’s political contributions. Crypto PACs like Fairshake spent over $100 million supporting pro-crypto candidates and opposing anti-crypto candidates, in the hopes of ushering in a new Congress that would pass legislation favorable to the industry. Centrally, lobbyists hoped for a bill that would shift crypto regulation from the SEC to the Commodity Futures Trading Commission (CFTC), a much smaller agency.

Read More: Crypto Is Pouring Cash Into the 2024 Elections. Will It Pay Off?

Crypto PACs particularly focused their efforts in Ohio, spending some $40 million to unseat Democrat Brown, the Senate Banking Committee Chair and a crypto critic. His opponent Moreno has been a regular attendee at crypto conferences and vowed to “lead the fight to defend crypto in the US Senate.” On Tuesday night, Moreno won, flipping control of the Senate. 

Defend American Jobs, a crypto PAC affiliated with Fairshake, claimed credit for Brown’s defeat on Tuesday. “Elizabeth Warren ally Sherrod Brown was a top opponent of cryptocurrency and thanks to our efforts, he will be leaving the Senate,” spokesperson Josh Vlasto wrote in a statement. “Senator-Elect Moreno’s come-from-behind win shows that Ohio voters want a leader who prioritizes innovation, protects American economic interests, and will ensure our nation’s continued technological leadership.”

Crypto PACs notched another victory in Montana, where their preferred candidate, Republican Tim Sheehy, defeated Democrat Jon Tester. 

The rise of prediction markets

Finally, crypto enthusiasts celebrated the accuracy of prediction markets, which allow users to bet on election results using crypto. Advocates claimed that prediction markets could be more accurate than polls, because they channeled the collective wisdom of people with skin in the game. Critics, on the other hand, dismissed them as being too volatile and based on personal sentiment and boosterism.

For weeks, prediction markets had been far more favorable toward Trump than the polls, which portrayed Trump and Harris in a dead heat. (For example, Polymarket gave Trump a 62% chance of winning on Nov. 3.) And on election day, before any major results had been tabulated, prediction markets swung heavily towards Trump; the odds of Republicans sweeping the presidency, House, and Senate jumped to 44% on Kalshi.

In the last couple of months, bettors wagered over $2 billion on the presidential election on Polymarket, according to Dune Analytics. It’s still unclear whether prediction markets are actually more accurate than polls on average. But their success in this election will likely only increase their presence in the political arena in years to come.

Crypto’s future in the Trump era is far from guaranteed. Crypto prices are highly susceptible to global events, like Russia’s invasion of Ukraine, as well as larger macroeconomic trends. Fraudulent crypto projects like FTX, which thrived in deregulated environments, have also tanked prices in years past. Skeptics worry that more Americans being able to buy crypto will add volatility and risk to the American financial system. 

And it’s unclear how dedicated Trump actually is to crypto, or whether he will follow through on his pledges to the industry. “If he doesn’t deliver on these promises quickly, the euphoria could turn to disappointment, which has the potential to result in crypto market volatility,” Tim Kravchunovsky, founder and CEO of the decentralized telecommunications network Chirp, wrote to TIME. “We have to be prepared for this because the reality is that crypto isn’t the most important issue on Trump’s current agenda.” 

But for now, most crypto fans believe that a “bull run,” in which prices increase, is imminent, and that regulatory change is incoming. “I don’t think we’re going to see the same kind of hostility from the government, particularly members of Congress, as we have in the past,” says Smith. “This is really positive news for all parts of the ecosystem.”

Andrew R. Chow’s book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.

TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a ‘Relic of the Past’

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME’s 2023 and 2024 lists of the 100 most influential people in AI—all of whom are playing a role in shaping the future of the technology.

Following a discussion between TIME’s CEO Jessica Sibley and executives from the event’s sponsors—Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia’s VP of Europe, the Middle East, and Africa—and after the main course had been served, attention turned to a panel discussion.

The panel featured TIME 100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the UK-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI’s impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists’ views on the risks posed by AI reflected their various focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models for any other harmful capabilities more broadly, the focus was on the need to “get our heads around the empirical data that will tell us much more about what’s coming down the pike and what kind of risks are associated with it.”

Birhane, meanwhile, emphasized what she sees as the “massive hype” around AI’s capabilities and potential to pose existential risk. “These models don’t actually live up to their claims,” she said. Birhane argued that “AI is not just computational calculations. It’s the entire pipeline that makes it possible to build and to sustain systems,” citing the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly in relation to their energy and water use), and the underpaid labor of data-labellers as examples. “There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline,” she said. Riparbelli suggested that both “fixing the problems already in society today” and thinking about “Terminator-style scenarios” are important and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls when it comes to issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, “best practices for how you deal with different safety challenges change very quickly,” Leung said, pointing to a “big asymmetry between what is known publicly to academics and to civil society, and what is known within these companies themselves.”

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. “There’s a huge onus on companies to continue to innovate on safety practices,” said Leung. Riparbelli agreed, suggesting companies may have a “moral imperative” to ensure their systems are safe. At the same time, “governments have to play a role here. That’s completely non-negotiable,” said Leung.

Equally, Birhane was clear that “effective regulation” based on “empirical evidence” is necessary. “A lot of governments and policy makers see AI as an opportunity, a way to develop the economy for financial gain,” she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. “Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society.”

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. “What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?” she asked. “It’s not a sufficiently resourced question.” Still, Leung pointed to the fact that China was party to last year’s AI Safety Summit hosted by the U.K. as cause for optimism. “It’s very important to make sure that they’re around the table,” she said.

One concrete area where we can observe the advance of AI capabilities in real-time is AI-generated video. In a synthetic video created by his company’s technology, Riparbelli’s AI double declared “text as a technology is ultimately transitory and will become a relic of the past.” Expanding on the thought, the real Riparbelli said: “We’ve always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where for most consumers, at least, they prefer to watch and listen to their content.” 

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will “enable anyone to create a Hollywood film from their bedroom without needing more than their imagination,” he said. This technology poses obvious challenges in terms of its ability to be abused, for example by creating deepfakes or spreading misinformation, but Riparbelli emphasizes that his company takes steps to prevent this, noting that “every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies.”

Riparbelli suggests that rather than a “technology-centric” approach to regulation on AI, the focus should be on designing policies that reduce harmful outcomes. “Let’s focus on the things we don’t want to happen and regulate around those.”

The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.

At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

Inventor and futurist Ray Kurzweil, researcher and Brookings Institution fellow Chinasa T. Okolo, director of the U.S. Artificial Intelligence Safety Institute (AISI) Elizabeth Kelly, and Cognizant CEO Ravi Kumar S discussed the transformative power of AI during a panel at a TIME100 Impact Dinner in San Francisco on Monday. During the discussion, which was moderated by TIME’s editor-in-chief Sam Jacobs, Kurzweil predicted that we will achieve Artificial General Intelligence (AGI), a type of AI that might be smarter than humans, by 2029.

“Nobody really took it seriously until now,” Kurzweil said about AI. “People are convinced it’s going to either endow us with things we’d never had before, or it’s going to kill us.”

Cognizant sponsored Monday’s event, which celebrated the 100 most influential people leading change in AI. The TIME100 AI spotlights computer scientists, business leaders, policymakers, advocates, and others at the forefront of big changes in the industry. Jacobs probed the four panelists—three of whom were named to the 2024 list—about the opportunities and challenges presented by AI’s rapid advancement.

Kumar discussed the potential economic impact of generative AI and cited a new report from Cognizant that says generative AI could add more than a trillion dollars annually to the U.S. economy by 2032. He identified key constraints holding back widespread adoption, including the need for improved accuracy, cost-performance, responsible AI practices, and explainable outputs. “If you don’t get productivity,” he said, “task automation is not going to lead to a business case stacking up behind it.”

Okolo highlighted the growth of AI initiatives in Africa and the Global South, citing the work of professor Vukosi Marivate from the University of Pretoria in South Africa, who has inspired a new generation of researchers within and outside the continent. However, Okolo acknowledged the mixed progress in improving the diversity of languages informing AI models, with grassroots communities in Africa leading the charge despite limited support and funding.

Kurzweil said that he was excited about the potential of simulated biology to revolutionize drug discovery and development. By simulating billions of interactions in a matter of days, he noted, researchers can accelerate the process of finding treatments for diseases like cancer and Alzheimer’s. He also provided a long-term perspective on the exponential growth of computational power, predicting a sharper so-called S-curve (a slow start, then rapid growth before leveling off) for AI disruption compared to previous technological revolutions.

Read more: The TIME100 Most Influential People in AI 2024

Kelly addressed concerns about AI’s potential for content manipulation in the context of the 2024 elections and beyond. “It’s going to matter this year, but it’s going to matter every year more and more as we move forward,” she noted. She added that AISI is working to advance the science to detect synthetically created content and authenticate genuine information.

Kelly also noted that lawmakers have been focusing on AI’s risks and benefits for some time, with initiatives like the AI Bill of Rights and the AI Risk Management Framework. “The president likes to use the phrase ‘promise and peril,’ which I think pretty well captures it, because we are incredibly excited about simulated biology and drug discovery and development while being aware of the flip side risks,” she said.

As the panel drew to a close, Okolo urged attendees, who included nearly 50 other past and present TIME100 AI honorees, to think critically about how they develop and apply AI and to try to ensure that it reaches people in underrepresented regions in a positive way.

“A lot of times you talk about the benefits that AI has brought, you know, to people. And a lot of these people are honestly concentrated in one region of the world,” she said. “We really have to look back, or maybe, like, step back and think broader,” she implored, asking leaders in the industry to think about people from Africa to South America to South Asia and Southeast Asia. “How can they benefit from these technologies, without necessarily exploiting them in the process?”

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

At TIME100 Impact Dinner, AI Leaders Talk Reshaping the Future of AI

TIME hosted its inaugural TIME100 Impact Dinner: Leaders Shaping the Future of AI, in San Francisco on Monday evening. The event kicked off a weeklong celebration of the TIME100 AI, a list that recognizes the 100 most influential individuals in artificial intelligence across industries and geographies and showcases the technology’s rapid evolution and far-reaching impact. 

TIME CEO Jessica Sibley set the tone for the evening, highlighting the diversity and dynamism of the 2024 TIME100 AI list. With 91 newcomers from last year’s inaugural list and honorees ranging from 15 to 77 years old, the group reflects the field’s explosive growth and its ability to attract talent from all walks of life.

Read More: At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

The heart of the evening centered around three powerful toasts delivered by distinguished AI leaders, each offering a unique perspective on the transformative potential of AI and the responsibilities that come with it.

Reimagining power structures

Amba Kak, co-executive director of the AI Now Institute, delivered a toast that challenged attendees to look beyond the technical aspects of AI and consider its broader societal implications. Kak emphasized the “mirror to the world” quality of AI, reflecting existing power structures and norms through data and design choices.

“The question of ‘what kind of AI we want’ is really an opening to revisit the more fundamental question of ‘what is the kind of world we want, and how can AI get us there?’” Kak said. She highlighted the importance of democratizing AI decision-making, ensuring that those affected by AI systems have a say in their deployment.

Kak said she drew inspiration from frontline workers and advocates pushing back against the misuse of AI, including nurses’ unions staking their claim in clinical AI deployment and artists defending human creativity. Her toast served as a rallying cry for a more inclusive and equitable AI future.

Amplifying creativity and breaking barriers

Comedian, filmmaker, and AI storyteller King Willonius emphasized AI’s role in lowering the barriers to creative work and giving voice to underrepresented communities. Willonius shared his personal journey of discovery with AI-assisted music composition, illustrating how AI can unlock new realms of creative expression.

“AI doesn’t just automate—it amplifies,” he said. “It breaks down barriers, giving voices to those who were too often left unheard.” He highlighted the work of his company, Blerd Factory, in leveraging AI to empower creators from diverse backgrounds.

Willonius’ toast struck a balance between enthusiasm for AI’s creative potential and a call for responsible development. He emphasized the need to guide AI technology in ways that unite rather than divide, envisioning a future where AI fosters empathy and global connection.

Accelerating scientific progress

AMD CEO Lisa Su delivered a toast that underscored AI’s potential to address major global challenges. Su likened the current AI revolution to the dawn of the industrial era or the birth of the internet, emphasizing the unprecedented pace of innovation in the field.

She painted a picture of AI’s transformative potential across various domains, from materials science to climate change research, and said that she was inspired by AI’s applications in healthcare, envisioning a future where AI accelerates disease identification, drug development, and personalized medicine.

“I can see the day when we accelerate our ability to identify diseases, develop therapeutics, and ultimately find cures for the most important illnesses in the world,” Su said. Her toast was a call to action for leaders to seize the moment and work collaboratively to realize AI’s full potential while adhering to principles of transparency, fairness, and inclusion.

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried

On June 8, Republicans adopted a new party platform ahead of a possible second term for former President Donald Trump. Buried among the updated policy positions on abortion, immigration, and crime, the document contains a provision that has some artificial intelligence experts worried: it vows to scrap President Joe Biden’s executive order on AI.

“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the platform reads.

Biden’s executive order on AI, signed last October, sought to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation and competition and the use of AI for public services. It requires developers of the most powerful AI systems to share their safety test results with the U.S. government and calls on federal agencies to develop guidelines for the responsible use of AI in domains such as criminal justice and federal benefits programs.

Read More: Why Biden’s AI Executive Order Only Goes So Far

Carl Szabo, vice president of industry group NetChoice, which counts Google, Meta, and Amazon among its members, welcomes the possibility of the executive order’s repeal, saying, “It would be good for Americans and innovators.”

“Rather than enforcing existing rules that can be applied to AI tech, Biden’s Executive Order merely forces bureaucrats to create new, complex burdens on small businesses and innovators trying to enter the marketplace. Over-regulating like this risks derailing AI’s incredible potential for progress and ceding America’s technological edge to competitors like China,” said Szabo in a statement.

However, recent polling shared exclusively with TIME indicates that Americans on both sides of the political aisle are skeptical that the U.S. should avoid regulating AI in an effort to outcompete China. According to the poll conducted in late June by the AI Policy Institute (AIPI), 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”

Dan Hendrycks, director of the Center for AI Safety, says, “AI safety and risks to national security are bipartisan issues. Poll after poll shows Democrats and Republicans want AI safety legislation.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

The proposal to remove the guardrails put in place by Biden’s executive order runs counter to the public’s broad support for a measured approach to AI, and it has prompted concern among experts. Amba Kak, co-executive director of the AI Now Institute and former senior advisor on AI at the Federal Trade Commission, says Biden’s order was “one of the biggest achievements in the last decade in AI policy,” and that scrapping the order would “feel like going back to ground zero.” Kak says that Trump’s pledge to support AI development rooted in “human flourishing” is a subtle but pernicious departure from more established frameworks like human rights and civil liberties.

Ami Fields-Meyer, a former White House senior policy advisor on AI who worked on Biden’s executive order, says, “I think the Trump message on AI is, ‘You’re on your own,’” referring to how repealing the executive order would end provisions aimed at protecting people from bias or unfair decision-making from AI.

NetChoice and a number of think tanks and tech lobbyists have railed against the executive order since its introduction, arguing it could stifle innovation. In December, venture capitalist and prominent AI investor Ben Horowitz criticized efforts to regulate “math, FLOPs and R&D,” alluding to the compute thresholds set by Biden’s executive order. Horowitz said his firm would “support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.”

While Trump has previously accused tech companies like Google, Amazon, and Twitter of working against him, in June, speaking on Logan Paul’s podcast, Trump said that the “tech guys” in California gave him $12 million for his campaign. “They gave me a lot of money. They’ve never been into doing that,” Trump said.

The Trump campaign did not respond to a request for comment.

Even if Trump is re-elected and does repeal Biden’s executive order, some changes wouldn’t be felt right away. Most of the leading AI companies agreed to voluntarily share safety testing information with governments at an international summit on AI in Seoul last May, meaning that removing the requirements to share information under the executive order may not have an immediate effect on national security. But Fields-Meyer says, “If the Trump campaign believes that the rigorous national security safeguards proposed in the executive order are radical liberal ideas, that should be concerning to every American.”

Fields-Meyer says the back and forth over the executive order underscores the importance of passing federal legislation on AI, which “would bring a lot more stability to AI policy.” There are currently over 80 bills relating to AI in Congress, but it seems unlikely any of them will become law in the near future.

Sandra Wachter, a professor of technology regulation at the Oxford Internet Institute, says Biden’s executive order was “a seminal step towards ensuring ethical AI and is very much on par with global developments in the UK, the EU, Canada, South Korea, Japan, Singapore and the rest of the world.” She says she worries it will be repealed before it has had a chance to have a lasting impact. “It would be a very big loss and a big missed opportunity if the framework was to be scrapped and AI governance to be reduced to a partisan issue,” she says. “This is not a political problem, this is a human problem—and a global one at that.”

Correction, July 11

The original version of this story misidentified a group that has spoken out against Biden’s executive order. It is NetChoice, not TechNet.

S.F. Federal Reserve Bank President Mary Daly Believes AI Can Boost the Labor Market

In an exclusive interview with TIME, San Francisco Federal Reserve president and chief executive Mary Daly said that the explosion of artificial intelligence (AI) could improve the labor market in the long-term and make workers more productive, even as workers fear the rising technology will change or eliminate their jobs.

“Jobs are being created, as well as jobs being replaced,” Daly said of AI. “If we can get people to upskill or reskill to take the jobs that are being created, we’ll have a very successful and growing economy. But that’s the burden on us—to make sure that everyone can participate in this changing technological development.”

TIME sat down with Daly at the Aspen Ideas Festival on June 28 to discuss the nation’s monetary policy, a potential softening in the labor market, the role of AI in the workforce, and more.

This interview has been condensed and edited for clarity.

TIME: Tell me a bit about your role as a Federal Reserve Bank president. What does your day-to-day look like?

Mary Daly: I have many different responsibilities. I have about 2,000 employees who work in five locations and we cover the nine western states. We do everything in those businesses and facilities from processing cash to making sure currency is clean and efficiently processed to bank supervision and information technology. Then, of course, [we also do] economic research and policy and community engagement. With nine states, it’s a large geography. So another part of my responsibilities is just to get out in the district and talk to people, talk to CEOs. I work with community groups and government officials and worker groups to understand the lived experience in the areas they are in. [Then I’m able] to bring that insight and intelligence back to my decisions when I make monetary policy.

The monetary policy component is studying, learning, and analyzing—and going to D.C. for the FOMC meetings. Our work is only as good as the American people’s ability to understand it. Trust is one of our most important tools as we navigate to get to price stability and full employment. And so that’s a big part of my job as well.

I know you’re very passionate about Zip Code Economies. How does it factor into your role as a Federal Reserve Bank president?

So I started Zip Code Economies [a podcast] some years back, because I recognized something really important that didn’t ring true. When I travel all around the district and the United States, when I get to the zip code level—the community level—I see people who have hope, who maybe do not have the circumstances they want today, but they work in a community to make their circumstances better. When you read the economics research, it says that zip code is destiny. So there’s this big gap between how people living in the zip codes feel about their own lives and futures and how economists say zip code is destiny.

We set out to ask people: “What does your community mean to you?” And what we found is that…when you go and talk to people, they tell you that yes, things can be challenging. Here’s our way of making it better. Here’s what makes our situation better. We’re trying to leave the world better than when we found it. Here’s how we’re doing that. I wanted to tell those stories. So now Zip Code Economies is just a place where I shepherd the voices of others and they tell their own stories.

Pivoting to interest rates, do you think the Fed’s next move is going to be a hike or a cut? And how are you thinking about that policy moving forward?

Let me start with how we are thinking about the policy because it’ll explain why you won’t hear me talking about what I’m doing at the next meeting [on July 30]. There’s many more pieces of data that come in before we next meet. So we’re at a point right now where the risks to the economy are roughly balanced on inflation and on the labor market. We have two mandates: full employment and price stability. For a long time now, we’ve been fighting to get inflation down. Employment has been good, the labor markets have been healthy. We’re making progress on that journey. Monetary policy is working: the interest rates are higher, and you’re seeing that put a slowdown in the economy and bring inflation down.

We have to think about also making sure that we don’t give people low inflation and take away full employment. Because people want both things: price stability and jobs. What we’re doing is remaining thoughtful as we look at the data. We think about how to get confidence about where the economy is headed, and make policy adjustments only when we feel confident. Our policies are in a very good place right now so we don’t need to act urgently. We need to act thoughtfully. And so I don’t have a view of what we’ll do next time that’s so well formed that I would state it because it depends on how the economy evolves. I actually said earlier this week that right now optimal policy is conditional policy. If inflation stays high and sticky, then we’ll hold the rate for longer. If inflation comes down as we project, then we’ll gradually normalize. If inflation comes down more rapidly than we expect or the labor market starts to weaken more than expected, then we will make adjustments more rapidly. So those are the scenarios and the data will tell us which one we’re seeing come out to play, and then we will react accordingly.

Read More: Will AI Take Your Job? Maybe Not Just Yet, One Study Says

How confident are you that the Fed will get inflation back down to the 2% target without causing a recession?

Every month we get new data that says inflation is gradually coming down, that policy is working, and that the labor market remains solid. It’s still a good labor market. It’s just not as, you know, frothy as it was. That’s what employers tell me. It feels like employees come, they don’t leave in two weeks to go find the extra dollar at the next place. They stay. They can grow their careers here. So every month of data we get that says those are still our conditions makes it more likely we’ll be able to achieve our goals of bringing inflation down as gently as we can. But I’d never declare victory until we’re really there, so there’s more work to do.

Housing is a big factor keeping inflation high right now. What are you hearing from businesses and communities in the 12th District—and what does this suggest about housing and rent prices?

What I’m hearing now, and it’s not just in the 12th District but across the nation, is that housing is a problem everywhere. In fact, there’s this really interesting statistic from the Cato Institute that changed the way I understood how imbalanced the housing situation is. It says that 87% of Americans are worried about the cost of housing. They’re worried about their grandkids not being able to live near them because their kids can’t live where they grew up. They’re worried if they’re new homebuyers, worried if they live in rural areas and they can’t afford it. It doesn’t matter where you go, how old you are, what political persuasion you come from—everyone’s worried about housing. And interest rates have stopped the housing prices from appreciating as much as they were, but ultimately, interest rates can’t solve the fundamental problem. The fundamental problem is that we have too little housing and too much demand. Too little supply, too much demand means higher price levels on housing.

Are you concerned about further softening in the labor market, and how AI could impact the labor market in the longer term?

In terms of further softening, as the economy slows, we can expect the labor market to slow accordingly—to slow with the economy. That’s how monetary policy works: Interest rate goes up, the economy slows. As the economy slows, the labor market slows. And we are at a point now where some of the easy wins we got with inflation coming down without much of a disturbance to the labor market [or] the unemployment rate—those benign outcomes could be less in our future, but it’s too early to tell. So I wouldn’t say I’m worried about the labor market, but I’d say I’m definitely watching the labor market.

Now in terms of AI, that’s a different issue. That’s less about the slowing economy and more about technology. The important thing to remember when you think of AI is that technology’s always changing. You know, about five years ago, it was robotic process automation that people were very worried about. Well, obviously, we automated it with those types of technologies and we still have employment. We’ve had technological progress—the computer revolution, the Internet—all of that came and the unemployment rate stayed roughly below 5%. So jobs are being created, as well as jobs being replaced. And so the key is: Which tasks and jobs are being replaced? Which ones are getting augmented, making people more productive? And which ones are being created? The way the economy functions is if we can get people to upskill or reskill to take the jobs that are being created, we’ll have a very successful and growing economy. But that’s the burden on us—to make sure that everyone can participate in this changing technological development.

Read More: How to Make AI Work for You, at Work

You’ve said that companies—predominantly in tech—need to be more active in making workers return to the office in San Francisco. Are the current remote and hybrid arrangements weighing on the economy there and, more broadly, in the U.S.?

In the economy in San Francisco or any city where there’s a lot of office vacancy or where people are worried about the vibrancy of the urban corridor…they say tourism is coming back. And that’s really good, but [tourism is] a nighttime business. The day businesses—like the lunch business—are all workforce [dependent]. And of course, to make a vibrant city one that feels like there’s liveliness in it, it is about bringing your employees back to work at least part time. I’m not saying bring them back five days a week. We need to evolve. But I think it’s important for the vibrancy of our cities to have people participate in the vibrancy of our cities.

But here’s something that I think is even more important, and it’s nationwide: we were talking this morning [at the Aspen Ideas Festival] about AI and what it can do well. It can actually displace some of the entry level jobs that people start doing when they’re young and then they build themselves up to take on some more challenging roles. You have to practice getting those skills, human capital experience, but also EQ skills. How do you interact with people? How do you manage people? How do you do these things if we don’t come back to the office sometimes? Then all these young, new, career folks, they are going to have a disadvantage in terms of learning those skills. So we come to work three days a week in San Francisco and in the Fed, and we do it in all our areas because we believe that is our obligation and responsibility to help all generations feel like they can reach their potential down the road. So that’s part of the virtuous cycle that we’re trying to create in our economy. Is it hurting our economy? You can’t see the signs of that today. But a decade from now, when all of our emerging leaders have not had the experience of being there and learning how to lead in person ever, then I think that’s a challenge.

I want to stay on this remote work theme just a little bit longer. Some working parents and those with other caring responsibilities—often women—say that these working arrangements have been greatly beneficial for them. Would forcing a return to in-person work disproportionately affect women?

You know, we haven’t found that. So let me give you some examples of how it’s not just that way. Say we said that we’re not going to do all remote work, we’re going to have five days a week, eight hours a day, very prescriptive hours. Well, that’s really hard on lots of people whose lives don’t have that kind of flexibility. They have to sacrifice things. But if you say to people, you come in two, three days a week, and the hours in which you come in, just try to be there in certain hours where your teams are there, try to have focus days, etc., well then you can get some of the proceeds of being in the office and helping advance your career. Women and other minorities in the workplace, especially in leadership roles, they will be disadvantaged if they never participate either. But it is not meant and doesn’t have to be at the expense of having flexibility.

What if our model of the workplace was: Flex for your day, flex for your life. For workers who don’t have to be there every day, being able to flex so that if you have a soccer game or you have a doctor’s appointment for your kid or [have to] care take with a parent or a neighbor, well then you have that flexibility, but you’re still participating. 

And I’m sure you get asked this question a lot. Are you worried about a “doom loop” in San Francisco?

No, I’m not worried about a doom loop. In fact, I think doom loop is something that describes a city and an economy without assets, without energy, and that just has folded up its tent and that’ll be the end. There aren’t many American cities that have that right now and San Francisco is definitely not one of them. There are talented people. There are new businesses coming and putting themselves in the San Francisco area every week. It’s a beautiful area. It’s an area of entrepreneurship and innovation. It’s got a certain freedom to it which allows people to innovate and think outside of the box. And frankly, that’s what we need.

That doesn’t mean we don’t have problems. We have a very large housing shortage. We have some infrastructure in terms of roadways and repairs that need to be taken care of. And we have just the idea that people are still not sure if it’s going to be OK in San Francisco. But tourism’s back. We have people coming in partaking in the beautiful area, and now it’s really about bringing people back, getting them excited. But there are many people who are bullish on San Francisco. I’m one of them and that doesn’t mean it’ll be ready to go back to normal tomorrow. But you know, San Francisco has reinvented itself so many times, even in the time I’ve lived there [since 1996]. We’ve gone through many cycles, and every single time we thought it was the end of San Francisco: the dot com [bubble], then the financial crisis, then the pandemic. It’s never happened yet.
