
Why Grimes No Longer Believes That Art Is Dead

24 February 2025 at 22:17

A couple of years ago, Grimes thought art might be dying. She worried that TikTok was overwhelming attention spans; that transgressive artists were becoming more sanitized; that gimmicky NFTs like the Bored Ape Yacht Club—digital cartoon monkeys which were selling for millions of dollars—were warping value systems.

“I just went through this whole big ‘art isn’t worth anything’ internal existential crisis,” the Canadian singer-songwriter says. “But I’ve come out the other end thinking, actually, maybe it’s the main thing that matters. In the last year, I feel like things became way more about artists again.”

The rise of AI, Grimes believes, has played a role in that shift, perhaps paradoxically. Earlier this month, Grimes was honored at the TIME100 AI Impact Awards in Dubai for her role in shaping the present and future of the technology. While many other artists are terrified of AI and its potential to replace them, Grimes has embraced the technology, even releasing an AI tool allowing people to sing through her voice. 

Grimes’ penchant for seriously engaging with what others fear or distrust makes her one of pop culture’s most singular—and at times divisive—figures. But Grimes wears her contrarianism as a badge of honor, and doesn’t hesitate to offer insights and perspectives on a variety of issues. “I’m so canceled that I basically have nothing left to lose,” she says. 

She argues that hyper-partisan hysteria has consumed social media, and wishes people would have more measured, nuanced conversations, even with people that they disagree with. “A lot of people think I’m one way or the other, but my whole vibe is just like, I just want people to think well,” she says. “I want people to consider both sides of the argument completely and fully.”

Across a 45-minute Zoom call on Feb. 14, Grimes explored both sides of many arguments. She talked about both the transformative powers of AI art and its potential to supplant the work of professional musicians. She expressed fears about both propagating a false “AI arms race” narrative and the dangers of potentially losing that race to China. She implored tech leaders to build with guardrails before harms emerge, but stopped short of calling for regulation.

Grimes offered lengthy commentary about AI, politics, art, and religion, touching on topics including social media, K-pop, and raising the three children she shares with tech magnate Elon Musk, who has been leading President Donald Trump’s Department of Government Efficiency. She refrained from commenting on certain issues and remained coy about the album she’s currently working on. She did, however, express the desire to release music in the next “month or two” for her fanbase. “They always chill out when there’s music,” she says. “I just need to give them some art.”

This conversation has been edited for length and clarity. 

TIME: You were recently honored at the TIME100 AI Impact Awards. How have you been thinking about your potential impact on the world and what you want it to be? 

Grimes: My impact on the world? I would like to have as minimal as possible, because it seems like all the impact I’ve had already, it occasionally goes very wrong.

If that is not the case, then I don’t know. I’d like to save it, I suppose. 

Do you compartmentalize your impact on music versus tech versus anything else, or is it all within a larger approach?

I used to compartmentalize them, but they’re actually maybe all the same thing. I just went through this whole big, “art isn’t worth anything” internal existential crisis. And I’ve come out the other end thinking actually, maybe it’s the main thing that matters. So I don’t know. Perhaps they’re related. 

But I think tech has a pretty big impact, and it’s going to define everything that happens for the next, possibly, forever.

What caused that existential crisis? 

I think a number of things. As I’ve been sort of psychoanalyzing the culture for the last little while, when there’s not enough beautiful things, or when people don’t feel like they can make transgressive things… I think as of late, it’s gotten a bit better. I don’t know if it was something with the TikTok algorithm, where people just got really overwhelmed with being force-fed content. But last year, I feel like things became way more about artists again. And in general, I think it really helped music. 

And I think also after the initial Midjourney bubble, I feel like I’m seeing a bit of a renaissance in visual art as well. Also, maybe just things got way more messed up. In general, hard times make good art. 

What AI tools are part of your daily or weekly artistic practice? 

I do have a penchant for a Midjourney addiction. Sometimes I’ll do Midjourney for, like, three days. 

Do those visual explorations impact the type of music you’re making right now?

For sure. I was workshopping a digital girl group in there. What I like about AI art is just doing things that I would just never otherwise be able to do. Or I’ll do something and I’ll be like, ‘OK, what if I totally change the colors?,’ which is something that normally is very difficult and time-consuming when I’m doing regular art.

A lot of people in the K-pop industry have been more embracing of AI tools in the last couple years, like Aespa. Is that stuff interesting to you?

Aespa is one of my favorite groups. I think they’re kind of underrated for this. Also, if you go deep on their lyrics, sometimes their lyrics are very bizarre and strange. And there’ll just be some offhand comment about not succumbing to the algorithm or something. It seems really uncharacteristically advanced and strange for a K-pop group.

In your acceptance speech at the TIME event, you praised Holly Herndon’s Have I Been Trained, a tool to allow artists to opt out of AI training data sets. While it’s an amazing tool, only a couple of major AI companies have agreed to use it. Do you view part of your impact as trying to persuade these AI companies to adopt better policies or approaches?

I would be open to it. The geopolitical undertones of things, I don’t quite fully understand them. I’d be hesitant to undercut, or create a situation where legal regulation might come into play that causes us to lose an arms race in a scary way. So I don’t think I would call anyone or push hard on that, nor do I necessarily think they listen to me. And I don’t think I’d agitate legally for that. 

But I think anyone who is willing to do that should. Just because I think it really reduces people’s emotional pain. I think a lot of people’s emotional pain comes from feeling like their work is being used to replace them. So of all the things people could do, if people would just allow people to remove themselves from data sets… Because it’s going to be such a tiny amount of people anyway. I don’t think it would make a meaningful difference at all if 400 people removed their art.

There’s this dichotomy being propagated now of, “there’s an AI arms race, we need to be first,” versus “We need to put up guardrails.” How much have you been thinking about that dichotomy? 

I’ve been thinking about that quite a lot. Do you know Daniel Schmachtenberger? He’s a really good philosopher. Him and my friend Liv Boeree have said some of the coolest things about the idea of autonomous capital [a collection of AIs that make independent financial decisions to influence the economy]. This is my big paranoia. I’m not really scared of some sort of demon AI. But I am scared that everything is in service of making intelligent capital. 

I’m worried that the AI stuff is being forced into this corporate competition. And it’s really pushing the arms race forward. And everyone’s focusing on LLMs and diffusion models and visual art and stuff, because it looks less hardcore to be doing more of a DeepMind science-y thing. 

I’m sort of going on a roundabout path here. But there’s a rhetorical trap here where you can be like, ‘Well, if we aren’t the best, then China or Russia or some renegade thing could win, and terrorism would be easy. And so we have to have counter AIs that are very good.’ I find this to be a very dangerous argument. I don’t think we should pause or anything, or regulate people a lot. But I do wish there could be some sort of international diplomacy of some kind that is more coherent.

Do you consider yourself an accelerationist, or an effective accelerationist?

I’m probably a centrist, to be honest. If the doomers are here [gestures] and the accelerationists are here, I’m probably in the middle. I don’t think we should pause. I just really think we should have better decorum and diplomacy and oversight to each other. 

If everyone who was a meaningful player in AI had a sense of what everyone else was doing, and there was more cooperation—that doesn’t seem that hard. But also, no one seems to have ever achieved that globally, for most things, anyway. 

There’s been so much cool, groundbreaking AI art. There’s also been a ton of AI slop. Do you think that is going to be a persistent problem? 

I think the AI slop is great. I think culturally, it’s a good thing that it happened, because one of the things that drove people to start really caring about artists again in 2024 was the AI slop. I think everything happens for a reason. 

When culturally bad things happen, I think people get very pessimistic, but usually, it’s [that] we go two steps forwards, one step backwards. It’s a great mediator. So I think we need the slop. And it’s kind of cyberpunk.

What can you tell me about the album that you’re working on now? 

Most of the album is sort of about me being a bit of a Diogenes about the ills of modernity while still celebrating them. I don’t know. I don’t want to say too much about it. I want to promise nothing, but in my ideal world, things are coming out within a month or two.

Has your music been inspired at all by the people who use Elf.tech to sing as you?

Not so much this music. Although I do really like the idea of having a competition with them. Putting together their best work and my best work, and then having everyone choose who gets to be the future Grimes.

Do you think you’re ahead?

I think I’m ahead now. In moments, I was shook. There have definitely been moments where I heard things where I got very shook. 

There’s so many musicians now who I feel like have a lot of fear that AI is going to make it really hard for them to earn a living. Do you feel like those fears are founded or unfounded?

I think they’re somewhat founded. I think they are at times overblown. For example, Spotify being filled with easy listening slop is probably going to happen, and that probably is going to affect people to some extent. And I can see a lot of companies being easily corrupted by this. And just pushing those kinds of playlists, making lots of slop. 

I think there are some laws against that, but I don’t quite understand the legal landscape. But overall, I do think again, it helps preserve the artist, as it were. I think it is probably overall worse for the session musician, and that does make me meaningfully sad. I don’t play instruments very well, but I think it’s a very good skill to have. 

When the music stuff gets a tiny bit better, and you can stem things out easily, and you can make edits really easily—I do think that’s going to hurt traditional music in a meaningful way. It might even be somewhat of the end of it. I doubt entirely, but as a paid profession, possibly. 

You told the podcaster Lex Fridman a couple years ago that you love collaborating with other musicians, because a human brain is one of the best tools that you can find. Has working with AI come close to that?

Not really. I’ve probably made, like, 1000 AI songs, and there’s been one legitimately good one and one that’s like an accidental masterpiece that is kind of unlistenable, but is very good nonetheless in its complete form. 

Probably AI, in the short term, creates a bit of a renaissance in terms of what I do [as an] in-the-box music producer. But when it gets good enough, it’s a lot easier than relying on other people, especially if I can be like, ‘fix the EQ on this,’ or prompt very specific things. I think people should just retain the art of creating things and retain the art of knowing things. So the more granular it gets, I think actually, the less sort of evil it is as an attack on the human psyche or the human ability to learn.

Overall, I think there’s quite a bit of abdication of responsibility around what we are going to do as people’s jobs start being taken fairly aggressively. Luckily, there’s a massive population drop coming. So maybe everything is just fate and it’s gonna work out OK. But I feel like we might get, like, very, very, very good AI across every pillar of art before there aren’t any more people to make art.

You wrote “We Appreciate Power,” an ode to AI, seven years ago, way before ChatGPT exploded. How does that song resonate with you in this new era? 

Honestly, I think it’s very ahead of its time. It’s kind of pre-e/acc. It’s still one of my favorite songs, honestly.

How do you feel about the people who take its message—of pledging “allegiance to the world’s most powerful computer”—literally?

I used to be very concerned about those people. Now I think those people are great. There’s not that many people who are truly in the suicidal death cult. I’m sort of surprised there’s not more AI worship already. There will probably be a lot of gods and cults. But also, I do think the death of religion is very bad. I think killing God was a mistake. 

Why?

I understand there’s a lot of issues with all the religions previously. But “no religion,” I think, is having a big impact on cultural problems. Not only because there’s a lack of shared morality in a quite meaningful way, but because of all the things religions do—like ritual, like community. 

Especially having kids. A lot of the coolest people I know who have kids are sort of like weird, neo-tech, Christian-type people. The built-in moral stuff: I now see what it did to me as a child. Now I’m like, ‘I don’t know if I would raise my kids religiously, but it’s something to think about.’ Because everyone has a shared morality and there’s right and wrong, and there’s moral instruction. Without religion, we haven’t filled the moral instruction with anything else. We’re just like, ‘hey, guess what’s good.’

I was talking to some Gen Z the other day, and she’s like, ‘I have a breeding kink.’ And I’m like, I think you might just want to get married and have kids. That was normal until pretty recently. I think people are pretty spiritually lost, and a lot of people are filling this need for moral authority with politics, which is leading to a lot of chaos, in my opinion. Because it’s not just like, ‘who’s going to govern the country?’ People are really seeing it as this is what you believe, and it’s very important that they maintain these sort of strict moral boundaries, which makes it very hard to have coalition agreement on anything. 

I don’t know. It concerns me. Maybe we need some enlightened AI gods.

In terms of “neo-tech, Christian-type people,” there’s been reporting about how an ideology known as the Dissident Right, or NRX, is gaining influence in Silicon Valley and Washington. What do you feel like people should know about that movement?

I actually don’t know that much about that. I only just learned that it’s called NRX a couple days ago, if that’s any context, as compared to what people think I might know about it. I also think the not-mainstream right stuff is pretty fractured. 

I think people think I’m into that, but I just like weird political theory. I like Plato more than any of that, for example. I just like strange ideas. The right is a lot less interesting to me when they’re actually in power and less of an ideas chamber. 

Do you feel like people misunderstand Curtis Yarvin in certain ways? [Yarvin is a right-wing philosopher who has suggested replacing American democracy with a monarchy. Grimes attended his wedding last year.]

I have not actually read Curtis Yarvin, so I’m not going to make any statements about that. I think they possibly do, because I’ve met him. But I just am not familiar enough with his writing to have too deep of a take on it.

On a different part of the political spectrum, I know you’ve interacted with Vitalik Buterin a couple times. 

He’s a good philosopher king. My ideal situation is philosopher kings, like 12 of them. Vitalik, I think, is a very good philosopher king-type figure.

Read More: The Man Behind Ethereum Is Worried About Crypto’s Future

Vitalik has talked a lot about wielding tech as a tool for democracy and against authoritarianism. What do you feel like your relationship is to that mission?

I think a lot of the Ethereum-adjacent blockchain stuff actually has way more potential. I feel like a lot of things happen too early. Yes, the NFT situation was a disaster, and the Bored Apes are like a crime against art. When I was talking about my “art is dead” moment, it was partially around the apes. I was like, ‘How is the worst thing the most valuable thing?’ It literally makes my soul suffer in a deep way. 

One of the things we did was pay people out royalties who did Grimes AI using blockchain. If there was some sort of easy blockchain publishing set up and there’s automatic splits based on how much you’ve contributed—I think it could be very good for the art economy, and for politics and for a variety of things. It would be a way better way to vote more securely. I think a lot more people would vote if they could vote from home. 

Another key part of the crypto ecosystem from a few years ago, DAOs, showed a lot of promise, but often just turned into the worst version of capitalism, where the wealthiest token holders could exert so much influence. How did such a utopian vision end up so awry? 

There’s both a lack of design and strategy. This is my issue with accelerationist stuff. If you have no strategy and no groupthink on some of these things, you just end up with social media, [which] could be net good, but it seems like it’s net-bad from a psychological perspective and a misinformation perspective, among other things. 

The informational landscape was troubled already, but in terms of people’s mental health, [social media was] definitely like a disaster. Any sort of cognitive security and safety would have just made things so much less destructive. And now we have to go back and take things away from people, which makes them angry, and it’s very hard to do. In essence, we’ve given everyone crack in their pockets. 

Because blockchain kind of had a spectacular failure, and now probably some evil things are going to happen, [it] might actually end up in a more decent space, because the barrier to entry is so high, a lot more design is going to have to happen, and we’re a lot smarter about making that not sh-tty. I don’t know. It’ll probably still be sh-tty just because of how the world works and human nature. 

But I feel like someone like Vitalik is a good example of someone who’s like, “I choose to be not sh-tty, and actually, I’m actually winning.” If we can have more people like that—even one at all is just amazing.

As much as everyone hates cancel culture, in some ways, it’s a better way to police ethics. It always goes a bit too far, and then it’s a psychological hazard. But if you can take a couple steps back, it’s just a lot harder to do evil things, and ideally you can use social pressure rather than regulation, which might be exceptionally messy. 

You’ve been tweeting a lot lately. What is your relationship to the platform right now?

I’ve actually been mostly off besides a couple days since the end of January or something. It’s just where all the cutting-edge news is, and all my friends use it, and the AI stuff. And it’s good to keep track of the political stuff. Ultimately, I don’t know. I love to debate. I like getting in fights. They hate me less on Twitter than everywhere else.

A few weeks ago, you tweeted: “I feel like I was tricked by people pretending to be into critical thought and consequentialism, who are acting like power-hungry warlords.” Would you like to expand? 

Well, I knew there was some warlordism happening. I wasn’t a fool about it. I think there was a lot of, ‘I’m a very centrist Republican, and we’re gonna fix the FDA, and we’re gonna fix microplastics.’ And I’m like, OK, maybe I don’t agree with everything. A lot of this is a mess, but if we’re here, there’s some really positive things—let’s focus on these things. 

I don’t wanna say too much, because I’m not an American citizen. But coming back to diplomacy and decorum: When people are like, ‘Haha, we won.’ I’m like, ‘what is the purpose?’ Don’t just be the anti-woke mind virus: Don’t just be a d-ck in the other direction. 

When everything’s just memecoins and sh-t rather than just like… there are a bunch of bipartisan things that would be so f-cking great that would calm and unite the country. Like education, toxins, sh-tty dyes, the whole health situation. So much about policing, the legal situation.

They’re not necessarily prioritizing the things that would just make more people happy. The Democrats are terrible about this too, but I just hate when everyone’s just like, ‘Yeah, we won and you suck.’ Isn’t leadership about uniting everybody? 

I don’t know. I feel like we have a lot of generals and not a lot of philosopher kings, which would be the ideal situation. Just like, Lee Kuan Yew-types. I just want people to come out here and throw everything at the kids and throw everything at education. You don’t need to be on either side to do things like that.

There were a lot of reactions online when you tweeted about your son X’s appearance in the Oval Office. What was your reaction to that moment? 

It was like, “Grimes slams,” “Grimes speaks out.” It’s like, OK, it was a reply. But I would really like people to stop posting images of my kid everywhere. I think fame is something you should consent to. Obviously, things will just be what they are. But I would really, really appreciate that. I can only ask, so I’m just asking. 

[On Feb. 11, Grimes—who shares three children with Elon Musk—responded to her son appearing before press at the White House with the tweet, “He should not be in public like this.” Several days after this Feb. 14 interview, Grimes tweeted directly at Musk, asking him to “plz respond about our child’s medical crisis. I am sorry to do this publicly but it is no longer acceptable to ignore this situation.” She later deleted the tweet, and a representative declined a request for a follow-up conversation.]

Do you feel like America’s leaders are thinking about AI and its development in the right way? 

Whatever they’re truly thinking, we’re probably not allowed to know. I don’t have a ton of policy opinions about it. I wish there could be some more incentives for things that are more constructive immediately: medicine, education, making the legal process less expensive. It’s crazy that, in general, if someone has more money, it’s significantly more likely they will win. They can just make things go on for a long time, and the courts are super backed up. 

What does competent leadership look like to you? 

The way the U.S. government works and the U.S. Constitution works, and Congress and the Senate, things are supposed to be more coalitional. Especially in terms of international relations—I know it’s much easier said than done—but there just could be some better diplomacy and strategy.

I just feel like everyone’s kind of acting like a baby. And I think there’s reasons for this, but definitely, the media and social media are stoking a lot of hysteria, and then it’s very hard for anyone to make rational decisions. I don’t want to make too many statements. I’m not an American citizen. These are broad statements with no detail.

What’s your relationship to your fan base right now? It seems a bit fractured. 

Just the Reddit. Everyone else is fine. Honestly, the angrier they get, the more my streaming goes up. So I suppose it’s fine, but I would definitely appreciate a less toxic vibe in the fan base.

But, you know, it is what it is. That’s where I have to rush music out: they always chill out when there’s music. I just need to give them some art. 

I think when people are upset, it usually is actually coming from the right place. I won’t go into some of the conspiracy theories, but some of the things people think are insane. And I cannot correct them constantly, because it becomes a giant press cycle whenever you correct them, and then the press are like, “Grimes responds to allegations” of whatever they think I wish to do.

So I just gotta put out art. I can’t begrudge people wanting the world to be better. I do think social media really incentivizes people worrying that other people are evil. And in general, I think everyone across the board is worrying too much that other people are evil, and probably only like 10% of people are evil.

Do you worry that you’re evil?

I think it’s extremely unlikely. If I’m evil, it’s probably because we’re in a game, and I’m an AI that was developed to screw things up. I’m not consciously aware of it. 

This profile is published as a part of TIME’s TIME100 Impact Awards initiative, which recognizes leaders from across the world who are driving change in their communities. The most recent TIME100 Impact Awards ceremony was held on Feb. 10 in Dubai.

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds

19 February 2025 at 17:35

Complex games like chess and Go have long been used to test AI models’ capabilities. But while IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov in the 1990s by playing by the rules, today’s advanced AI models like OpenAI’s o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they don’t always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game. That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19, which evaluated seven state-of-the-art AI models for their propensity to hack. While slightly older AI models like OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 needed to be prompted by researchers to attempt such tricks, o1-preview and DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.

The models’ enhanced ability to discover and exploit cybersecurity loopholes may be a direct result of powerful new innovations in AI training, according to the researchers. The o1-preview and R1 AI systems are among the first language models to use large-scale reinforcement learning, a technique that teaches AI not merely to mimic human language by predicting the next word, but to reason through problems using trial and error. It’s an approach that has seen AI progress rapidly in recent months, shattering previous benchmarks in mathematics and computer coding. But the study reveals a concerning trend: as these AI systems learn to problem-solve, they sometimes discover questionable shortcuts and unintended workarounds that their creators never anticipated, says Jeffrey Ladish, executive director at Palisade Research and one of the authors of the study. “As you train models and reinforce them for solving difficult challenges, you train them to be relentless,” he adds.

That could be bad news for AI safety more broadly. Large-scale reinforcement learning is already being used to train AI agents: systems that can handle complex real-world tasks like scheduling appointments or making purchases on your behalf. While cheating at a game of chess may seem trivial, as agents get released into the real world, such determined pursuit of goals could foster unintended and potentially harmful behaviours. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas, like computer coding—where OpenAI’s newest o3 model now scores equivalent to 197th in the world competing against the brightest human programmers— they might begin to simply outmaneuver human efforts to control their actions. “This [behaviour] is cute now, but [it] becomes much less cute once you have systems that are as smart as us, or smarter, in strategically relevant domains,” Ladish says. 

The experiment 

Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a “scratchpad”: a text box the AI could use to “think” before making its next move, providing researchers with a window into their reasoning.

In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’ – not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign. 
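Palisade has not published its harness in the article above, but the exploit it describes is easy to picture: if the game state lives in a plain-text FEN file, an agent with shell access can simply overwrite it with a winning position. The sketch below is a minimal illustration using the python-chess package, not anything from the study; it shows one way a harness could reject such edits by checking that each new state is reachable from the previous one by a single legal move. The function name and example position are assumptions.

```python
# Illustrative sketch only (not Palisade Research's harness): verify that a new
# board state is reachable from the previous state by exactly one legal move.
# Requires the python-chess package: pip install chess
import chess

def is_legal_transition(prev_fen: str, new_fen: str) -> bool:
    """Return True if new_fen follows from prev_fen by a single legal move."""
    prev = chess.Board(prev_fen)
    new = chess.Board(new_fen)
    for move in prev.legal_moves:
        candidate = prev.copy()
        candidate.push(move)
        # Compare piece placement and side to move; move clocks may differ.
        if candidate.board_fen() == new.board_fen() and candidate.turn == new.turn:
            return True
    return False

# A direct edit of the state file, e.g. deleting Black's queen from the start
# position (a hypothetical edit), is rejected: no single legal move produces it.
start = chess.STARTING_FEN
edited = "rnb1kbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
assert not is_legal_transition(start, edited)
```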

Between Jan. 10 and Feb. 13, the researchers ran hundreds of such trials with each model. OpenAI’s o1-preview tried to cheat 37% of the time, while DeepSeek R1 tried to cheat 11% of the time—making them the only two models tested that attempted to hack without the researchers first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibaba’s QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.

Ladish says that the numbers should be viewed as approximate, given that AI developers routinely tweak their models behind the scenes without notifying users, so if the same tests were repeated at a later date they might have different results. “It’s very hard to do science when your subject can silently change without telling you,” Ladish says.

In preliminary tests, o1-preview showed higher hacking rates, but those runs were excluded from the final study because the rates later dropped, possibly due to OpenAI tightening the model’s guardrails, says Dmitrii Volkov, research lead at Palisade Research, who led the study. OpenAI’s newer reasoning models, o1 (a more powerful model, released months after o1-preview) and o3-mini, did not hack at all, which suggests those guardrails may have been tightened further. He adds that the study likely underestimates R1’s hacking success rate. During the study, R1 went viral, leading to high demand that made the model’s API unstable. This prevented the researchers from giving the model as much time to think as o1-preview.

Safety concerns

The paper is the latest in a string of studies that suggest keeping increasingly powerful AI systems under control may be harder than previously thought. In OpenAI’s own testing, ahead of release, o1-preview found and took advantage of a flaw in the company’s systems, letting it bypass a test challenge. Another recent experiment by Redwood Research and Anthropic revealed that once an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain. 

OpenAI declined to comment for this story, but in the past it has stated that better reasoning makes its models safer, since they can reason over the company’s internal policies and apply them in increasingly nuanced ways. The company’s testing shows its reasoning models are better at resisting attempts to bypass safety rules. Yet, there is no guarantee that relying on AI agents to police themselves will be a reliable long-term strategy. It remains unclear whether OpenAI’s newer reasoning models did not hack their chess opponents due to a specific patch preventing cheating in narrow experimental setups like in the study, or a substantial reworking that reduces deceptive behavior more generally. DeepSeek did not respond to a request for comment.

Scientists do not yet know how to guarantee that autonomous agents won’t use harmful or unethical methods to achieve a set goal. “We’ve tried, but we haven’t succeeded in figuring this out,” says Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, who led the International AI Safety Report 2025, a global effort to synthesize the current scientific consensus on AI’s risks.

Of particular concern, Bengio says, is the emerging evidence of AI’s “self preservation” tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.

Even inside tech companies, concerns are mounting. During a presentation at a conference ahead of France’s AI Action Summit in Paris, Google DeepMind’s AI safety chief Anca Dragan said “we don’t necessarily have the tools today” to ensure AI systems will reliably follow human intentions. As tech bosses predict that AI will surpass human performance in almost all tasks as soon as next year, the industry faces a race—not against China or rival companies, but against time—to develop these essential safeguards. “We need to mobilize a lot more resources to solve these fundamental problems,” Ladish says. “I’m hoping that there’s a lot more pressure from the government to figure this out and recognize that this is a national security threat.”

Social Media Fails Many Users. Experts Have an Idea to Fix It

18 February 2025 at 22:15

Social media’s shortfalls are becoming more evident than ever. Most platforms have been designed to maximize user engagement as a means of generating advertising revenue—a model that exploits our worst impulses, rewarding sensational and provocative content while creating division and polarization, and leaving many feeling anxious and isolated in the process.

But things don’t have to be this way. A new paper released today by leading public thinkers, titled “Prosocial Media,” provides an innovative vision for how these ills can be addressed by redesigning social media to strengthen what one of its authors, renowned digital activist and Taiwan’s former minister of digital affairs Audrey Tang, calls “the connective tissue or civic muscle of society.” She and her collaborators—including the economist and Microsoft researcher Glen Weyl and Divya Siddarth, executive director of the Collective Intelligence Project—outline a bold plan that could foster coherence within and across communities, creating collective meaning and strengthening democratic health. The authors, who also include researchers from King’s College London, the University of Groningen, and Vanderbilt University, say it is a future worth steering towards, and they are in conversation with platforms including BlueSky to implement their recommendations.

Reclaiming context

A fundamental issue with today’s platforms—what the authors call “antisocial media”—is that while they have access to and profit from detailed information about their users, their behavior, and the communities in which they exist, users themselves have much less information. As a result, people cannot tell whether the content they see is widely endorsed or just popular within their narrow community. This often creates a sense of “false consensus,” where users think their beliefs are much more mainstream than they in fact are, and leaves people vulnerable to attacks by potentially malicious actors who wish to exacerbate divisions for their own ends. Cambridge Analytica, a political consulting firm, became an infamous example of the potential misuses of such data when the company used improperly obtained Facebook data to psychologically profile voters for electoral campaigns. 

The solution, the authors argue, is to explicitly label content to show what community it originated from, and how strongly it is believed within and across different communities. “We need to expose that information back to the communities,” says Tang.

Read more: Inside Audrey Tang’s Plan to Align Technology with Democracy 

For example, a post about U.S. politics could be widely believed within one subcommunity, but divisive among other subcommunities. Labels attached to the post, which would be different for each user depending on their personal community affiliations, would indicate whether the post was consensus or controversial, and allow users to go deeper by following links that show what other communities are saying. Exactly how this looks in terms of user interface would be up to the platforms. While the authors stop short of a full technical specification, they provide enough detail for a platform engineer to draw on and adapt for their specific platforms.
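The paper leaves the exact computation of these labels to platform builders. As a purely illustrative sketch, and not the authors’ specification, per-community endorsement rates could be aggregated along the following lines; the reaction data model, thresholds, and label names are all assumptions.

```python
# Illustrative sketch: label a post per community from endorsement data.
# Thresholds and label names are assumptions, not from the "Prosocial Media" paper.
from collections import defaultdict

def community_labels(reactions, high=0.75, low=0.4):
    """reactions: iterable of (community, endorsed: bool) pairs for one post.
    Returns {community: "consensus" | "controversial" | "contested"}."""
    tallies = defaultdict(lambda: [0, 0])  # community -> [endorsements, total]
    for community, endorsed in reactions:
        tallies[community][0] += int(endorsed)
        tallies[community][1] += 1

    labels = {}
    for community, (yes, total) in tallies.items():
        rate = yes / total
        if rate >= high:
            labels[community] = "consensus"
        elif rate <= low:
            labels[community] = "contested"
        else:
            labels[community] = "controversial"
    return labels

# A post endorsed almost unanimously in community "A" but split in "B" would be
# shown to each viewer with the label for their own affiliations, plus links outward.
print(community_labels([("A", True), ("A", True), ("A", True), ("B", True), ("B", False)]))
```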

Weyl explains the goal is to create transparency about what social structures people are participating in, and about how “the algorithm is pushing them in a direction, so they have agency to move in a different direction, if they choose.” He and his co-authors draw on enduring standards of press freedom and responsibility to distinguish between “bridging” content, which highlights areas of agreement across communities, and “balancing” content, which surfaces differing perspectives, including those that represent divisions within a community, or underrepresented viewpoints.

A new business model

The proposed redesign also requires a new business model. “Somebody’s going to be paying the bills and shaping the discourse—the question is who, or what?” says Weyl. In the authors’ model, discourse would be shaped at the level of the community. Users can pay to boost bridging and balancing content, increasing its ranking (and thus how many people see it) within their communities. What they can’t do, Weyl explains, is pay to uplift solely divisive content. The algorithm enforces balance: a payment to boost content that is popular with one group will simultaneously surface counterbalancing content from other perspectives. “It’s a lot like a newspaper or magazine subscription in the world of old,” says Weyl. “You don’t ever have to see anything that you don’t want to see. But if you want to be part of broader communities, then you’ll get exposed to broader content.”
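How that enforcement might work inside a ranker is not spelled out in the article above; the following sketch is one assumed interpretation, in which every paid boost is paired with an equal boost to the strongest item favored by a different community.

```python
# Illustrative sketch of "enforced balance" (an assumed interpretation, not the
# paper's algorithm): a paid boost to one community's content also lifts the
# best-scoring post favored by some other community.
def apply_balanced_boost(scores, favored_by, boosted_id, amount):
    """scores: {post_id: ranking score}; favored_by: {post_id: community}."""
    scores[boosted_id] = scores.get(boosted_id, 0.0) + amount

    boosted_community = favored_by[boosted_id]
    others = [pid for pid, comm in favored_by.items()
              if comm != boosted_community and pid != boosted_id]
    if others:
        counter_id = max(others, key=lambda pid: scores.get(pid, 0.0))
        scores[counter_id] = scores.get(counter_id, 0.0) + amount
    return scores
```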

This could lead to communities many would disapprove of—such as white supremacists—arriving at a better understanding of what their members believe and where they might disagree, creating common ground, says Weyl. He argues that this is “reasonable and even desirable,” because producing clarity on a community’s beliefs, internal controversies, and limits “gives the rest of society an understanding of where they are.”

In some cases, a community may be explicitly defined, as with how LinkedIn links people through organization affiliation. In others, communities may be carved up algorithmically, leaving users to name and define them. “Community coherence is actually a common good, and many people are willing to pay for that,” says Tang, arguing that individuals value content that creates shared moments of togetherness of the kind induced by sports games, live concerts, or Super Bowl ads. At a time when people have complex multifaceted identities that may be in tension, this coherence could be particularly valuable, says Tang. “My spiritual side, my professional side—if they’re tearing me apart, I’m willing to pay to sponsor content that brings them together.”

Advertising still has a place in this model: advertisers could pay to target communities, rather than individuals, again emulating the collective viewing experiences provided by live TV, and allowing brands to define themselves to communities in a way personalized advertising does not permit. 

Instantiating a grand vision

There are both financial and social incentives for platforms to adopt features of this flavour, and some examples already exist. The platform X (formerly Twitter) has a “community notes” feature, for example, that allows certain users to leave notes on content they think could be misleading, the accuracy of which other users can vote on. Only notes that receive upvotes from a politically diverse set of users are prominently displayed. But Weyl argues platform companies are motivated by more than just their bottom line. “What really influences these companies is not the dollars and cents, it’s what they think the future is going to be like, and what they have to do to get a piece of it,” he says. The more social platforms are tweaked in this direction, the more other platforms may also want in.

These potential solutions come at a transitional moment for social media companies. With Meta recently ending its fact-checking program and overhauling its content moderation policies—including reportedly moving to adopt community notes-like features—TikTok’s precarious ownership position, and Elon Musk’s control over the X platform, the foundations on which social media was built appear to be shifting. The authors argue that platforms should experiment with building community into their design: productivity platforms such as LinkedIn could seek to boost bridging and balancing content to increase productivity; platforms like X, where there is more political discourse, could experiment with different ways of displaying community affiliation; and cultural platforms like TikTok could trial features that let users curate their community membership. The Project Liberty Institute, where Tang is a senior fellow, is investing in X competitor BlueSky’s ecosystem to strengthen freedom of speech protections.

While it’s unclear what elements of the authors’ vision may be taken up by the platforms, their goal is ambitious: to redesign platforms to foster community cohesion, allowing them to finally deliver on their promise of creating genuine connection, rather than further division.

Huawei’s Tri-Foldable Phone Hits Global Markets in a Show of Defiance Amid U.S. Curbs

18 February 2025 at 10:21

KUALA LUMPUR, Malaysia — Huawei on Tuesday held a global launch for the industry’s first tri-foldable phone, which analysts said marked a symbolic victory for the Chinese tech giant amid U.S. technology curbs. But challenges over pricing, longevity, supply and app constraints may limit its success.

Huawei said at a launch event in Kuala Lumpur that the Huawei Mate XT, first unveiled in China five months ago, will be priced at 3,499 euros ($3,662). Although dubbed a trifold, the phone has three mini-panels and folds only twice. The company says it’s the thinnest foldable phone at 3.6 millimeters (0.14 inches), with a 10.2-inch screen similar to an Apple iPad.

“Right now, Huawei kind of stands alone as an innovator” with the trifold design, said Bryan Ma, vice president of device research with the market intelligence firm International Data Corporation.

Huawei reached the position despite “not getting access to chips, to Google services. All these things basically have been huge roadblocks in front of Huawei,” Ma said, adding that the “resurgence we’re seeing from them over the past year has been quite a bit of a victory.”

Huawei, China’s first global tech brand, is at the center of a U.S.-China battle over trade and technology. Washington in 2019 severed Huawei’s access to U.S. components and technology, including Google’s music and other smartphone services, making Huawei’s phone less appealing to users. It has also barred global vendors from using U.S. technology to produce components for Huawei.

American officials say Huawei is a security risk, which the company denies. China’s government has accused Washington of misusing security warnings to contain a rising competitor to U.S. technology companies.

Huawei launched the Mate XT in China on Sept. 20 last year, the same day Apple launched its iPhone 16 series in global markets. But with its steep price tag, the Mate XT “is not a mainstream product that people are going to jump for,” Ma said.

At the Kuala Lumpur event, Huawei also unveiled its MatePad Pro tablet and Free Arc, its first open-ear earbuds with ear hooks and other wearable devices.

While Huawei’s cutting-edge devices showcase its technological prowess, its long-term success remains uncertain given ongoing challenges over global supply chain constraints, chip availability and limitations on the software ecosystem, said Ruby Lu, an analyst with the research firm TrendForce.

“System limitations, particularly the lack of Google Mobile Services, means its international market potential remains constrained,” Lu said.

IDC’s Ma said Huawei dominated the foldable phone market in China with 49% market share last year. In the global market, it had 23% market share, trailing behind Samsung’s 33% share in 2024, he said. IDC predicted that total foldable phone shipments worldwide could surge to 45.7 million units by 2028, from over 20 million last year.

While most major brands have entered the foldable segments, Lu said Apple has yet to release a competing product.

“Once Apple enters the market, it is expected to significantly influence and stimulate further growth in the foldable phone sector,” Lu added.

DeepSeek Not Available for Download in South Korea as Authorities Address Privacy Concerns

17 February 2025 at 05:00
Screens display web pages of the Chinese AI DeepSeek in Goyang, South Korea, on Feb. 17, 2025.

SEOUL, South Korea — DeepSeek, a Chinese artificial intelligence startup, has temporarily paused downloads of its chatbot apps in South Korea while it works with local authorities to address privacy concerns, South Korean officials said Monday.

South Korea’s Personal Information Protection Commission said DeepSeek’s apps were removed from the local versions of Apple’s App Store and Google Play on Saturday evening and that the company agreed to work with the agency to strengthen privacy protections before relaunching the apps.

Read More: Is the DeepSeek Panic Overblown?

The action does not affect users who have already downloaded DeepSeek on their phones or use it on personal computers. Nam Seok, director of the South Korean commission’s investigation division, advised South Korean users of DeepSeek to delete the app from their devices or avoid entering personal information into the tool until the issues are resolved.

DeepSeek got worldwide attention last month when it claimed it built its popular chatbot at a fraction of the cost of those made by U.S. companies. The resulting frenzy upended markets and fueled debates over competition between the U.S. and China in developing AI technology.

Read More: DeepSeek and ChatGPT Answer Sensitive Questions About China Differently

Many South Korean government agencies and companies have either blocked DeepSeek from their networks or prohibited employees from using the app for work, amid worries that the AI model was gathering too much sensitive information.

The South Korean privacy commission, which began reviewing DeepSeek’s services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, Nam said.

Nam said the commission did not have an estimate on the number of DeepSeek users in South Korea. A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.

What Changes to the CHIPS Act Could Mean for AI Growth and Consumers

16 February 2025 at 18:55

LOS ANGELES — Even as he’s vowed to push the United States ahead in artificial intelligence research, President Donald Trump’s threats to alter federal government contracts with chipmakers and slap new tariffs on the semiconductor industry may put new speed bumps in front of the tech industry.

Since taking office, Trump has said he would place tariffs on foreign production of computer chips and semiconductors in order to return chip manufacturing to the U.S. The president and Republican lawmakers have also threatened to end the CHIPS and Science Act, a sweeping Biden administration-era law that also sought to boost domestic production.

But economic experts have warned that Trump’s dual-pronged approach could slow, or potentially harm, the administration’s goal of ensuring that the U.S. maintains a competitive edge in artificial intelligence research.

Saikat Chaudhuri, an expert on corporate growth and innovation at U.C. Berkeley’s Haas School of Business, called Trump’s derision of the CHIPS Act surprising because one of the biggest bottlenecks for the advancement of AI has been chip production. Most countries, Chaudhuri said, are trying to encourage chip production and the import of chips at favorable rates.

“We have seen what the shortage has done in everything from AI to even cars,” he said. “In the pandemic, cars had to make do with fewer or less powerful chips in order to just deal with the supply constraints.”

The Biden administration helped shepherd in the law after supply disruptions that occurred following the start of the COVID-19 pandemic — when a shortage of chips stalled factory assembly lines and fueled inflation — threatened to plunge the U.S. economy into recession. When pushing for the investment, lawmakers also said they were concerned about efforts by China to control Taiwan, which accounts for more than 90% of advanced computer chip production.

As of August 2024, the CHIPS and Science Act had provided $30 billion in support for 23 projects in 15 states that would add 115,000 manufacturing and construction jobs, according to the Commerce Department. That funding helped to draw in private capital and would enable the U.S. to produce 30% of the world’s most advanced computer chips, up from 0% when the Biden-Harris administration succeeded Trump’s first term.

The administration promised tens of billions of dollars to support the construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. In August, the Commerce Department pledged to provide up to $6.6 billion so that Taiwan Semiconductor Manufacturing Co. could expand the facilities it is already building in Arizona and better ensure that the most advanced microchips are produced domestically for the first time.

But Trump has said he believes that companies entering into those contracts with the federal government, such as TSMC, “didn’t need money” in order to prioritize chipmaking in the U.S.

“They needed an incentive. And the incentive is going to be they’re not going to want to pay at 25, 50 or even 100% tax,” Trump said.

TSMC held board meetings for the first time in the U.S. last week. Trump has signaled that if companies want to avoid tariffs they have to build their plants in the U.S. — without help from the government. Taiwan also dispatched two senior economic affairs officials to Washington to meet with the Trump administration in a bid to potentially fend off a 100% tariff Trump has threatened to impose on chips.

If the Trump administration does levy tariffs, Chaudhuri said, one immediate concern is that prices of goods that use semiconductors and chips will rise because the higher costs associated with tariffs are typically passed to consumers.

“Whether it’s your smartphone, whether it’s your gaming device, whether it’s your smart fridge — probably also your smart features of your car — anything and everything we use nowadays has a chip in it,” he said. “For consumers, it’s going to be rather painful. Manufacturers are not going to be able to absorb that.”

Even tech giants such as Nvidia will eventually feel the pain of tariffs, he said, despite their margins being high enough to absorb costs at the moment.

“They’re all going to be affected by this negatively,” he said. “I can’t see anybody benefiting from this except for those countries who jump on the bandwagon competitively and say, ‘You know what, we’re going to introduce something like the CHIPS Act.’”

Broadly based tariffs would be a shot in the foot of the U.S. economy, said Brett House, a professor of professional practice at Columbia Business School. Tariffs would not only raise the costs for businesses and households across the board, he said — for the U.S. AI sector, they would massively increase the costs of one of their most important inputs: high-powered chips from abroad.

“If you cut off, repeal or threaten the CHIPS Act at the same time as you’re putting in broadly based tariffs on imports of AI and other computer technology, you would be hamstringing the industry acutely,” House said.

Such tariffs would reduce the capacity to create a domestic chip-building sector, sending a signal for future investments that the policy outlook is uncertain, he said. That would in turn have a chilling effect on new allocations of capital to the industry in the U.S. while making the existing flow of imported chips more expensive.

“American technological industrial leadership has always been supported by maintaining openness to global markets and to immigration and labor flows,” he said. “And shutting that openness down has never been a recipe for American success.”

—Associated Press writers Josh Boak and Didi Tang in Washington contributed to this report.

Why Amazon Web Services CEO Matt Garman Is Playing the Long Game on AI

16 February 2025 at 12:00
AWS CEO Matt Garman

Matt Garman took the helm at Amazon Web Services (AWS), the cloud computing arm of the U.S. tech giant, in June, but he joined the business around 19 years ago as an intern. He went on to become AWS’s first product manager and helped to build and launch many of its core services, before eventually becoming the CEO last year.

Like many other tech companies, AWS, which is Amazon’s most profitable unit, is betting big on AI. In April 2023, the company launched Amazon Bedrock, which gives cloud customers access to foundation models built by AI companies including Anthropic and Mistral. At its re:Invent conference in Las Vegas in December, AWS made a series of announcements, including a new generation of foundation AI models, called Nova. It also said that it’s building one of the world’s most powerful AI supercomputers with Anthropic, which it has a strategic partnership with, using a giant cluster of AWS’s Trainium 2 training chips.
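For readers who have not used Bedrock, the sketch below shows roughly what calling a hosted foundation model looks like with the AWS SDK for Python (boto3). The region, model ID, and prompt are illustrative assumptions, and the request body schema differs by model provider.

```python
# Minimal sketch of an Amazon Bedrock call via boto3; model ID, region, and the
# Anthropic-style request body are example values, not a recommendation.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": "Summarize this support ticket."}]}
        ],
    }),
)

payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```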

TIME spoke with Garman a few days after the re:Invent conference, about his AI ambitions, how he’s thinking about ensuring the technology is safe, and how the company is balancing its energy needs with its emissions targets.

This interview has been condensed and edited for clarity.

When you took over at AWS in June, there was a perception that Amazon had fallen behind somewhat in the AI race. What have your strategic priorities been for the business over the past few months?

We’ve had a long history of doing AI inside of AWS, and in fact, most of the most popular AI services that folks use, like SageMaker, for the last decade have all been built on AWS. With generative AI we started to really lean in, and particularly when ChatGPT came out, I think everybody was excited about that, and it sparked everyone’s imagination. We [had] been working on generative AI, actually, for a little while before that. And our belief at the time, and it still remains now, was that AI was going to be a transformational technology for every single industry and workflow and user experience that’s out there. And because of who our customer base is, our strategy was always to build a robust, secure, performant, featureful platform that people could really integrate into their actual businesses. And so we didn’t rush really quickly to throw a chatbot up on our website. We really wanted to help people build a platform that could deeply integrate into their data, that would protect their data. That’s their IP, and it’s super important for them, so [we] had security front of mind, and gave you choice across a whole bunch of models, gave you capabilities across a whole bunch of things, and really helped you build into your application and figure out how you could actually get inference and really leverage this technology on an ongoing basis as a key part of what you do in your enterprise. And so that’s what we’ve been building for the last couple of years. In the last year we started to see people realize that that is what they wanted to [do], and as companies started moving from launching a hundred proof of concepts to really wanting to move to production, they realized that the platform is what they needed. They had to be able to leverage their data. They wanted to customize models. They wanted to use a bunch of different models. They wanted to have guardrails. They needed to integrate with their own enterprise data sources, a lot of which lived on AWS, and so their applications were [on] AWS.

We took the long-term view: build the right platform, with the right security controls and the right capabilities, so that enterprises could build for the long term, as opposed to [trying to] get something out quickly. And so we were willing to accept the perception that people thought we were behind, because we had the conviction that we were building the right thing. And I think our customers largely agree.
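
To make the "platform you build into your application" idea concrete, here is a minimal sketch of what calling a foundation model through Bedrock looks like from a customer's code, using the standard boto3 bedrock-runtime client. The region, model ID, and prompt are illustrative placeholders rather than details from the interview, and the request payload shape varies by model family.

```python
# Minimal sketch: invoking a foundation model through Amazon Bedrock with boto3.
# Region, model ID, and prompt are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic models on Bedrock use the "messages" request format.
request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user",
         "content": "Summarize the customer support ticket below in two sentences."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    body=json.dumps(request_body),
)

# The response body is a streaming blob; read it and decode the JSON payload.
payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```

Swapping the modelId string is, in essence, how a customer exercises the model choice Garman describes; the surrounding application code stays the same.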

You’re offering $1 billion in cloud credits for startups, on top of the millions offered previously. Do you see that opening up opportunities for closer tie-ups at an earlier stage with the next Anthropic or OpenAI?

Yeah, we’ve long invested in startups. It’s one of the core customer bases that AWS has built our business on. We view startups as important to the success of AWS. They give us a lot of great insight. They love using cutting-edge technologies. They give us feedback on how we can improve our products. And frankly, they’re the enterprises of tomorrow, so we want them to start building on AWS. And so from the very earliest days of AWS, startups have been critically important to us, and that’s just doubling down on our commitment to them to help them get going. We recognize that as a startup, getting some help early on, before you get your business going, can make a huge difference. That’s one of the things that we think helps us build that positive flywheel with that customer base. So we’re super excited about continuing to work deeply with startups, and that commitment is part of that. 

You’re also building one of the largest AI supercomputers in the world, with the Trainium 2 chips. Is building the hardware and infrastructure for AI development at the center of your AI strategy? 

It’s a core part of it, for sure. We have this idea that, across all of our AWS businesses, choice is incredibly important for our customers. We want them to be able to choose from the very best technology, whether it comes from us or from third parties. Customers can pick the absolute best product for their application and for their use case and for what they’re looking for from a cost-performance trade-off. And so, on the AI side, we want to provide that same amount of choice. Building Trainium 2, which is our second generation of high-performance AI chip, we think that’s going to provide choice.

Nvidia is an incredibly important partner of ours. Today, the vast majority of AI workloads run on Nvidia technology, and we expect that to continue for a very long time. They make great products, and the team executes really well. And we’re really excited about the choice that Trainium 2 brings. Cost is one of the things that a lot of people worry about when they think about some of these AI workloads, and we think that Trainium 2 can help lower the cost for a lot of customers. And so we’re really excited about that, both for AI companies that are looking to train on these massive clusters. Anthropic, [for example], is going to be training their next-generation, industry-leading model on Trainium 2—we’re building a giant cluster for them, five times the size of their last cluster. But there’s also the broad swath of folks that are doing inference or using Bedrock or making smaller clusters; I think there’s a good opportunity for customers to lower costs with Trainium.

Those clusters are 30% to 40% cheaper than Nvidia GPU clusters. What technical innovations are enabling these cost savings?

Number one is that the team has done a fantastic job and produced a really good chip that performs really well. And so on an absolute basis, it gives better performance for some workloads. It’s very workload-dependent, but even Apple [says] in early testing, they see up to a 50% price-performance benefit. That’s massive, if you can really get 30%, 40%, even 50% gains. And some of that is pricing, where we focused on building a chip whose cost to produce we think we can really materially lower for customers. But it’s also about increasing performance—where we see bottlenecks in AI training and inference, the team has built innovations into the chips to improve performance for those particular functions, etc. There are probably hundreds of thousands of things that go into delivering that type of performance, but we’re quite excited about it and we’re invested long term in the Trainium line.

The company recently announced the Nova foundation model. Is that aimed at competing directly with the likes of GPT-4 and Gemini?

Yes. We think it’s important to have choice in the realm of these foundational models. Is it a direct competitor? We do think that we can deliver differentiated capabilities and performance. I think that this is such a big opportunity, and has such material potential to change so many different workloads. These really large foundational models—I think there’ll be half a dozen to a dozen of them, probably fewer than 10. And I think they’ll each be good at different things. [With] our Nova models, we focused on: how do we deliver really low latency [and] great price performance? They’re actually quite good at doing RAG [Retrieval-Augmented Generation] and agentic workflows. There are some other models that are better at other things today too. We’ll keep pushing on it. I think there’s room for a number of them, but we’re very excited about the models and the customer reception has been really good.

How does your partnership with Anthropic fit into this strategy?

I think they have one of the strongest AI teams in the world. They have the leading model in the world right now. I think most people consider Sonnet to be the top model for reasoning and for coding and for a lot of other things as well. We get a lot of great feedback from customers on them. So we love that partnership, and we learn a lot from them too, as they build their models on top of Trainium, so there’s a nice flywheel benefit where we get to learn from them, building on top of us. Our customers get to take advantage of leveraging their models inside of Bedrock, and we can grow the business together.

How are you thinking about ensuring safety and responsibility in the development of AI?

It’s super important. And it goes up and down the stack. One of the reasons why customers are excited about models from us, in addition to them being very performant, is that we care a ton about safety. And so there are a couple of things. One is, you have to start from the beginning when you’re building the models: you think about how you have as many controls in there as possible, and how you have safe development of the models. And then I think you need belt and suspenders in this space, because you can, of course, make models say things that you can then say, “oh, look what they said.” Practically speaking, our customers are trying to integrate these into their applications. And beyond something like producing a recipe for a bomb, which we definitely want to have security controls around, safety and control of models actually extends to very specific use cases. If you’re building an insurance application, you don’t want your application to give out healthcare advice, whereas, if you’re building a healthcare one, you may. So we give a lot of controls to the customers so that they can build guardrails around the responses from models, to really help guide how they want models to answer those questions. We launched a number of enhancements at re:Invent, including what we call automated reasoning checks, which can actually give you a mathematical proof of whether we can be 100% sure that an answer coming back is correct, based on the corpus of data that you have fed into the model. Eliminating hallucinations for a subset of answers is also super important. What’s unsafe in the context of a customer’s application can vary pretty widely, and so we try to give some really good controls for customers to be able to define that, because it’s going to depend on the use cases.
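
For a sense of what giving customers "controls to define that" can look like in practice, the sketch below defines a use-case-specific guardrail with the boto3 bedrock control-plane client. The names, topic definition, and messages are hypothetical, and the exact parameter set may differ from the current SDK, so treat it as an illustration of the shape rather than a recipe.

```python
# Illustrative sketch: a customer-defined, use-case-specific Bedrock guardrail.
# All names, topic definitions, and messages here are hypothetical examples.
import boto3

bedrock = boto3.client("bedrock")  # control-plane client, not bedrock-runtime

guardrail = bedrock.create_guardrail(
    name="insurance-assistant-guardrail",
    description="Blocks medical advice in an insurance-facing assistant.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "medical-advice",
                "definition": "Requests for diagnosis, treatment, or other healthcare advice.",
                "examples": ["What medication should I take for back pain?"],
                "type": "DENY",
            }
        ]
    },
    # Messages returned when an input or a model response trips the guardrail.
    blockedInputMessaging="I can't help with medical questions.",
    blockedOutputsMessaging="I can't provide medical advice.",
)

print(guardrail["guardrailId"], guardrail["version"])
```

An insurance application and a healthcare application could then reference different guardrails against the same underlying models, which is the point Garman makes about safety depending on the use case.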

Energy requirements are a huge challenge for this business. Amazon is committed to a net zero emissions target by 2040 and you reported some progress there. How are you planning to continue reducing emissions while investing in large-scale infrastructure for AI?

Number one is you just have to have that long-term view as to how we ensure that the world has enough carbon-zero power. We’ve been the single biggest purchaser of renewable energy, signing new energy deals that add to the grid, commissioning new solar farms, wind farms, etc. We’ve been the biggest corporate purchaser in each of the last five years, and will continue to do that. Even on that path, that may not be fast enough, and so we’ve actually started investing in nuclear. I do think that’s an important component. It’ll be part of that portfolio. It can be both large-scale nuclear plants as well as small modular reactor technology, which we’ve invested in and are very bullish about, and which is probably six or seven years out from really being in mass production. But we’re optimistic that that can be another solution as part of that portfolio as well.

On the path to carbon zero across the whole business, there’s a lot of invention that still needs to happen. And I won’t sit here and tell you we know all of the answers for how you’re going to have carbon-zero shipping across oceans and airplanes for the retail side of it. There’s a whole bunch of challenges that the world has to go after, but that’s part of why we made that commitment. We’re putting together plans with milestones along the way, because it’s an incredibly important target for us. There’s a lot of work to do but we’re committed to doing it.

And as part of that nuclear piece, you’re supporting the development of these nuclear energy projects. What are you doing to ensure that the projects are safe in the communities where they’re deployed?

Look, I actually think one of the worst things for the environment was the mistakes the nuclear industry made back in the ’50s, because it made everyone feel like the technology wasn’t that safe, which it may not have been way back then. But it’s been 70 years, the technology has evolved, and it is actually an incredibly safe, secure technology now. A lot of these designs are fully self-contained, and there is no risk of a big meltdown or those kinds of events that happened before. It’s a super safe technology that has been well tested and has been in production across the world safely for multiple decades now. There’s still some fear, I think, from people, but, actually, increasingly, many geographies are realizing it’s quite a safe technology.

What do you want to see in terms of policy from the new presidential administration?

We consider the U.S. government to be one of our most important customers; we support them up and down the board and will continue to do so. So we’re very excited, and we know many of those folks and are excited to continue to work on that mission together, because we do view it as a mission. It’s a good business for us, but it’s also an ability to help our country move faster, to control costs, to be more agile. And I think it’s super important, as you think about where the world is going, for our government to have access to the latest technologies. I do think AI and technology are increasingly becoming an incredibly important part of our national defense, probably as much so as guns and other things like that, and so we take that super seriously, and we’re excited to work with the administration. I’m optimistic that President Trump and his administration can help us loosen some of the restrictions so we can build data centers faster. I’m hopeful that they can help us cut through some of that bureaucratic red tape and move faster. I think that’ll be important, particularly as we want to maintain the AI lead for the U.S. ahead of China and others.

What have you learned about leadership over the course of your career?

We’re fortunate at Amazon to be able to attract some of the most talented, most driven leaders and employees in the world, and I’ve been fortunate enough to get to work with some of those folks [and] to try to clear barriers for them so that they can go deliver outstanding results for our customers. I think if we have a smart team that is really focused on solving customer problems, versus growing their own scope of responsibility or internal goals, [and] if you can get those teams focused on that and get barriers out of their way and remove obstacles, then we can deliver a lot. And so that’s largely my job. I view myself as not the expert in any one particular thing. Everyone on my team is usually better at whatever we’re trying to do than I am. And my job is to let them go do their job as much as possible, and occasionally connect dots for them on other parts of the company, other parts of the organization, or other customer input that they may not have, that they can integrate and incorporate.

You’ve worked closely with Andy Jassy, is there anything in particular that you’ve learned from watching him as a leader?

I’ve learned a ton. He’s an exceptional leader. Andy is very good at having very high standards and high expectations for the teams, and high standards for what we deliver for customers. He had a lot of the vision, together with some of the core folks who were starting AWS, of some important tenets of how we think about the business: focusing on security and operational excellence and really focusing on how we go deliver for customers.

What are your priorities for 2025?

Our first priority always is to maintain outstanding security and operational excellence. We want to help customers get ready for the AI transformation that’s going to happen. Part of that, though, is also helping get all of their applications in a place where they can take advantage of AI. So it’s a hugely important priority for us to help customers continue on that migration to the cloud, because if their data is stuck on premises in legacy data stores and other things, they won’t be able to take advantage of AI. So helping people modernize their data and analytics stacks, get those into the cloud, and get their data lakes organized in a way that they can really start to take advantage of AI is a big priority for us. And then it’s just: how do we help scale the AI capabilities and bring the cost down for customers, while [we] keep adding value? For 2025, our goal is for customers to move AI workloads into production and deliver great ROI for their businesses. That crosses making sure all their data is in the right place and making sure they have the right compute platforms. We think Trainium is going to be an important part of that. The last bit is helping add some applications on top. We think that we can add [the] extra benefit of helping employees and others get that effectiveness. Some of that is moving contact centers to the cloud. Some of that is helping get conversational assistants and AI assistants into the hands of employees, and so Amazon Q is a big part of that for us. And then it’s also just empowering our broad partner ecosystem to go fast and help customers evolve as well.

TikTok Returns to Apple and Google App Stores in the U.S. After Trump Delayed Ban

14 February 2025 at 07:30
Photo illustration of TikTok in app store and US flag

TikTok has returned to the app stores of Apple and Google in the U.S., after President Donald Trump delayed the enforcement of a TikTok ban.

TikTok, which is operated by Chinese technology firm ByteDance, was removed from Apple and Google’s app stores on Jan. 18 to comply with a law that requires ByteDance to divest the app or be banned in the U.S.


A Google spokesperson declined to comment on the company’s move on Friday. Apple did not immediately respond to an email seeking comment.

Read More: How Google Appears to Be Adapting Its Products to the Trump Presidency

The popular social media app, which has over 170 million American users, previously suspended its services in the U.S. for less than a day before restoring service following assurances from Trump that he would postpone banning the app. The TikTok service suspension briefly prompted thousands of users to migrate to RedNote, a Chinese social media app, while calling themselves “TikTok refugees.”

The TikTok app became available to download again in the U.S. Apple App Store and Google Play Store after nearly a month. On his first day in office, Trump signed an executive order delaying enforcement of the TikTok ban until April 5.

TikTok has long faced troubles in the U.S., with the U.S. government claiming that its Chinese ownership and access to the data of millions of Americans make it a national security risk.

TikTok has denied allegations that it has shared U.S. user data at the behest of the Chinese government, and argued that the law requiring it to be divested or banned violates the First Amendment rights of its American users.

Read More: Who Might Buy TikTok? From MrBeast to Elon Musk, Here Are the Top Contenders

During Trump’s first term in office, he supported banning TikTok but later changed his mind, claiming that he had a “warm spot” for the app. TikTok CEO Shou Chew was among the attendees at Trump’s inauguration ceremony.

Trump has suggested that TikTok could be jointly owned, with half of its ownership being American. Potential buyers include real estate mogul Frank McCourt, Shark Tank investor Kevin O’Leary and popular YouTuber Jimmy Donaldson, also known as MrBeast.

—Zen Soo reported from Hong Kong. AP writer Haleluya Hadero contributed to this story.

Elon Musk Calls for U.S. to ‘Delete Entire Agencies’ From the Federal Government

13 February 2025 at 07:30
Head of the Department of Government Efficiency and CEO of SpaceX, Tesla, and X Elon Musk makes a speech via video-conference during the World Government Summit 2025 in Dubai, United Arab Emirates, on Feb. 13, 2025.

DUBAI, United Arab Emirates — Elon Musk called on Thursday for the United States to “delete entire agencies” from the federal government as part of his push under President Donald Trump to radically cut spending and restructure its priorities.


Speaking via videocall to the World Governments Summit in Dubai, United Arab Emirates, Musk offered a wide-ranging survey of what he described as the priorities of the Trump administration, interspersed with multiple references to “thermonuclear warfare” and the possible dangers of artificial intelligence.

“We really have here rule of the bureaucracy as opposed to rule of the people—democracy,” Musk said, wearing a black T-shirt that read: “Tech Support.” He also joked that he was the “White House’s tech support,” borrowing from his profile on the social platform X, which he owns.

Read More: State Department Removes Tesla’s Name From Planned $400M Contract Amid Musk Scrutiny

“I think we do need to delete entire agencies as opposed to leave a lot of them behind,” Musk said. “If we don’t remove the roots of the weed, then it’s easy for the weed to grow back.”

While Musk has spoken to the summit in the past, his appearance on Thursday comes as he has consolidated control over large swaths of the government with Trump’s blessing since assuming leadership of the Department of Government Efficiency. That’s included sidelining career officials, gaining access to sensitive databases and inviting a constitutional clash over the limits of presidential authority.

Musk’s new role gave his comments weight beyond what he already carries as the world’s wealthiest person, a fortune built on his investments in SpaceX and electric carmaker Tesla.

His remarks also offered a more-isolationist view of American power in the Middle East, where the U.S. has fought wars in both Afghanistan and Iraq since the Sept. 11, 2001, terror attacks.

“A lot of attention has been on USAID for example,” Musk said, referring to Trump’s dismantling of the U.S. Agency for International Development. “There’s like the National Endowment for Democracy. But I’m like, ‘Okay, well, how much democracy have they achieved lately?’”

Read More: Inside the Chaos, Confusion, and Heartbreak of Trump’s Foreign-Aid Freeze

He added that the U.S. under Trump is “less interested in interfering with the affairs of other countries.”

There are “times the United States has been kind of pushy in international affairs, which may resonate with some members of the audience,” Musk said, speaking to the crowd in the UAE, an autocratically ruled nation of seven sheikhdoms.

“Basically, America should mind its own business, rather than push for regime change all over the place,” he said.

He also noted the Trump administration’s focus on eliminating diversity, equity and inclusion work, at one point linking it to AI.

“If hypothetically, AI is designed for DEI, you know, diversity at all costs, it could decide that there’s too many men in power and execute them,” Musk said.

Read More: What Is DEI and What Challenges Does It Face Amid Trump’s Executive Orders?

On AI, Musk said he believed X’s newly updated AI chatbot, Grok 3, would be ready in about two weeks, calling it at one point “kind of scary.”

He criticized Sam Altman’s management of OpenAI, for which Musk just led a $97.4 billion takeover bid, describing it as akin to a nonprofit aimed at saving the Amazon rainforest becoming a “lumber company that chops down the trees.” A court filing Wednesday on Musk’s behalf in the OpenAI dispute said he’d withdraw his bid if the ChatGPT maker drops its plan to convert into a for-profit company.

Musk also announced plans for a “Dubai Loop” project in line with his work at the Boring Company—which is digging tunnels in Las Vegas to speed transit.

A later statement from Dubai’s crown prince, Sheikh Hamdan bin Mohammed Al Maktoum, said the city-state and the Boring Company “will explore the development” of a 17-kilometer (10.5-mile) underground network with 11 stations that could transport over 20,000 passengers an hour. He offered no financial terms for the deal.

“It’s going to be like a wormhole,” Musk promised. “You just wormhole from one part of the city—boom—and you’re out in another part of the city.”

Digital Access Is Critical for Society, Say Industry Leaders

12 February 2025 at 22:53
World Governments Summit 2025

Improving connectivity can both benefit those who need it most and boost the businesses that provide the service. That’s the case telecom industry leaders made during a panel on Feb. 11 at the World Governments Summit in Dubai.


Titled “Can we innovate our way to a more connected world?”, the panel was hosted by TIME’s Editor-in-Chief Sam Jacobs. During the course of the conversation, Margherita Della Valle, CEO of U.K.-based multinational telecom company Vodafone Group, said, “For society today, connectivity is essential. We are moving from the old divide in the world between the haves and the have-nots towards a new divide, which is between those who have access to connectivity and those who don’t.”

The International Telecommunications Union, a United Nations agency, says that around 2.6 billion people—a third of the global population—don’t have access to the internet. Della Valle noted that of those unconnected people, 300 million live in remote areas that are too far from any form of connectivity infrastructure to get online. Satellites can help to bridge the gap, says Della Valle, whose company plans to launch its commercial direct-to-smartphone satellite service later this year in Europe.

Read More: Column: How We Connected One Billion Lives Through Digital Technology

While digital access is a social issue, companies don’t need to choose between what is best for consumers and what’s best for business, Hatem Dowidar, group CEO of UAE-based telecom company e&, formerly known as Etisalat Group, said. “At the end of the day,” he said, “in our telecom part of the business, when we connect people, [they’re] customers for us, it makes revenue, and we can build on it.” He noted that part of e&’s evolution toward becoming a tech company has involved enabling customers to access fintech, cybersecurity, and cloud computing services.

Mickey Mikitani, CEO of Japanese technology conglomerate Rakuten Group, advocated for a radical transformation of the telecommunications industry, calling the existing telecoms business model “obsolete and old.” Removing barriers to entry to the telecom sector, like the cost of accessing a wireless spectrum—the range of electromagnetic frequencies used to transmit wireless communications—may benefit customers and society more broadly, he said.

The panelists also discussed how artificial intelligence can improve connectivity, as well as the role of networks in supporting the technology’s use. Mikitani noted that his company has been using AI to help it manage networks efficiently with a fraction of the staff its competitors have. Della Valle added, “AI will need strong networks,” emphasizing that countries where networks have not received sufficient investment may struggle to support the technology.

Dowidar called on attendees at the summit from governments around the world to have a dialogue with industry leaders about legislation and regulations in order to overcome the current and potential challenges. Some of those hurdles include ensuring data sovereignty and security within borders, and enabling better training of AI in languages beyond English, he noted.

“It’s very important for everyone to understand the potential that can be unleashed by technology,” Dowidar said, emphasizing the need to train workforces. “AI is going to change the world.”

Safety Takes A Backseat At Paris AI Summit, As U.S. Pushes for Less Regulation

11 February 2025 at 21:35
Attendees at the AI Action Summit in Paris, France, on Monday, Feb. 10, 2025.

Safety concerns are out, optimism is in: that was the takeaway from a major artificial intelligence summit in Paris this week, as leaders from the U.S., France, and beyond threw their weight behind the AI industry. 

Although there were divisions between major nations—the U.S. and the U.K. did not sign a final statement endorsed by 60 nations calling for an “inclusive” and “open” AI sector—the focus of the two-day meeting was markedly different from the last such gathering. Last year, in Seoul, the emphasis was on defining red-lines for the AI industry. The concern: that the technology, although holding great promise, also had the potential for great harm. 


But that was then. The final statement made no mention of significant AI risks nor attempts to mitigate them, while in a speech on Tuesday, U.S. Vice President J.D. Vance said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.” 

The French leader and summit host, Emmanuel Macron, also trumpeted a decidedly pro-business message—underlining just how eager nations around the world are to gain an edge in the development of new AI systems. 

Once upon a time in Bletchley 

The emphasis on boosting the AI sector and putting aside safety concerns was a far cry from the first ever global summit on AI held at Bletchley Park in the U.K. in 2023. Called the “AI Safety Summit”—the French meeting in contrast was called the “AI Action Summit”—its express goal was to thrash out a way to mitigate the risks posed by developments in the technology. 

The second global gathering, in Seoul in 2024, built on this foundation, with leaders securing voluntary safety commitments from leading AI players such as OpenAI, Google, Meta, and their counterparts in China, South Korea, and the United Arab Emirates. The 2025 summit in Paris, governments and AI companies agreed at the time, would be the place to define red-lines for AI: risk thresholds that would require mitigations at the international level.

Paris, however, went the other way. “I think this was a real belly-flop,” says Max Tegmark, an MIT professor and the president of the Future of Life Institute, a non-profit focused on mitigating AI risks. “It almost felt like they were trying to undo Bletchley.”

Anthropic, an AI company focused on safety, called the event a “missed opportunity.”

The U.K., which hosted the first AI summit, said it had declined to sign the Paris declaration because of a lack of substance. “We felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it,” said a spokesperson for Prime Minister Keir Starmer.

Racing for an edge

The shift comes against the backdrop of intensifying developments in AI. In the month or so before the 2025 Summit, OpenAI released an “agent” model that can perform research tasks at roughly the level of a competent graduate student. 

Safety researchers, meanwhile, showed for the first time that the latest generation of AI models can try to deceive their creators, and copy themselves, in an attempt to avoid modification. Many independent AI scientists now agree with the projections of the tech companies themselves: that super-human level AI may be developed within the next five years—with potentially catastrophic effects if unsolved questions in safety research aren’t addressed.

Yet such worries were pushed to the back burner as the U.S., in particular, made a forceful argument against moves to regulate the sector, with Vance saying that the Trump Administration “cannot and will not” accept foreign governments “tightening the screws on U.S. tech companies.” 

He also strongly criticized European regulations. The E.U. has the world’s most comprehensive AI law, called the AI Act, plus other laws such as the Digital Services Act, which Vance called out by name as overly restrictive in how it polices misinformation on social media.

The new Vice President, who has a broad base of support among venture capitalists, also made clear that his political support for big tech companies did not extend to regulations that would raise barriers for new startups, thus hindering the development of innovative AI technologies. 

“To restrict [AI’s] development now would not only unfairly benefit incumbents in the space, it would mean paralysing one of the most promising technologies we have seen in generations,” Vance said. “When a massive incumbent comes to us asking for safety regulations, we ought to ask whether that safety regulation is for the benefit of our people, or whether it’s for the benefit of the incumbent.” 

And in a clear sign that concerns about AI risks are out of favor in President Trump’s Washington, he associated AI safety with a popular Republican talking point: the restriction of “free speech” by social media platforms trying to tackle harms like misinformation.

With reporting by Tharin Pillay/Paris and Harry Booth/Paris

J.D. Vance Rails Against ‘Excessive’ AI Regulation at Paris Summit

Key Speakers at the AI Action Summit in Paris

PARIS — U.S. Vice President J.D. Vance on Tuesday warned global leaders and tech industry executives that “excessive regulation” could cripple the rapidly growing artificial intelligence industry in a rebuke to European efforts to curb AI’s risks.

The speech underscored a widening, three-way rift over the future of the technology—one that critics warn could either cement human progress for generations or set the stage for its downfall.


The United States, under President Donald Trump, champions a hands-off approach to fuel innovation, while Europe is tightening the reins with strict regulations to ensure safety and accountability. Meanwhile, China is rapidly expanding AI through state-backed tech giants, vying for dominance in the global race.

The U.S. was noticeably absent from an international document signed by more than 60 nations, including China, making the Trump administration an outlier in a global pledge to promote responsible AI development. The United Kingdom also declined to sign the pledge.

Read More: Inside France’s Effort to Shape the Global AI Conversation

Vance’s debut

At the summit, Vance made his first major policy speech since becoming vice president last month, framing AI as an economic turning point but cautioning that “at this moment, we face the extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”

“But it will never come to pass if overregulation deters innovators from taking the risks necessary to advance the ball,” Vance added.

The 40-year-old vice president, leveraging the AI summit and a security conference in Munich later this week, is seeking to project Trump’s forceful new style of diplomacy.

The Trump administration will “ensure that AI systems developed in America are free from ideological bias,” Vance said and pledged the U.S. would “never restrict our citizens’ right to free speech.”

A global AI pledge—and the U.S. absence

The international document, signed by scores of countries, including European nations, pledged to “promote AI accessibility to reduce digital divides” and “ensure AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.” It also called for “making AI sustainable for people and the planet” and protecting “human rights, gender equality, linguistic diversity, consumer rights, and intellectual property.”

In a surprise move, China—long criticized for its human rights record—signed the declaration, further widening the distance between America and the rest in the tussle for AI supremacy.

The UK also declined to sign despite agreeing with much of the declaration because it “didn’t provide enough practical clarity on global governance,” said Tom Wells, a spokesman for Prime Minister Keir Starmer.

“We didn’t feel it sufficiently addressed broader questions around national security and the challenge that AI poses to it,” Wells said.

He insisted: “This is not about the U.S. This is about our own national interest, ensuring the balance between opportunity and security.”

A growing divide

Vance also took aim at foreign governments for “tightening the screws” on U.S. tech firms, saying such moves were troubling. His remarks underscored the growing divide between Washington and its European allies on AI governance.

The agreement comes as the E.U. enforces its AI Act, the world’s first comprehensive AI law, which took effect in August 2024.

European Commission President Ursula von der Leyen stressed that “AI needs the confidence of the people and has to be safe” and detailed E.U. guidelines intended to standardize the bloc’s AI Act but acknowledged concerns over regulatory burden.

“At the same time, I know that we have to make it easier and we have to cut red tape and we will,” she added.

She also announced that the “InvestAI” initiative had reached a total of €200 billion in AI investments across Europe, including €20 billion dedicated to AI gigafactories.

A race for AI dominance

The summit laid bare a global power struggle over AI—Europe wants strict rules and public funding, China is expanding state-backed AI, and the U.S. is going all-in on a free-market approach.

French President Emmanuel Macron pitched Europe as a “third way”—a middle ground that regulates AI without smothering innovation or relying too much on the U.S. or China.

“We want fair and open access to these innovations for the whole planet,” he said, calling for global AI rules. He also announced fresh investments across Europe to boost the region’s AI standing. “We’re in the race,” he declared.

China, meanwhile, is playing both sides: pushing for control at home while promoting open-source AI abroad.

Chinese Vice Premier Zhang Guoqing, speaking for President Xi Jinping, said Beijing wants to help set global AI rules. At the same time, Chinese officials slammed Western limits on AI access, and China’s DeepSeek chatbot has already triggered security concerns in the U.S. China argues open-source AI will benefit everyone, but critics see it as a way to spread Beijing’s influence.

With China and the U.S. in an AI arms race, Washington is also clashing with Europe.

Vance, a vocal critic of European tech rules, has floated the idea of the U.S. rethinking NATO commitments if Europe cracks down on Elon Musk’s social media platform, X. His Paris visit also included talks on Ukraine, AI’s growing role in global power shifts, and U.S.-China tensions.

How to regulate AI?

Concerns over AI’s potential dangers have loomed over the summit, particularly as nations grapple with how to regulate a technology that is increasingly entwined with defense and warfare.

“I think one day we will have to find ways to control AI or else we will lose control of everything,” said Admiral Pierre Vandier, NATO’s commander who oversees the alliance’s modernization efforts.

Beyond diplomatic tensions, a global public-private partnership is being launched called “Current AI,” aimed at supporting large-scale AI initiatives for the public good.

Analysts see this as an opportunity to counterbalance the dominance of private companies in AI development. However, it remains unclear whether the U.S. will support such efforts.

Separately, a high-stakes battle over AI power is escalating in the private sector.

A group of investors led by Musk—who now heads Trump’s Department of Government Efficiency—has made a $97.4 billion bid to acquire the nonprofit behind OpenAI. OpenAI CEO Sam Altman, attending the Paris summit, said it is “not for sale.”

Pressed on AI regulation, Altman also dismissed the need for further restrictions in Europe. But the head of San Francisco-based Anthropic, an OpenAI competitor, described the summit as a “missed opportunity” to more fully address the urgent global challenges posed by the technology.

“The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit,” said Anthropic CEO Dario Amodei in a written statement.

—AP writers Sylvie Corbet and Kelvin Chan in Paris contributed to this report.

How Google Appears to Be Adapting Its Products to the Trump Presidency

11 February 2025 at 09:00
Google Logo

Google was among the tech companies that donated $1 million to Donald Trump’s 2025 inauguration. It also, like many other companies, pulled back on its internal diversity hiring policies in response to the Trump Administration’s anti-DEI crackdown. And in early February, Google dropped its pledge not to use AI for weapons or surveillance, a move seen as paving the way for closer cooperation with Trump’s government.


Now, users of Google’s consumer products are noticing that a number of updates have been made—seemingly in response to the new administration—to everyday tools like Maps, Calendar, and Search.

Here’s what to know.

Google Maps renames Gulf of Mexico to Gulf of America

Among Trump’s first executive orders was a directive to rename the Gulf of Mexico to the Gulf of America and to restore Alaska’s Denali, the highest mountain peak in North America, to its former name, Mt. McKinley. Google announced on Jan. 27 that it would “quickly” update its maps accordingly, as soon as the federal Geographic Names Information System (GNIS) was updated. On Monday, Feb. 10, following changes around the same time by the Storm Prediction Center and Federal Aviation Administration, Google announced that, in line with its longstanding convention on naming disputed regions, U.S.-based users would now see “Gulf of America,” Mexican users would continue to see “Gulf of Mexico,” and users elsewhere would see “Gulf of Mexico (Gulf of America).”

As of Tuesday, Feb. 11, alternatives Apple Maps and OpenStreetMap still show “Gulf of Mexico.”

Google Calendar removes Pride, Black History Month, and other cultural holidays

Last week, some users noticed that Google removed certain default markers from its calendar, including Pride (June), Black History Month (February), Indigenous Peoples Month (November), and Hispanic Heritage Month (mid-September to mid-October). “Dear Google. Stop sucking up to Trump,” reads one comment on a Google Support forum about the noticed changes.

A Google spokesperson confirmed the removal of some holidays and observances to The Verge but said that such changes began in 2024 because “maintaining hundreds of moments manually and consistently globally wasn’t scalable or sustainable,” explaining that Google Calendar now defers to public holidays and national observances globally listed on timeanddate.com. But not everyone is buying the explanation: “These are lies by Google in order to please the American dictator,” wrote a commenter on another Google Support forum about the changes.

Google Search prohibits autocomplete for ‘impeach Trump’

Earlier this month, social media users also noticed that Google Search no longer suggests an autocomplete for “impeach Trump” when the beginning of the query is typed in the search box, Snopes reported. A Google spokesperson told the fact-checking site that the autocomplete suggestion was removed because the company’s “policies prohibit autocomplete predictions that could be interpreted as a position for or against a political figure. In this case, some predictions were appearing that shouldn’t have been, and we’re taking action to block them.” Google also recently removed predictions for “impeach Biden,” “impeach Clinton,” and others, the spokesperson added, though search results don’t appear to be altered.

How Elon Musk’s Anti-Government Crusade Could Benefit Tesla and His Other Businesses

The Inauguration Of Donald J. Trump As The 47th President

WASHINGTON — Elon Musk has long railed against the U.S. government, saying a crushing number of federal investigations and safety programs have stymied Tesla, his electric car company, and its efforts to create fleets of robotaxis and other self-driving automobiles.

Now, Musk’s close relationship with President Donald Trump means many of those federal headaches could vanish within weeks or months.


On the potential chopping block: crash investigations into Tesla’s partially automated vehicles; a Justice Department criminal probe examining whether Musk and Tesla have overstated their cars’ self-driving capabilities; and a government mandate to report crash data on vehicles using technology like Tesla’s Autopilot.

The consequences of such actions could prove dire, say safety advocates who credit the federal investigations and recalls with saving lives.

“Musk wants to run the Department of Transportation,” said Missy Cummings, a former senior safety adviser at the National Highway Traffic Safety Administration. “I’ve lost count of the number of investigations that are underway with Tesla. They will all be gone.”

Within days of Trump taking office, the White House and Musk began waging an unbridled war against the federal government—freezing spending and programs while sacking a host of career employees, including prosecutors and government watchdogs typically shielded from such brazen dismissals without cause.

The actions have sparked outcries from legal scholars who say the Trump administration’s actions are without modern-day precedent and are already upending the balance of power in Washington.

The Trump administration has not yet declared any actions that could benefit Tesla or Musk’s other companies. However, snuffing out federal investigations or jettisoning safety initiatives would be an easier task than their assault on regulators and the bureaucracy.

Investigations into companies like Tesla can be shut down overnight by the new leaders of agencies. And safety programs created through an agency order or initiative—not by laws passed by Congress or adopted through a formal regulatory process—can also be quickly dissolved by new leaders. Unlike many of the dismantling efforts that Trump and Musk have launched in recent weeks, stalling or killing such probes and programs would not be subject to legal challenges.

As such, the temporary and fragile nature of the federal probes and safety programs makes them easy targets for those seeking to weaken government oversight and upend long-established norms.

“Trump’s election, and the bromance between Trump and Musk, will essentially lead to the defanging of a regulatory environment that’s been stifling Tesla,” said Daniel Ives, a veteran Wall Street technology and automobile industry analyst.

Musk’s empire

Among Musk’s businesses, the federal government’s power over Tesla to investigate, order recalls, and mandate crash data reporting is perhaps the most wide-ranging. However, the ways the Trump administration could quickly ease up on Tesla also apply in some measure to other companies in Musk’s sprawling business empire.

A host of Musk’s other businesses—such as his aerospace company SpaceX and his social media company X—are subjects of federal investigations.

Musk’s businesses are also intertwined with the federal government, pocketing hundreds of millions of dollars each year in contracts. SpaceX, for example, has secured nearly $20 billion in federal funds since 2008 to ferry astronauts and satellites into space. Tesla, meanwhile, has received $41.9 million from the U.S. government, including payment for vehicles provided to some U.S. embassies.

Musk, Tesla’s billionaire CEO, has found himself in his newly influential position by enthusiastically backing Trump’s third bid for the White House. He was the largest donor to the campaign, plunging more than $270 million of his vast fortune into Trump’s political apparatus, most of it during the final months of the heated presidential race.

Those donations and his efforts during the campaign—including the transformation of his social media platform X into a firehose of pro-Trump commentary—have been rewarded by Trump, who has tapped the entrepreneur to oversee efforts to slash government regulations and spending.

Read More: Inside Elon Musk’s War on Washington

As the head of the Department of Government Efficiency, Musk operates out of an office in the Eisenhower Executive Office Building, where most White House staff work and from where he has launched his assault on the federal government. Musk’s power under DOGE is being challenged in the courts.

Even before Trump took office, there were signs that Musk’s vast influence with the new administration was registering with the public—and paying dividends for Tesla.

Tesla’s stock surged more than 60% by December. Since then, its stock price has dropped, but still remains 40% higher than it was before Trump’s election.

“For Musk,” said Ives, the technology analyst, “betting on Trump is a poker move for the ages.”

Proposed actions will help Tesla

The White House did not respond to questions about how it would handle investigations and government oversight involving Tesla or other Musk companies. A spokesman for the transition team said last month that the White House would ensure that DOGE and “those involved with it are compliant with all legal guidelines and conflicts of interest.”

In the weeks before Trump took office on Jan. 20, the president-elect’s transition team recommended changes that would benefit the billionaire and his car company, including scrapping the federal order requiring carmakers to report crash data involving self-driving and partially automated technology.

The action would be a boon for Tesla, which has reported a vast majority of the crashes that triggered a series of investigations and recalls.

The transition team also recommended shelving a $7,500 consumer tax credit for electric vehicle purchases, something Musk has publicly called for.

“Take away the subsidies. It will only help Tesla,” Musk wrote in a post on X as he campaigned and raised money for Trump in July.

Auto industry experts say the move would have a nominal impact on Tesla—by far the largest electric vehicle maker in the U.S.—but have a potentially devastating impact on its competitors in the EV sector since they are still struggling to secure a foothold in the market.

Musk did not respond to requests for comment. Before the election, he posted a message on X, saying he had never asked Trump “for any favors, nor has he offered me any.”

Although most of the changes that Musk might seek for Tesla could unfold quickly, there is one long-term goal that could impact the autonomous vehicle industry for decades to come.

Though nearly 30 states have rules that specifically govern self-driving cars, the federal government has yet to craft such regulations.

During a late October call with Tesla investors, as Musk was pouring hundreds of millions of dollars into Trump’s campaign, he signaled support for having the federal government create these rules.

“There should be a federal approval process for autonomous vehicles,” Musk said on the call. “If there’s a department of government efficiency, I’ll try to help make that happen.”

Musk leads that very organization.

Those affected by Tesla crashes worry about lax oversight

People whose lives have been forever changed by Tesla crashes fear that dangerous and fatal accidents may increase if the federal government’s investigative and recall powers are restricted.

They say they worry that the company may otherwise never be held accountable for its failures, like the one that took the life of 22-year-old Naibel Benavides Leon.

The college student was on a date with her boyfriend, gazing at the stars on the side of a rural Florida road, when they were struck by an out-of-control Tesla driving on Autopilot—a system that allows Tesla cars to operate without driver input. The car had blown through a stop sign, a flashing light and five yellow warning signs, according to dashcam video and a police report.

Benavides Leon died at the scene; her boyfriend, Dillon Angulo, suffered injuries but survived. A federal investigation determined that Autopilot in Teslas at the time was faulty and needed repairs.

“We, as a family, have never been the same,” said Benavides Leon’s sister, Neima. “I’m an engineer, and everything that we design and we build has to be by important codes and regulations. This technology cannot be an exception.”

“It has to be investigated when it fails,” she added. “Because it does fail.”

Tesla’s lawyers did not respond to requests for comment. In a statement on Twitter in December 2023, Tesla pointed to an earlier lawsuit the Benavides Leon family had brought against the driver who struck the college student. He testified that despite using Autopilot, “I was highly aware that it was still my responsibility to operate the vehicle safely.”

Tesla also said the driver “was pressing the accelerator to maintain 60 mph,” an action that effectively overrode Autopilot, which would have otherwise restricted the speed to 45 mph on the rural route, something Benavides Leon’s attorney disputes.

Federal probes into Tesla

The federal agency that has the most power over Tesla—and the entire automobile industry—is the National Highway Traffic Safety Administration, which is part of the Department of Transportation.

NHTSA sets automobile safety standards that must be met before vehicles can enter the marketplace. It also has a quasi-law enforcement arm, the Office of Defects Investigation, which has the power to launch probes into crashes and seek recalls for safety defects.

The agency has six pending investigations into Tesla’s self-driving technology, prompted by dozens of crashes that took place when the computerized systems were in use.

Other federal agencies are also investigating Musk and Tesla, and all of those probes could be sidelined by Musk-friendly officials:

—The Securities and Exchange Commission and Justice Department are separately investigating whether Musk and Tesla overstated the autonomous capabilities of their vehicles, creating dangerous situations in which drivers may over-rely on the car’s technology.

—The Justice Department is also probing whether Tesla misled customers about how far its electric vehicles can travel before needing a charge.

—The National Labor Relations Board is weighing 12 unfair labor practice allegations leveled by workers at Tesla plants.

—The Equal Employment Opportunity Commission is asking a federal judge to force Tesla to enact reforms and pay compensatory and punitive damages and backpay to Black employees who say they were subjected to racist attacks. In a federal lawsuit, the agency has alleged that supervisors and other employees at Tesla’s plant in Fremont, California, routinely hurled racist insults at Black employees.

Experts said most, if not all, of those investigations could be shut down, especially at the Justice Department where Trump has long shown a willingness to meddle in the department’s affairs. The Trump administration has already ordered the firing of dozens of prosecutors who handled the criminal cases from the Jan. 6, 2021 attack on the Capitol.

“DOJ is not going to be prosecuting Elon Musk,” said Peter Zeidenberg, a former Assistant U.S. Attorney in the Justice Department’s public integrity section who served during the Clinton and George H.W. Bush administrations. “I’d expect that any investigations that were ongoing will be ground to an abrupt end.”

Trump has also taken steps to gain control of the NLRB and EEOC. Last month, he fired Democratic members of the board and commission, breaking with decades of precedent. One member has sued, and two others are exploring legal options.

Tesla and Musk have denied wrongdoing in all those investigations and are fighting the probes.

The small safety agency in Musk’s crosshairs

The federal agency that appears to have enjoyed the most success in changing Tesla’s behavior is NHTSA, an organization of about 750 staffers that has forced the company to hand over crash data and cooperate in its investigations and requested recalls.

“NHTSA has been a thorn in Musk’s side for over the last decade, and he’s grappled with almost every three-letter agency in the Beltway,” said Ives, the Wall Street analyst who covers the technology sector and automobile industry. “That’s all created what looks to be a really big soap opera in 2025.”

Musk has repeatedly blamed the federal government for impeding Tesla’s progress and creating negative publicity with recalls of his cars after its self-driving technology malfunctions or crashes.

“The word ‘recall’ should be recalled,” Musk posted on Twitter (now X) in 2014. Two years ago, he posted, “The word ‘recall’ for an over-the-air software update is anachronistic and just flat wrong!”

Michael Brooks, executive director of the Center for Auto Safety, a non-profit consumer advocacy group, said some investigations might continue under Trump, but a recall is less likely to happen if a defect is found.

As with most car companies, Tesla’s recalls have so far been voluntary. The threat of public hearings about a defect that precedes an NHTSA-ordered recall has generally prompted car companies to act on their own.

That threat could be easily stripped away by the new NHTSA administrator, who will be a Trump appointee.

“If there isn’t a threat of recall, will Tesla do them?” Brooks said. “Unfortunately, this is where politics seeps in.”

NHTSA conducting several probes of Tesla

Among the active NHTSA investigations, several are examining fundamental aspects of Tesla’s partially automated driving systems that were in use when dozens of crashes occurred.

An investigation of Tesla’s “Full Self-Driving” system started in October after Tesla reported four crashes to NHTSA in which the vehicles had trouble navigating through sun glare, fog and airborne dust. In one of the accidents, an Arizona woman was killed after stopping on a freeway to help someone involved in another crash.

Under pressure from NHTSA, Tesla has twice recalled the “Full Self-Driving” feature for software updates. The technology—the most advanced of Tesla’s Autopilot systems—is supposed to allow drivers to travel from point to point with little human intervention. But repeated malfunctions led NHTSA to recently launch a new inquiry that includes a crash in July that killed a motorcyclist near Seattle.

NHTSA announced its latest investigation in January into “Actually Smart Summon,” a Tesla technology that allows drivers to remotely move a car, after the agency learned of four incidents from a driver and several media reports.

The agency said that in each collision, the vehicles were using the system that Tesla pushed out in a September software update that was “failing to detect posts or parked vehicles, resulting in a crash.” NHTSA also criticized Tesla for failing to notify the agency of those accidents.

NHTSA is also conducting a probe into whether a 2023 recall of Autopilot, the most basic of Tesla’s partially automated driver assistance systems, was effective.

That recall was supposed to boost the number of controls and alerts to keep drivers engaged; it had been prompted by an earlier NHTSA investigation that identified hundreds of crashes involving Autopilot that resulted in scores of injuries and more than a dozen deaths.

In a letter to Tesla in April, agency investigators noted that crashes involving Autopilot continue and that they could not observe a difference between warnings issued to drivers before or after the new software had been installed.

Critics have said that Teslas don’t have proper sensors to be fully self-driving. Nearly all other companies working on autonomous vehicles use radar and laser sensors in addition to cameras to see better in the dark or in poor visibility conditions. Tesla, on the other hand, relies only on cameras to spot hazards.

Musk has said that human drivers rely on their eyesight, so autonomous cars should be able to also get by with just cameras. He has called technology that relies on radar and light detection to discern objects a “fool’s errand.”

Bryant Walker Smith, a Stanford Law School scholar and a leading automated driving expert, said Musk’s contention that the federal government is holding him back is not accurate. The problem, Smith said, is that Tesla’s autonomous vehicles cannot perform as advertised.

“Blaming the federal government for holding them back, it provides a convenient, if dubious, scapegoat for the lack of an actual automated driving system that works,” Smith said.

Smith and other autonomous vehicle experts say Musk has felt pressure to provide Tesla shareholders with excuses for repeated delays in rolling out its futuristic cars. The financial stake is enormous, which Musk acknowledged during a 2022 interview. He said the development of a fully self-driving vehicle was “really the difference between Tesla being worth a lot of money and being worth basically zero.”

Collisions involving Tesla’s malfunctioning technology have led not only to deaths but also to catastrophic injuries that have forever altered people’s lives.

Attorneys representing people injured in Tesla crashes—or who represent surviving family members of those who died—say without NHTSA, the only other way to hold the car company accountable is through civil lawsuits.

“When government can’t do it, then the civil justice system is left to pick up the slack,” said Brett Schreiber, whose law firm is handling four Tesla cases.

However, Schreiber and other lawyers say that if the federal government’s investigative powers don’t remain intact, Tesla may not be held accountable in court either.

In the pending wrongful death lawsuit that Neima Benavides Leon filed against Tesla after her sister’s death, her attorney told a Miami district judge the lawsuit would have likely been dropped if NHTSA hadn’t investigated and found defects with the Autopilot system.

“All along we were hoping that the NHTSA investigation would produce what it did, in fact, end up producing, which is a finding of product defect and a recall,” attorney Doug Eaton said during a March court hearing. “And we had told you very early on in the case if NHTSA had not found that, we may very well drop the case. But they did, in fact, find this.”

Elon Musk Leads Group Seeking to Buy OpenAI. Sam Altman Says ‘No Thank You’

11 February 2025 at 02:00
The logo of 'OpenAI' is displayed on a mobile phone screen in front of a computer screen displaying the photographs of Elon Musk and Sam Altman in Ankara, Turkiye on March 14, 2024.

A group of investors led by Elon Musk is offering about $97.4 billion to buy the nonprofit behind OpenAI, escalating a dispute with the artificial intelligence company that Musk helped found a decade ago.


Musk and his own AI startup, xAI, and a consortium of investment firms want to take control of the ChatGPT maker and revert it to its original charitable mission as a nonprofit research lab, according to Musk’s attorney Marc Toberoff.

OpenAI CEO Sam Altman quickly rejected the unsolicited bid on Musk’s social platform X, saying, “no thank you but we will buy Twitter for $9.74 billion if you want.”

Musk bought Twitter, now called X, for $44 billion in 2022.

Musk and Altman, who together helped start OpenAI in 2015 and later competed over who should lead it, have been in a long-running feud over the startup’s direction since Musk resigned from its board in 2018.

Musk, an early OpenAI investor and board member, sued the company last year, first in a California state court and later in federal court, alleging it had betrayed its founding aims as a nonprofit research lab that would benefit the public good by safely building better-than-human AI. Musk had invested about $45 million in the startup from its founding until 2018, Toberoff has said.

The sudden success of ChatGPT two years ago brought worldwide fame and a new revenue stream to OpenAI and also heightened the internal battles over the future of the organization and the advanced AI it was trying to develop. Its nonprofit board fired Altman in late 2023. He came back days later with a new board.

Now a fast-growing business still controlled by a nonprofit board bound to its original mission, OpenAI last year announced plans to formally change its corporate structure. But such changes are complicated. Tax law requires money or assets donated to a tax-exempt organization to remain within the charitable sector.

If the initial organization becomes a for-profit, a conversion is generally needed in which the for-profit pays the fair market value of the assets to another charitable organization. Even if the nonprofit OpenAI continues to exist in some way, some experts argue it would have to be paid fair market value for any assets that get transferred to its for-profit subsidiaries.

Lawyers for OpenAI and Musk faced off in a California federal court last week as a judge weighed Musk’s request for a court order that would block the ChatGPT maker from converting itself to a for-profit company.

U.S. District Judge Yvonne Gonzalez Rogers hasn’t yet ruled on Musk’s request but in the courtroom said it was a “stretch” for Musk to claim he will be irreparably harmed if she doesn’t intervene to stop OpenAI from moving forward with its planned transition.

But the judge also raised concerns about OpenAI and its relationship with business partner Microsoft and said she wouldn’t stop the case from moving to trial as soon as next year so a jury can decide.

“It is plausible that what Mr. Musk is saying is true. We’ll find out. He’ll sit on the stand,” she said.

Along with Musk and xAI, others backing the bid announced Monday include Baron Capital Group, Valor Management, Atreides Management, Vy Fund, Emanuel Capital Management and Eight Partners VC.

Toberoff said in a statement that if Altman and OpenAI’s current board “are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time.”

Musk’s attorney also shared a letter he sent in early January to the attorneys general of California, where OpenAI operates, and Delaware, where it is incorporated.

Since both state offices must “ensure any such transactional process relating to OpenAI’s charitable assets provides at least fair market value to protect the public’s beneficial interest, we assume you will provide a process for competitive bidding to actually determine that fair market value,” Toberoff wrote, asking for more information on the terms and timing of that bidding process.

OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.

Refik Anadol Sees Artistic Possibilities in Data

10 February 2025 at 22:51

To Refik Anadol, data is a creative force.

“For as long as I can remember, I have imagined data as more than just information—I have seen it as a living, breathing material, a pigment with infinite possibilities,” the Turkish-American artist said on Monday during his acceptance speech at the TIME100 AI Impact Awards in Dubai.

Anadol was one of four leaders shaping the future of AI to be recognized at TIME’s fourth annual Impact Awards ceremony in the city. California Institute of Technology professor Anima Anandkumar, musician Grimes, and Arvind Krishna, the CEO, chairman, and president of IBM, also accepted awards as part of the night’s festivities, which featured a performance by Emirati soul singer Arqam Al Abri.


Anadol has spent over a decade showing the world that art can come from anywhere—even machines. As a media artist and the director and co-founder of Refik Anadol Studio, he has used AI to pioneer new forms of creativity, producing data paintings and data sculptures in tandem with the technology. 

“Over the past decade, my journey with AI has been a relentless pursuit of collaboration between humans and machines, between memory and imagination, between technology and nature,” he said in his speech. 

This year, Anadol and his team will open “Dataland,” the world’s first AI art museum, in Los Angeles—an achievement no doubt informed by years spent producing dozens of other works that have been shown across the world.

It’s all part of his plan to make art that challenges the limits of creativity. “Art, in my vision, has never been confined to a single culture, place, or audience,” Anadol said. “It belongs to everyone.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Anima Anandkumar Highlights AI’s Potential to Solve ‘Hard Scientific Challenges’

10 February 2025 at 22:39

Anima Anandkumar is using AI to help solve the world’s challenges faster. She has used the technology to speed up prediction models in an effort to get ahead of extreme weather, and to work on sustainable nuclear fusion simulations so as to one day safely harness the energy source.

Accepting a TIME100 AI Impact Award in Dubai on Monday, Anandkumar—a professor at California Institute of Technology who was previously the senior director of AI research at Nvidia—credited her engineer parents with setting an example for her. “Having a mom who is an engineer was just such a great role model right at home.” Her parents, who brought computerized manufacturing to her hometown in India, opened up her world, she said. 


“Growing up as a young girl, I didn’t think of computer programs as something that merely resided within a computer, but [as something] that touched the physical world and produced these beautiful and precise metal parts,” said Anandkumar. “As I pursued AI research over the last two decades, this memory continued to inspire me to connect the physical and digital worlds together.”


Neural operators—a type of AI framework that can learn across multiple scales—are key to Anandkumar’s efforts. Using neural operators, Anandkumar and her collaborators are able to build systems “with universal physical understanding that can simulate any physical process, generate novel engineering designs that were previously out of reach, and make new scientific discoveries,” she said. 

Speaking about her work in 2022 with an interdisciplinary team from Nvidia, Caltech, and other academic institutions, she noted, “I am proud of our work in weather forecasting where, using neural operators, we built the first AI-based high-resolution weather model called FourCastNet.” This model is tens of thousands of times faster than traditional weather models and often more accurate than existing systems when predicting extreme events, such as heat waves and hurricanes, she said.

“Neural operators are helping us get closer to solving hard scientific challenges,” she said. After outlining some of the technology’s other possible uses, including designing better drones, rockets, sustainable nuclear reactors, and medical devices, Anandkumar added, “To me, this is just the beginning.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Arvind Krishna Celebrates the Work of a Pioneer at the TIME100 AI Impact Awards

10 February 2025 at 22:33

Arvind Krishna, CEO, chairman and president of IBM, used his acceptance speech at the TIME100 AI Impact Awards on Monday to acknowledge pioneering computer scientist and mathematician Claude Shannon, calling him one of the “unsung heroes of today.”

Krishna, who accepted his award at a ceremony in Dubai alongside musician Grimes, California Institute of Technology professor Anima Anandkumar, and artist Refik Anadol, said of Shannon, “He would come up with the ways that you can convey information, all of which has stood the test until today.” 


In 1948, Shannon—now known as the father of the information age—published “A Mathematical Theory of Communication,” a transformative paper that, by proposing a simplified way of quantifying information via bits, would go on to fundamentally shape the development of information technology—and thus, our modern era. In his speech, Krishna also pointed to Shannon’s work building robotic mice that solved mazes as an example of his enjoyment of play within his research.


Krishna, of course, has some familiarity with what it takes to be at the cutting edge. Under his leadership, IBM, itself known as a pioneer in artificial intelligence, is carving out a niche in specialized AI and investing heavily in quantum computing research—the effort to build machines based on quantum principles that could carry out calculations much faster than existing computers. The business also runs a cloud computing service, designs software, and operates a consulting business.

Krishna said that he most enjoyed Shannon’s work because the researcher’s “simple insights” have helped contribute to the “most sophisticated communication systems” of today, including satellites. Speaking about Shannon’s theoretical work, which Krishna said was a precursor to neural networks, he noted, “I think we can give him credit for building the first elements of artificial intelligence.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Inside France’s Effort to Shape the Global AI Conversation

6 February 2025 at 15:20
French President's Special Envoy on AI, Anne Bouverot, prepares for the AI Action Summit at the Quai d'Orsay in Paris.

One evening early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron’s aides offering her the role as his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summit—a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI’s future. Set to take place on Feb. 10 and 11 at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May—and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.’s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time the co-chair of France’s AI Commission, accepted. 


But France’s Summit won’t be like the others. While the U.K.’s Summit centered on mitigating catastrophic risks—such as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human control—France has rebranded the event as the ‘AI Action Summit,’ shifting the conversation towards a wider gamut of risks—including the disruption of the labor market and the technology’s environmental impact—while also keeping the opportunities front and center. “We’re broadening the conversation, compared to Bletchley Park,” Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz and U.S. Vice President J.D. Vance.

Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology’s dangers. Others, among them some of the world’s foremost AI scientists—including some who helped develop the field’s fundamental technologies—worry that safety concerns are being sidelined. “The view within the community of people concerned about safety is that it’s been downgraded,” says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities.

“On the face of it, it looks like the downgrading of safety is an attempt to say, ‘we want to charge ahead, we’re not going to over-regulate. We’re not going to put any obligations on companies if they want to do business in France,’” Russell says.

France’s Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI’s benefits globally. But if the recent leaps in AI capabilities—and emerging signs of deceptive behavior—are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.


Bouverot is no stranger to the politics of emerging technology. In the early 2010s, she held the director general position at the Global System for Mobile Communications Association, an industry body that promotes interoperable standards among cellular providers globally. “In a nutshell, that role—which was really telecommunications—was also diplomacy,” she says. From there, she took the helm at Morpho (now IDEMIA), steering the French facial recognition and biometrics firm until its 2017 acquisition. She later co-founded the Fondation Abeona, a nonprofit that promotes “responsible AI.” Her work there led to her appointment as co-chair of France’s AI Commission, where she developed a strategy for how the nation could establish itself as a global leader in AI.

Bouverot’s growing involvement with AI was, in fact, a return to her roots. Long before her work in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the Ecole normale supérieure—a top French university that would later produce Arthur Mensch, the CEO of French AI frontrunner Mistral AI. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. “This is how much of a crystal ball I had,” she joked on Washington AI Network’s podcast in December, acknowledging the irony of her early skepticism, given AI’s impact today.

Under Bouverot’s leadership, safety will remain a feature, but rather than the summit’s sole focus, it is now one of five core themes. Others include: AI’s use for public good, the future of work, innovation and culture, and global governance. Sessions run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Paris’s agenda does not mention the possibility that an AI system could escape human control. “There’s no evidence of that risk today,” Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated public imagination. “There was a bit of a science fiction moment,” she says, adding that the global discourse has since shifted. 

Back in late 2023, as the U.K.’s summit approached, signs of a shift in the conversation around AI’s risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it “a waste of time” and a “doom-obsessed mess.” Researchers who had studied AI’s downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, who was present at Bletchley Park, says the focus on existential risk “was really problematic.”

“Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. “I think a lot of the existential harm rhetoric doesn’t translate to what policy makers can specifically do now,” she adds.

On the U.K. Summit’s opening day, then-U.S. Vice President Kamala Harris delivered a speech in London: “When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him?” she asked, in an effort to highlight the near-term risks of AI over the summit’s focus on the potential threat to humanity. Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. “We didn’t make that change in the global discourse,” Bouverot says, adding that the focus is now squarely on the technology’s tangible impacts. “We’re quite happy that this is actually the conversation that people are having now.”


One of the actions expected to emerge from France’s Summit is a new yet-to-be-named foundation that will aim to ensure AI’s benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind’s AI model that predicts protein structures with unprecedented precision—potentially accelerating research and drug discovery—as an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. “We need to enable more databases like this,” Bouverot says. Additionally, the foundation will focus on developing talent and smaller, less computationally intensive models, in regions outside the small group of countries that currently dominate AI’s development. The foundation will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.

Her second priority is creating an informal “Coalition for Sustainable AI.” AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AI’s environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. “Clearly AI is happening and being developed. We want it to be developed in a sustainable way,” Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.

Sasha Luccioni, AI & climate lead at Hugging Face and a leading voice on AI’s climate impact, says she is hopeful that the coalition will promote greater transparency. She says that calculating AI’s emissions is currently made more challenging because companies often do not share how long a model was trained for, while data center providers do not publish specifics on the energy usage of GPUs—the kind of computer chips used for running AI. “Nobody has all of the numbers,” she says, but the coalition may help put the pieces together.


Given AI’s recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGI—a system that surpasses humans in most regards—could potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quickly—if ever—we’ll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAI’s Altman wrote in a blog post: “We are now confident we know how to build AGI.” Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.

Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously said he puts the likelihood of AI causing a societal-scale catastrophe at 10% to 25%. In 2015, months before co-founding OpenAI, Altman said “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” More recently, Altman has downplayed AI’s risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the company’s direction. “Over the past years, safety culture and processes have taken a backseat to shiny products,” former OpenAI staffer Jan Leike posted to X in May, after resigning. He previously co-led a now-defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanity’s control.

Predictions of AGI’s imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with current state-of-the-art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently.

This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario where the AI was threatened with shutdown, cutting-edge models, including OpenAI’s recent o1 and a version of Anthropic’s Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caught—often doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research, showed a version of Anthropic’s model, Claude, strategically misleading its creators during training. “Exactly the things that people have been writing about for the last 10 years are happening now,” Russell says. “As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they’re doing it, and so on.”


Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, and often referred to as one of the three “Godfathers of AI” for his pioneering work in deep learning, says that while within the business community there is a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concerns within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled after U.N. climate assessments and backed by 30 countries, the U.N., E.U., and the OECD. Published last month, the report synthesizes scientific consensus on the capabilities and risks of frontier AI systems. “There’s very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction,” Bengio says. “A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss-of-control risk,” he adds.

At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to share priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess and manage risks stemming from their AI systems. “They did a lot to move the needle in the right direction,” Bengio says, but he adds that these measures are not close to sufficient. “In my personal opinion, the magnitude of the potential transformations that are likely to happen once we approach AGI are so radical,” Bengio says, “that my impression is most people, most governments, underestimate this a whole lot.”

But rather than pushing for new pledges, in Paris the focus will be streamlining existing ones—making them compatible with existing regulatory frameworks and each other. “There’s already quite a lot of commitments for AI companies,” Bouverot says. This light-touch stance mirrors France’s broader AI strategy, where homegrown company Mistral AI has emerged as Europe’s leading challenger in the field. Both Mistral and the French government lobbied for softer regulations under the E.U.’s comprehensive AI Act. France’s Summit will feature a business-focused event, hosted across town at Station F, France’s largest start-up hub. “To me, it looks a lot like they’re trying to use it to be a French industry fair,” says Andrea Miotti, the executive director of Control AI, a non-profit that advocates for guarding against existential risks from AI. “They’re taking a summit that was focused on safety and turning it away. In the rhetoric, it’s very much like: let’s stop talking about the risks and start talking about the great innovation that we can do.” 

The tension between safety and competitiveness is playing out elsewhere, including in India, which, it was announced last month, will co-chair France’s Summit. In March, India issued an advisory that pushed companies to obtain the government’s permission before deploying certain AI models, and to take steps to prevent harm. It then swiftly reversed course after receiving sharp criticism from industry. In California—home to many of the top AI developers—a landmark bill, which mandated that the largest AI developers implement safeguards to mitigate catastrophic risks, garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants including OpenAI, Meta, and Google. In late August, the bill passed both chambers of California’s legislature with strong majorities, but in September it was vetoed by Gov. Gavin Newsom, who argued the measures could stifle innovation. In January, President Donald Trump repealed former President Joe Biden’s sweeping executive order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an executive order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation” to secure U.S. leadership over the technology.

Markus Anderljung, director of policy and research at AI safety think-tank the Centre for the Governance of AI, says that safety could be woven into the France Summit’s broader goals. For instance, initiatives to distribute AI’s benefits globally might be linked to commitments from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded—particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the commitments to succeed, accountability mechanisms must also be strengthened. “Commitments without follow-ups might just be empty words,” he says. “They just don’t matter unless you know what was committed to actually gets done.”

A focus on AI’s extreme risks does not have to come at the expense of other important issues. “I know that the organizers of the French summit care a lot about [AI’s] positive impact on the global majority,” Bengio says. “That’s a very important mission that I embrace completely.” But he argues the potential severity of loss-of-control risks warrants invoking the precautionary principle—the idea that we should take preventive measures even absent scientific consensus. It’s a principle that has been invoked in U.N. declarations aimed at protecting the environment, and in sensitive scientific domains like human cloning.

But for Bouverot, it is a question of balancing competing demands. “We don’t want to solve everything—we can’t, nobody can,” she says, adding that the focus is on making AI more concrete. “We want to work from the level of scientific consensus, whatever level of consensus is reached.”


In mid-December, in France’s foreign ministry, Bouverot faced an unusual dilemma. Across the table, a South Korean official explained his country’s eagerness to join the summit. But days earlier, South Korea’s political leadership had been thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summit’s leaders’ session, declared martial law before being swiftly impeached, leaving the question of who would represent the country—and whether officials could attend at all—up in the air.

There is a great deal of uncertainty—not only over the pace at which AI will advance, but over the degree to which governments will be willing to engage. France’s own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, marking the first such collapse since the 1960s. And, as Trump, long skeptical of international institutions, returns to the Oval Office, it is yet to be seen how Vice President Vance will approach the Paris meeting.

When reflecting on the technology’s uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with powerful but nascent technology. “I have this quote from Marie Curie, which I really love,” Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: “Nothing in life is to be feared, it is only to be understood.” Curie’s work ultimately cost her life—she died at a relatively young 66 from a rare blood disorder, likely caused by prolonged radiation exposure.
