TikTok Returns to Apple and Google App Stores in the U.S. After Trump Delayed Ban

14 February 2025 at 07:30
Photo illustration of TikTok in app store and US flag

TikTok has returned to the app stores of Apple and Google in the U.S., after President Donald Trump delayed the enforcement of a TikTok ban.

TikTok, which is operated by Chinese technology firm ByteDance, was removed from Apple and Google’s app stores on Jan. 18 to comply with a law that requires ByteDance to divest the app or be banned in the U.S.

Read More: How Google Appears to Be Adapting Its Products to the Trump Presidency

The popular social media app, which has over 170 million American users, previously suspended its services in the U.S. for a day before restoring service following assurances from Trump that he would postpone banning the app. The TikTok service suspension briefly prompted thousands of users to migrate to RedNote, a Chinese social media app, while calling themselves “TikTok refugees.”

The TikTok app became available to download again in the U.S. Apple App Store and Google Play Store after nearly a month. On his first day in office, Trump signed an executive order delaying enforcement of the TikTok ban until April 5.

TikTok has long faced troubles in the U.S., with the U.S. government claiming that its Chinese ownership and access to the data of millions of Americans make it a national security risk.

TikTok has denied allegations that it has shared U.S. user data at the behest of the Chinese government, and argued that the law requiring it to be divested or banned violates the First Amendment rights of its American users.

Read More: Who Might Buy TikTok? From MrBeast to Elon Musk, Here Are the Top Contenders

During Trump’s first term in office, he supported banning TikTok but later changed his mind, claiming that he had a “warm spot” for the app. TikTok CEO Shou Chew was among the attendees at Trump’s inauguration ceremony.

Trump has suggested that TikTok could be jointly owned, with half of its ownership being American. Potential buyers include real estate mogul Frank McCourt, Shark Tank investor Kevin O’Leary and popular YouTuber Jimmy Donaldson, also known as MrBeast.

—Zen Soo reported from Hong Kong.

Elon Musk Calls for U.S. to ‘Delete Entire Agencies’ From the Federal Government

13 February 2025 at 07:30
Head of the Department of Government Efficiency and CEO of SpaceX, Tesla, and X Elon Musk makes a speech via video-conference during the World Government Summit 2025 in Dubai, United Arab Emirates, on Feb. 13, 2025.

DUBAI, United Arab Emirates — Elon Musk called on Thursday for the United States to “delete entire agencies” from the federal government as part of his push under President Donald Trump to radically cut spending and restructure its priorities.

Speaking via videocall to the World Governments Summit in Dubai, United Arab Emirates, Musk offered a wide-ranging survey of what he described as the priorities of the Trump administration, interspersed with multiple references to “thermonuclear warfare” and the possible dangers of artificial intelligence.

“We really have here rule of the bureaucracy as opposed to rule of the people—democracy,” Musk said, wearing a black T-shirt that read: “Tech Support.” He also joked that he was the “White House’s tech support,” borrowing from his profile on the social platform X, which he owns.

Read More: State Department Removes Tesla’s Name From Planned $400M Contract Amid Musk Scrutiny

“I think we do need to delete entire agencies as opposed to leave a lot of them behind,” Musk said. “If we don’t remove the roots of the weed, then it’s easy for the weed to grow back.”

While Musk has spoken to the summit in the past, his appearance on Thursday comes as he has consolidated control over large swaths of the government with Trump’s blessing since assuming leadership of the Department of Government Efficiency. That’s included sidelining career officials, gaining access to sensitive databases and inviting a constitutional clash over the limits of presidential authority.

Musk’s new role gave his comments a weight beyond that of the world’s wealthiest person, a status built on his stakes in SpaceX and electric carmaker Tesla.

His remarks also offered a more-isolationist view of American power in the Middle East, where the U.S. has fought wars in both Afghanistan and Iraq since the Sept. 11, 2001, terror attacks.

“A lot of attention has been on USAID for example,” Musk said, referring to Trump’s dismantling of the U.S. Agency for International Development. “There’s like the National Endowment for Democracy. But I’m like, ‘Okay, well, how much democracy have they achieved lately?’”

Read More: Inside the Chaos, Confusion, and Heartbreak of Trump’s Foreign-Aid Freeze

He added that the U.S. under Trump is “less interested in interfering with the affairs of other countries.”

There are “times the United States has been kind of pushy in international affairs, which may resonate with some members of the audience,” Musk said, speaking to the crowd in the UAE, an autocratically ruled nation of seven sheikhdoms.

“Basically, America should mind its own business, rather than push for regime change all over the place,” he said.

He also noted the Trump administration’s focus on eliminating diversity, equity and inclusion work, at one point linking it to AI.

“If hypothetically, AI is designed for DEI, you know, diversity at all costs, it could decide that there’s too many men in power and execute them,” Musk said.

Read More: What Is DEI and What Challenges Does It Face Amid Trump’s Executive Orders?

On AI, Musk said he believed X’s newly updated AI chatbot, Grok 3, would be ready in about two weeks, calling it at one point “kind of scary.”

He criticized Sam Altman’s management of OpenAI, for which Musk recently led a $97.4 billion takeover bid, describing the company as akin to a nonprofit aimed at saving the Amazon rainforest becoming a “lumber company that chops down the trees.” A court filing Wednesday on Musk’s behalf in the OpenAI dispute said he’d withdraw his bid if the ChatGPT maker drops its plan to convert into a for-profit company.

Musk also announced plans for a “Dubai Loop” project in line with his work with the Boring Company, which is digging tunnels in Las Vegas to speed transit.

A later statement from Dubai’s crown prince, Sheikh Hamdan bin Mohammed Al Maktoum, said the city-state and the Boring Company “will explore the development” of a 17-kilometer (10.5-mile) underground network with 11 stations that could transport over 20,000 passengers an hour. He offered no financial terms for the deal.

“It’s going to be like a wormhole,” Musk promised. “You just wormhole from one part of the city—boom—and you’re out in another part of the city.”

Digital Access Is Critical for Society, Say Industry Leaders

12 February 2025 at 22:53
World Governments Summit 2025

Improving connectivity can both benefit those who need it most and boost the businesses that provide the service. That’s the case telecom industry leaders made during a panel on Feb. 11 at the World Governments Summit in Dubai.

Titled “Can we innovate our way to a more connected world?”, the panel was hosted by TIME’s Editor-in-Chief Sam Jacobs. During the course of the conversation, Margherita Della Valle, CEO of U.K.-based multinational telecom company Vodafone Group, said, “For society today, connectivity is essential. We are moving from the old divide in the world between the haves and the have-nots towards a new divide, which is between those who have access to connectivity and those who don’t.”

The International Telecommunication Union, a United Nations agency, says that around 2.6 billion people—a third of the global population—don’t have access to the internet. Della Valle noted that of those unconnected people, 300 million live in remote areas that are too far from any form of connectivity infrastructure to get online. Satellites can help bridge the gap, said Della Valle, whose company plans to launch its commercial direct-to-smartphone satellite service in Europe later this year.

Read More: Column: How We Connected One Billion Lives Through Digital Technology

While digital access is a social issue, companies don’t need to choose between what is best for consumers and what’s best for business, Hatem Dowidar, group CEO of UAE-based telecom company e&, formerly known as Etisalat Group, said. “At the end of the day,” he said, “in our telecom part of the business, when we connect people, [they’re] customers for us, it makes revenue, and we can build on it.” He noted that part of e&’s evolution toward becoming a tech company has involved enabling customers to access fintech, cybersecurity, and cloud computing services.

Mickey Mikitani, CEO of Japanese technology conglomerate Rakuten Group, advocated for a radical transformation of the telecommunications industry, calling the existing telecoms business model “obsolete and old.” Removing barriers to entry to the telecom sector, like the cost of accessing wireless spectrum—the range of electromagnetic frequencies used to transmit wireless communications—may benefit customers and society more broadly, he said.

The panelists also discussed how artificial intelligence can improve connectivity, as well as the role of networks in supporting the technology’s use. Mikitani noted that his company has been using AI to help it manage networks efficiently with a fraction of the staff its competitors have. Della Valle added, “AI will need strong networks,” emphasizing that countries where networks have not received sufficient investment may struggle to support the technology.

Dowidar called on attendees at the summit from governments around the world to have a dialogue with industry leaders about legislation and regulations in order to overcome the current and potential challenges. Some of those hurdles include ensuring data sovereignty and security within borders, and enabling better training of AI in languages beyond English, he noted.

“It’s very important for everyone to understand the potential that can be unleashed by technology,” Dowidar said, emphasizing the need to train workforces. “AI is going to change the world.”

Safety Takes A Backseat At Paris AI Summit, As U.S. Pushes for Less Regulation

11 February 2025 at 21:35
Attendees at the AI Action Summit in Paris, France, on Monday, Feb. 10, 2025.

Safety concerns are out, optimism is in: that was the takeaway from a major artificial intelligence summit in Paris this week, as leaders from the U.S., France, and beyond threw their weight behind the AI industry. 

Although there were divisions between major nations—the U.S. and the U.K. did not sign a final statement endorsed by 60 nations calling for an “inclusive” and “open” AI sector—the focus of the two-day meeting was markedly different from the last such gathering. Last year, in Seoul, the emphasis was on defining red lines for the AI industry. The concern: that the technology, although holding great promise, also had the potential for great harm.

But that was then. The final statement made no mention of significant AI risks nor attempts to mitigate them, while in a speech on Tuesday, U.S. Vice President J.D. Vance said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.” 

The French leader and summit host, Emmanuel Macron, also trumpeted a decidedly pro-business message—underlining just how eager nations around the world are to gain an edge in the development of new AI systems. 

Once upon a time in Bletchley 

The emphasis on boosting the AI sector and putting aside safety concerns was a far cry from the first ever global summit on AI held at Bletchley Park in the U.K. in 2023. Called the “AI Safety Summit”—the French meeting in contrast was called the “AI Action Summit”—its express goal was to thrash out a way to mitigate the risks posed by developments in the technology. 

The second global gathering, in Seoul in 2024, built on this foundation, with leaders securing voluntary safety commitments from leading AI players such as OpenAI, Google, Meta, and their counterparts in China, South Korea, and the United Arab Emirates. The 2025 summit in Paris, governments and AI companies agreed at the time, would be the place to define red lines for AI: risk thresholds that would require mitigations at the international level.

Paris, however, went the other way. “I think this was a real belly-flop,” says Max Tegmark, an MIT professor and the president of the Future of Life Institute, a non-profit focused on mitigating AI risks. “It almost felt like they were trying to undo Bletchley.”

Anthropic, an AI company focused on safety, called the event a “missed opportunity.”

The U.K., which hosted the first AI summit, said it had declined to sign the Paris declaration because of a lack of substance. “We felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it,” said a spokesperson for Prime Minister Keir Starmer.

Racing for an edge

The shift comes against the backdrop of intensifying developments in AI. In the month or so before the 2025 Summit, OpenAI released an “agent” model that can perform research tasks at roughly the level of a competent graduate student. 

Safety researchers, meanwhile, showed for the first time that the latest generation of AI models can try to deceive their creators, and copy themselves, in an attempt to avoid modification. Many independent AI scientists now agree with the projections of the tech companies themselves: that super-human level AI may be developed within the next five years—with potentially catastrophic effects if unsolved questions in safety research aren’t addressed.

Yet such worries were pushed to the back burner as the U.S., in particular, made a forceful argument against moves to regulate the sector, with Vance saying that the Trump Administration “cannot and will not” accept foreign governments “tightening the screws on U.S. tech companies.” 

He also strongly criticized European regulations. The E.U. has the world’s most comprehensive AI law, called the AI Act, plus other laws such as the Digital Services Act, which Vance called out by name as overly restrictive in how it polices misinformation on social media.

The new Vice President, who has a broad base of support among venture capitalists, also made clear that his political support for big tech companies did not extend to regulations that would raise barriers for new startups, thus hindering the development of innovative AI technologies. 

“To restrict [AI’s] development now would not only unfairly benefit incumbents in the space, it would mean paralysing one of the most promising technologies we have seen in generations,” Vance said. “When a massive incumbent comes to us asking for safety regulations, we ought to ask whether that safety regulation is for the benefit of our people, or whether it’s for the benefit of the incumbent.” 

And in a clear sign that concerns about AI risks are out of favor in President Trump’s Washington, he associated AI safety with a popular Republican talking point: the restriction of “free speech” by social media platforms trying to tackle harms like misinformation.

With reporting by Tharin Pillay/Paris and Harry Booth/Paris

J.D. Vance Rails Against ‘Excessive’ AI Regulation at Paris Summit

Key Speakers at the AI Action Summit in Paris

PARIS — U.S. Vice President J.D. Vance on Tuesday warned global leaders and tech industry executives that “excessive regulation” could cripple the rapidly growing artificial intelligence industry in a rebuke to European efforts to curb AI’s risks.

The speech underscored a widening, three-way rift over the future of the technology—one that critics warn could either cement human progress for generations or set the stage for its downfall.

The United States, under President Donald Trump, champions a hands-off approach to fuel innovation, while Europe is tightening the reins with strict regulations to ensure safety and accountability. Meanwhile, China is rapidly expanding AI through state-backed tech giants, vying for dominance in the global race.

The U.S. was noticeably absent from an international document signed by more than 60 nations, including China, making the Trump administration an outlier in a global pledge to promote responsible AI development. The United Kingdom also declined to sign the pledge.

Read More: Inside France’s Effort to Shape the Global AI Conversation

Vance’s debut

At the summit, Vance made his first major policy speech since becoming vice president last month, framing AI as an economic turning point and declaring that “at this moment, we face the extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.”

“But it will never come to pass if overregulation deters innovators from taking the risks necessary to advance the ball,” Vance added.

The 40-year-old vice president, leveraging the AI summit and a security conference in Munich later this week, is seeking to project Trump’s forceful new style of diplomacy.

The Trump administration will “ensure that AI systems developed in America are free from ideological bias,” Vance said and pledged the U.S. would “never restrict our citizens’ right to free speech.”

A global AI pledge—and the U.S. absence

The international document, signed by scores of countries, including European nations, pledged to “promote AI accessibility to reduce digital divides” and “ensure AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.” It also called for “making AI sustainable for people and the planet” and protecting “human rights, gender equality, linguistic diversity, consumer rights, and intellectual property.”

In a surprise move, China—long criticized for its human rights record—signed the declaration, further widening the distance between America and the rest in the tussle for AI supremacy.

The UK also declined to sign despite agreeing with much of the declaration because it “didn’t provide enough practical clarity on global governance,” said Tom Wells, a spokesman for Prime Minister Keir Starmer.

“We didn’t feel it sufficiently addressed broader questions around national security and the challenge that AI poses to it,” Wells said.

He insisted: “This is not about the U.S. This is about our own national interest, ensuring the balance between opportunity and security.”

A growing divide

Vance also took aim at foreign governments for “tightening the screws” on U.S. tech firms, saying such moves were troubling. His remarks underscored the growing divide between Washington and its European allies on AI governance.

The agreement comes as the E.U. enforces its AI Act, the world’s first comprehensive AI law, which took effect in August 2024.

European Commission President Ursula von der Leyen stressed that “AI needs the confidence of the people and has to be safe” and detailed E.U. guidelines intended to standardize the bloc’s AI Act, but acknowledged concerns over regulatory burden.

“At the same time, I know that we have to make it easier and we have to cut red tape and we will,” she added.

She also announced that the “InvestAI” initiative had reached a total of €200 billion in AI investments across Europe, including €20 billion dedicated to AI gigafactories.

A race for AI dominance

The summit laid bare a global power struggle over AI—Europe wants strict rules and public funding, China is expanding state-backed AI, and the U.S. is going all-in on a free-market approach.

French President Emmanuel Macron pitched Europe as a “third way”—a middle ground that regulates AI without smothering innovation or relying too much on the U.S. or China.

“We want fair and open access to these innovations for the whole planet,” he said, calling for global AI rules. He also announced fresh investments across Europe to boost the region’s AI standing. “We’re in the race,” he declared.

China, meanwhile, is playing both sides: pushing for control at home while promoting open-source AI abroad.

Chinese Vice Premier Zhang Guoqing, speaking for President Xi Jinping, said Beijing wants to help set global AI rules. At the same time, Chinese officials slammed Western limits on AI access, and China’s DeepSeek chatbot has already triggered security concerns in the U.S. China argues open-source AI will benefit everyone, but critics see it as a way to spread Beijing’s influence.

With China and the U.S. in an AI arms race, Washington is also clashing with Europe.

Vance, a vocal critic of European tech rules, has floated the idea of the U.S. rethinking NATO commitments if Europe cracks down on Elon Musk’s social media platform, X. His Paris visit also included talks on Ukraine, AI’s growing role in global power shifts, and U.S.-China tensions.

How to regulate AI?

Concerns over AI’s potential dangers have loomed over the summit, particularly as nations grapple with how to regulate a technology that is increasingly entwined with defense and warfare.

“I think one day we will have to find ways to control AI or else we will lose control of everything,” said Admiral Pierre Vandier, NATO’s commander who oversees the alliance’s modernization efforts.

Beyond diplomatic tensions, a global public-private partnership is being launched called “Current AI,” aimed at supporting large-scale AI initiatives for the public good.

Analysts see this as an opportunity to counterbalance the dominance of private companies in AI development. However, it remains unclear whether the U.S. will support such efforts.

Separately, a high-stakes battle over AI power is escalating in the private sector.

A group of investors led by Musk—who now heads Trump’s Department of Government Efficiency—has made a $97.4 billion bid to acquire the nonprofit behind OpenAI. OpenAI CEO Sam Altman, attending the Paris summit, said it is “not for sale.”

Pressed on AI regulation, Altman also dismissed the need for further restrictions in Europe. But the head of San Francisco-based Anthropic, an OpenAI competitor, described the summit as a “missed opportunity” to more fully address the urgent global challenges posed by the technology.

“The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit,” said Anthropic CEO Dario Amodei in a written statement.

—AP writers Sylvie Corbet and Kelvin Chan in Paris contributed to this report.

How Google Appears to Be Adapting Its Products to the Trump Presidency

11 February 2025 at 09:00
Google Logo

Google was among the tech companies that donated $1 million to Donald Trump’s 2025 inauguration. It also, like many other companies, pulled back on its internal diversity hiring policies in response to the Trump Administration’s anti-DEI crackdown. And in early February, Google dropped its pledge not to use AI for weapons or surveillance, a move seen as paving the way for closer cooperation with Trump’s government.

Now, users of Google’s consumer products are noticing that a number of updates have been made—seemingly in response to the new administration—to everyday tools like Maps, Calendar, and Search.

Here’s what to know.

Google Maps renames Gulf of Mexico to Gulf of America

Among Trump’s first executive orders was a directive to rename the Gulf of Mexico the Gulf of America and to restore the name Mt. McKinley to Alaska’s Denali, the highest mountain peak in North America. Google announced on Jan. 27 that it would “quickly” update its maps accordingly, as soon as the federal Geographic Names Information System (GNIS) was updated. On Monday, Feb. 10, following changes around the same time by the Storm Prediction Center and Federal Aviation Administration, Google announced that, in line with its longstanding convention for naming disputed regions, U.S.-based users would now see “Gulf of America,” Mexican users would continue to see “Gulf of Mexico,” and users elsewhere would see “Gulf of Mexico (Gulf of America).”

As of Tuesday, Feb. 11, alternatives Apple Maps and OpenStreetMap still show “Gulf of Mexico.”
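
The labeling convention Google describes amounts to a simple region-based lookup. Below is a minimal illustrative sketch in Python; the function name and region codes are hypothetical assumptions added for clarity, not Google’s actual API or implementation.

# Hypothetical sketch of the region-based display rule described above.
# The function and region codes are illustrative; this is not Google's code.
def gulf_label(user_region: str) -> str:
    if user_region == "US":
        return "Gulf of America"
    if user_region == "MX":
        return "Gulf of Mexico"
    return "Gulf of Mexico (Gulf of America)"

print(gulf_label("US"))  # Gulf of America
print(gulf_label("MX"))  # Gulf of Mexico
print(gulf_label("FR"))  # Gulf of Mexico (Gulf of America)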

Google Calendar removes Pride, Black History Month, and other cultural holidays

Last week, some users noticed that Google removed certain default markers from its calendar, including Pride (June), Black History Month (February), Indigenous Peoples Month (November), and Hispanic Heritage Month (mid-September to mid-October). “Dear Google. Stop sucking up to Trump,” reads one comment on a Google Support forum about the changes.

A Google spokesperson confirmed the removal of some holidays and observances to The Verge but said that such changes began in 2024 because “maintaining hundreds of moments manually and consistently globally wasn’t scalable or sustainable,” explaining that Google Calendar now defers to public holidays and national observances globally listed on timeanddate.com. But not everyone is buying the explanation: “These are lies by Google in order to please the American dictator,” wrote a commenter on another Google Support forum about the changes.

Google Search blocks autocomplete suggestions for ‘impeach Trump’

Earlier this month, social media users also noticed that Google Search no longer suggests an autocomplete for “impeach Trump” when the beginning of the query is typed in the search box, Snopes reported. A Google spokesperson told the fact-checking site that the autocomplete suggestion was removed because the company’s “policies prohibit autocomplete predictions that could be interpreted as a position for or against a political figure. In this case, some predictions were appearing that shouldn’t have been, and we’re taking action to block them.” Google also recently removed predictions for “impeach Biden,” “impeach Clinton,” and others, the spokesperson added, though search results don’t appear to be altered.

How Elon Musk’s Anti-Government Crusade Could Benefit Tesla and His Other Businesses

The Inauguration Of Donald J. Trump As The 47th President

WASHINGTON — Elon Musk has long railed against the U.S. government, saying a crushing number of federal investigations and safety programs have stymied Tesla, his electric car company, and its efforts to create fleets of robotaxis and other self-driving automobiles.

Now, Musk’s close relationship with President Donald Trump means many of those federal headaches could vanish within weeks or months.

On the potential chopping block: crash investigations into Tesla’s partially automated vehicles; a Justice Department criminal probe examining whether Musk and Tesla have overstated their cars’ self-driving capabilities; and a government mandate to report crash data on vehicles using technology like Tesla’s Autopilot.

The consequences of such actions could prove dire, say safety advocates who credit the federal investigations and recalls with saving lives.

“Musk wants to run the Department of Transportation,” said Missy Cummings, a former senior safety adviser at the National Highway Traffic Safety Administration. “I’ve lost count of the number of investigations that are underway with Tesla. They will all be gone.”

Within days of Trump taking office, the White House and Musk began waging an unbridled war against the federal government—freezing spending and programs while sacking a host of career employees, including prosecutors and government watchdogs typically shielded from such brazen dismissals without cause.

The actions have sparked outcries from legal scholars who say the Trump administration’s actions are without modern-day precedent and are already upending the balance of power in Washington.

The Trump administration has not yet declared any actions that could benefit Tesla or Musk’s other companies. However, snuffing out federal investigations or jettisoning safety initiatives would be an easier task than the administration’s broader assault on regulators and the bureaucracy.

Investigations into companies like Tesla can be shut down overnight by the new leaders of agencies. And safety programs created through an agency order or initiative—not by laws passed by Congress or adopted through a formal regulatory process—can also be quickly dissolved by new leaders. Unlike many of the dismantling efforts that Trump and Musk have launched in recent weeks, stalling or killing such probes and programs would not be subject to legal challenges.

As such, the temporary and fragile nature of the federal probes and safety programs makes them easy targets for those seeking to weaken government oversight and upend long-established norms.

“Trump’s election, and the bromance between Trump and Musk, will essentially lead to the defanging of a regulatory environment that’s been stifling Tesla,” said Daniel Ives, a veteran Wall Street technology and automobile industry analyst.

Musk’s empire

Among Musk’s businesses, the federal government’s power over Tesla to investigate, order recalls, and mandate crash data reporting is perhaps the most wide-ranging. However, the ways the Trump administration could quickly ease up on Tesla also apply in some measure to other companies in Musk’s sprawling business empire.

A host of Musk’s other businesses—such as his aerospace company SpaceX and his social media company X—are subjects of federal investigations.

Musk’s businesses are also intertwined with the federal government, pocketing hundreds of millions of dollars each year in contracts. SpaceX, for example, has secured nearly $20 billion in federal funds since 2008 to ferry astronauts and satellites into space. Tesla, meanwhile, has received $41.9 million from the U.S. government, including payment for vehicles provided to some U.S. embassies.

Musk, Tesla’s billionaire CEO, has found himself in his newly influential position by enthusiastically backing Trump’s third bid for the White House. He was the largest donor to the campaign, plunging more than $270 million of his vast fortune into Trump’s political apparatus, most of it during the final months of the heated presidential race.

Those donations and his efforts during the campaign—including the transformation of his social media platform X into a firehose of pro-Trump commentary—have been rewarded by Trump, who has tapped the entrepreneur to oversee efforts to slash government regulations and spending.

Read More: Inside Elon Musk’s War on Washington

As the head of the Department of Government Efficiency, Musk operates out of an office in the Eisenhower Executive Office Building, where most White House staff work and from where he has launched his assault on the federal government. Musk’s power under DOGE is being challenged in the courts.

Even before Trump took office, there were signs that Musk’s vast influence with the new administration was registering with the public—and paying dividends for Tesla.

Tesla’s stock surged more than 60% by December. Since then, its stock price has dropped, but still remains 40% higher than it was before Trump’s election.

“For Musk,” said Ives, the technology analyst, “betting on Trump is a poker move for the ages.”

Proposed actions will help Tesla

The White House did not respond to questions about how it would handle investigations and government oversight involving Tesla or other Musk companies. A spokesman for the transition team said last month that the White House would ensure that DOGE and “those involved with it are compliant with all legal guidelines and conflicts of interest.”

In the weeks before Trump took office on Jan. 20, the president-elect’s transition team recommended changes that would benefit the billionaire and his car company, including scrapping the federal order requiring carmakers to report crash data involving self-driving and partially automated technology.

The action would be a boon for Tesla, which has reported a vast majority of the crashes that triggered a series of investigations and recalls.

The transition team also recommended shelving a $7,500 consumer tax credit for electric vehicle purchases, something Musk has publicly called for.

“Take away the subsidies. It will only help Tesla,” Musk wrote in a post on X as he campaigned and raised money for Trump in July.

Auto industry experts say the move would have a nominal impact on Tesla—by far the largest electric vehicle maker in the U.S.—but have a potentially devastating impact on its competitors in the EV sector since they are still struggling to secure a foothold in the market.

Musk did not respond to requests for comment. Before the election, he posted a message on X, saying he had never asked Trump “for any favors, nor has he offered me any.”

Although most of the changes that Musk might seek for Tesla could unfold quickly, there is one long-term goal that could impact the autonomous vehicle industry for decades to come.

Though nearly 30 states have rules that specifically govern self-driving cars, the federal government has yet to craft such regulations.

During a late October call with Tesla investors, as Musk was pouring hundreds of millions of dollars into Trump’s campaign, he signaled support for having the federal government create these rules.

“There should be a federal approval process for autonomous vehicles,” Musk said on the call. “If there’s a department of government efficiency, I’ll try to help make that happen.”

Musk leads that very organization.

Those affected by Tesla crashes worry about lax oversight

People whose lives have been forever changed by Tesla crashes fear that dangerous and fatal accidents may increase if the federal government’s investigative and recall powers are restricted.

They say they worry that the company may otherwise never be held accountable for its failures, like the one that took the life of 22-year-old Naibel Benavides Leon.

The college student was on a date with her boyfriend, gazing at the stars on the side of a rural Florida road, when they were struck by an out-of-control Tesla driving on Autopilot—a system that allows Tesla cars to operate without driver input. The car had blown through a stop sign, a flashing light and five yellow warning signs, according to dashcam video and a police report.

Benavides Leon died at the scene; her boyfriend, Dillon Angulo, suffered injuries but survived. A federal investigation determined that Autopilot in Teslas at this time was faulty and needed repairs.

“We, as a family, have never been the same,” said Benavides Leon’s sister, Neima. “I’m an engineer, and everything that we design and we build has to be by important codes and regulations. This technology cannot be an exception.”

“It has to be investigated when it fails,” she added. “Because it does fail.”

Tesla’s lawyers did not respond to requests for comment. In a statement on Twitter in December 2023, Tesla pointed to an earlier lawsuit the Benavides Leon family had brought against the driver who struck the college student. He testified that despite using Autopilot, “I was highly aware that it was still my responsibility to operate the vehicle safely.”

Tesla also said the driver “was pressing the accelerator to maintain 60 mph,” an action that effectively overrode Autopilot, which would have otherwise restricted the speed to 45 mph on the rural route, something Benavides Leon’s attorney disputes.

Federal probes into Tesla

The federal agency that has the most power over Tesla—and the entire automobile industry—is the National Highway Traffic Safety Administration, which is part of the Department of Transportation.

NHTSA sets automobile safety standards that must be met before vehicles can enter the marketplace. It also has a quasi-law enforcement arm, the Office of Defects Investigation, which has the power to launch probes into crashes and seek recalls for safety defects.

The agency has six pending investigations into Tesla’s self-driving technology, prompted by dozens of crashes that took place when the computerized systems were in use.

Other federal agencies are also investigating Musk and Tesla, and all of those probes could be sidelined by Musk-friendly officials:

—The Securities and Exchange Commission and Justice Department are separately investigating whether Musk and Tesla overstated the autonomous capabilities of their vehicles, creating dangerous situations in which drivers may over-rely on the car’s technology.

—The Justice Department is also probing whether Tesla misled customers about how far its electric vehicles can travel before needing a charge.

—The National Labor Relations Board is weighing 12 unfair labor practice allegations leveled by workers at Tesla plants.

—The Equal Employment Opportunity Commission is asking a federal judge to force Tesla to enact reforms and pay compensatory and punitive damages and backpay to Black employees who say they were subjected to racist attacks. In a federal lawsuit, the agency has alleged that supervisors and other employees at Tesla’s plant in Fremont, California, routinely hurled racist insults at Black employees.

Experts said most, if not all, of those investigations could be shut down, especially at the Justice Department where Trump has long shown a willingness to meddle in the department’s affairs. The Trump administration has already ordered the firing of dozens of prosecutors who handled the criminal cases from the Jan. 6, 2021 attack on the Capitol.

“DOJ is not going to be prosecuting Elon Musk,” said Peter Zeidenberg, a former Assistant U.S. Attorney in the Justice Department’s public integrity section who served during the Clinton and George H.W. Bush administrations. “I’d expect that any investigations that were ongoing will be ground to an abrupt end.”

Trump has also taken steps to gain control of the NLRB and EEOC. Last month, he fired Democratic members of the board and commission, breaking with decades of precedent. One member has sued, and two others are exploring legal options.

Tesla and Musk have denied wrongdoing in all those investigations and are fighting the probes.

The small safety agency in Musk’s crosshairs

The federal agency that appears to have enjoyed the most success in changing Tesla’s behavior is NHTSA, an organization of about 750 staffers that has forced the company to hand over crash data and cooperate in its investigations and requested recalls.

“NHTSA has been a thorn in Musk’s side for over the last decade, and he’s grappled with almost every three-letter agency in the Beltway,” said Ives, the Wall Street analyst who covers the technology sector and automobile industry. “That’s all created what looks to be a really big soap opera in 2025.”

Musk has repeatedly blamed the federal government for impeding Tesla’s progress and creating negative publicity with recalls of his cars after its self-driving technology malfunctions or crashes.

“The word ‘recall’ should be recalled,” Musk posted on Twitter (now X) in 2014. Two years ago, he posted, “The word ‘recall’ for an over-the-air software update is anachronistic and just flat wrong!”

Michael Brooks, executive director of the Center for Auto Safety, a non-profit consumer advocacy group, said some investigations might continue under Trump, but a recall is less likely to happen if a defect is found.

As with most car companies, Tesla’s recalls have so far been voluntary. The threat of public hearings about a defect that precedes a NHTSA-ordered recall has generally prompted car companies to act on their own.

That threat could be easily stripped away by the new NHTSA administrator, who will be a Trump appointee.

“If there isn’t a threat of recall, will Tesla do them?” Brooks said. “Unfortunately, this is where politics seeps in.”

NHTSA conducting several probes of Tesla

Among the active NHTSA investigations, several are examining fundamental aspects of Tesla’s partially automated driving systems that were in use when dozens of crashes occurred.

An investigation of Tesla’s “Full Self-Driving” system started in October after Tesla reported four crashes to NHTSA in which the vehicles had trouble navigating through sun glare, fog and airborne dust. In one of the accidents, an Arizona woman was killed after stopping on a freeway to help someone involved in another crash.

Under pressure from NHTSA, Tesla has twice recalled the “Full Self-Driving” feature for software updates. The technology—the most advanced of Tesla’s Autopilot systems—is supposed to allow drivers to travel from point to point with little human intervention. But repeated malfunctions led NHTSA to recently launch a new inquiry that includes a crash in July that killed a motorcyclist near Seattle.

NHTSA announced its latest investigation in January into “Actually Smart Summon,” a Tesla technology that allows drivers to remotely move a car, after the agency learned of four incidents from a driver and several media reports.

The agency said that in each collision, the vehicles were using the system that Tesla pushed out in a September software update that was “failing to detect posts or parked vehicles, resulting in a crash.” NHTSA also criticized Tesla for failing to notify the agency of those accidents.

NHTSA is also conducting a probe into whether a 2023 recall of Autopilot, the most basic of Tesla’s partially automated driver assistance systems, was effective.

That recall was supposed to boost the number of controls and alerts to keep drivers engaged; it had been prompted by an earlier NHTSA investigation that identified hundreds of crashes involving Autopilot that resulted in scores of injuries and more than a dozen deaths.

In a letter to Tesla in April, agency investigators noted that crashes involving Autopilot continue and that they could not observe a difference between warnings issued to drivers before or after the new software had been installed.

Critics have said that Teslas don’t have proper sensors to be fully self-driving. Nearly all other companies working on autonomous vehicles use radar and laser sensors in addition to cameras to see better in the dark or in poor visibility conditions. Tesla, on the other hand, relies only on cameras to spot hazards.

Musk has said that human drivers rely on their eyesight, so autonomous cars should be able to also get by with just cameras. He has called technology that relies on radar and light detection to discern objects a “fool’s errand.”

Bryant Walker Smith, a Stanford Law School scholar and a leading automated driving expert, said Musk’s contention that the federal government is holding him back is not accurate. The problem, Smith said, is that Tesla’s autonomous vehicles cannot perform as advertised.

“Blaming the federal government for holding them back, it provides a convenient, if dubious, scapegoat for the lack of an actual automated driving system that works,” Smith said.

Smith and other autonomous vehicle experts say Musk has felt pressure to provide Tesla shareholders with excuses for repeated delays in rolling out its futuristic cars. The financial stake is enormous, which Musk acknowledged during a 2022 interview. He said the development of a fully self-driving vehicle was “really the difference between Tesla being worth a lot of money and being worth basically zero.”

The collisions from Tesla’s malfunctioning technology on its vehicles have led not only to deaths but also catastrophic injuries that have forever altered people’s lives.

Attorneys representing people injured in Tesla crashes—or who represent surviving family members of those who died—say without NHTSA, the only other way to hold the car company accountable is through civil lawsuits.

“When government can’t do it, then the civil justice system is left to pick up the slack,” said Brett Schreiber, whose law firm is handling four Tesla cases.

However, Schreiber and other lawyers say if the federal government’s investigative powers don’t remain intact, Tesla may also not be held accountable in court.

In the pending wrongful death lawsuit that Neima Benavides Leon filed against Tesla after her sister’s death, her attorney told a Miami district judge the lawsuit would have likely been dropped if NHTSA hadn’t investigated and found defects with the Autopilot system.

“All along we were hoping that the NHTSA investigation would produce what it did, in fact, end up producing, which is a finding of product defect and a recall,” attorney Doug Eaton said during a March court hearing. “And we had told you very early on in the case if NHTSA had not found that, we may very well drop the case. But they did, in fact, find this.”

Elon Musk Leads Group Seeking to Buy OpenAI. Sam Altman Says ‘No Thank You’

11 February 2025 at 02:00
The logo of 'OpenAI' is displayed on a mobile phone screen in front of a computer screen displaying the photographs of Elon Musk and Sam Altman in Ankara, Turkiye on March 14, 2024.

A group of investors led by Elon Musk is offering about $97.4 billion to buy the nonprofit behind OpenAI, escalating a dispute with the artificial intelligence company that Musk helped found a decade ago.

Musk and his own AI startup, xAI, and a consortium of investment firms want to take control of the ChatGPT maker and revert it to its original charitable mission as a nonprofit research lab, according to Musk’s attorney Marc Toberoff.

OpenAI CEO Sam Altman quickly rejected the unsolicited bid on Musk’s social platform X, saying, “no thank you but we will buy Twitter for $9.74 billion if you want.”

Musk bought Twitter, now called X, for $44 billion in 2022.

Musk and Altman, who together helped start OpenAI in 2015 and later competed over who should lead it, have been in a long-running feud over the startup’s direction since Musk resigned from its board in 2018.

Musk, an early OpenAI investor and board member, sued the company last year, first in a California state court and later in federal court, alleging it had betrayed its founding aims as a nonprofit research lab that would benefit the public good by safely building better-than-human AI. Musk had invested about $45 million in the startup from its founding until 2018, Toberoff has said.

The sudden success of ChatGPT two years ago brought worldwide fame and a new revenue stream to OpenAI and also heightened the internal battles over the future of the organization and the advanced AI it was trying to develop. Its nonprofit board fired Altman in late 2023. He came back days later with a new board.

Now a fast-growing business still controlled by a nonprofit board bound to its original mission, OpenAI last year announced plans to formally change its corporate structure. But such changes are complicated. Tax law requires money or assets donated to a tax-exempt organization to remain within the charitable sector.

If the initial organization becomes a for-profit, generally, a conversion is needed where the for-profit pays the fair market value of the assets to another charitable organization. Even if the nonprofit OpenAI continues to exist in some way, some experts argue it would have to be paid fair market value for any assets that get transferred to its for-profit subsidiaries.

Lawyers for OpenAI and Musk faced off in a California federal court last week as a judge weighed Musk’s request for a court order that would block the ChatGPT maker from converting itself to a for-profit company.

U.S. District Judge Yvonne Gonzalez Rogers hasn’t yet ruled on Musk’s request but in the courtroom said it was a “stretch” for Musk to claim he will be irreparably harmed if she doesn’t intervene to stop OpenAI from moving forward with its planned transition.

But the judge also raised concerns about OpenAI and its relationship with business partner Microsoft and said she wouldn’t stop the case from moving to trial as soon as next year so a jury can decide.

“It is plausible that what Mr. Musk is saying is true. We’ll find out. He’ll sit on the stand,” she said.

Along with Musk and xAI, others backing the bid announced Monday include Baron Capital Group, Valor Management, Atreides Management, Vy Fund, Emanuel Capital Management and Eight Partners VC.

Toberoff said in a statement that if Altman and OpenAI’s current board “are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time.”

Musk’s attorney also shared a letter he sent in early January to the attorneys general of California, where OpenAI operates, and Delaware, where it is incorporated.

Since both state offices must “ensure any such transactional process relating to OpenAI’s charitable assets provides at least fair market value to protect the public’s beneficial interest, we assume you will provide a process for competitive bidding to actually determine that fair market value,” Toberoff wrote, asking for more information on the terms and timing of that bidding process.

OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.

Refik Anadol Sees Artistic Possibilities in Data

10 February 2025 at 22:51

To Refik Anadol, data is a creative force.

“For as long as I can remember, I have imagined data as more than just information—I have seen it as a living, breathing material, a pigment with infinite possibilities,” the Turkish-American artist said on Monday during his acceptance speech at the TIME100 AI Impact Awards in Dubai.

Anadol was one of four leaders shaping the future of AI to be recognized at TIME’s fourth-annual Impact Awards ceremony in the city. California Institute of Technology professor Anima Anandkumar, musician Grimes, and Arvind Krishna, the CEO, chairman, and president of IBM, also accepted awards as a part of the night’s festivities, which featured a performance by Emirati soul singer Arqam Al Abri.

Anadol has spent over a decade showing the world that art can come from anywhere—even machines. As a media artist and the director and co-founder of Refik Anadol Studio, he has used AI to pioneer new forms of creativity, producing data paintings and data sculptures in tandem with the technology. 

“Over the past decade, my journey with AI has been a relentless pursuit of collaboration between humans and machines, between memory and imagination, between technology and nature,” he said in his speech. 

This year, Anadol and his team will open “Dataland,” the world’s first AI art museum, in Los Angeles—an achievement no doubt informed by years spent producing dozens of other works that have been shown across the world.

It’s all part of his plan to make art that challenges the limits of creativity. “Art, in my vision, has never been confined to a single culture, place, or audience,” Anadol said. “It belongs to everyone.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Anima Anandkumar Highlights AI’s Potential to Solve ‘Hard Scientific Challenges’

10 February 2025 at 22:39

Anima Anandkumar is using AI to help solve the world’s challenges faster. She has used the technology to speed up prediction models in an effort to get ahead of extreme weather, and to work on sustainable nuclear fusion simulations so as to one day safely harness the energy source.

Accepting a TIME100 AI Impact Award in Dubai on Monday, Anandkumar—a professor at California Institute of Technology who was previously the senior director of AI research at Nvidia—credited her engineer parents with setting an example for her. “Having a mom who is an engineer was just such a great role model right at home.” Her parents, who brought computerized manufacturing to her hometown in India, opened up her world, she said. 

“Growing up as a young girl, I didn’t think of computer programs as something that merely resided within a computer, but [as something] that touched the physical world and produced these beautiful and precise metal parts,” said Anandkumar. “As I pursued AI research over the last two decades, this memory continued to inspire me to connect the physical and digital worlds together.”

Neural operators—a type of AI framework that can learn across multiple scales—are key to Anandkumar’s efforts. Using neural operators, Anandkumar and her collaborators are able to build systems “with universal physical understanding that can simulate any physical process, generate novel engineering designs that were previously out of reach, and make new scientific discoveries,” she said. 

Speaking about her work in 2022 with an interdisciplinary team from Nvidia, Caltech, and other academic institutions, she noted, “I am proud of our work in weather forecasting where, using neural operators, we built the first AI-based high-resolution weather model called FourCastNet.” This model is tens of thousands of times faster than traditional weather models and often more accurate than existing systems when predicting extreme events, such as heat waves and hurricanes, she said.
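
For readers curious what “neural operators” look like in practice, here is a minimal sketch of the spectral-convolution idea behind Fourier-style neural operators, written with NumPy for readability. The mode count and random weights are placeholder assumptions; this is a toy illustration under those assumptions, not FourCastNet’s actual code.

import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Apply a learned filter to the lowest n_modes Fourier modes of a 1D signal u."""
    u_hat = np.fft.rfft(u)                         # transform to frequency space
    out_hat = np.zeros_like(u_hat)                 # complex array, same length
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # learned mixing of the low modes
    return np.fft.irfft(out_hat, n=len(u))         # back to physical space

# Toy usage: the same filter applies unchanged at different grid resolutions,
# which is the "learning across multiple scales" property mentioned above.
rng = np.random.default_rng(0)
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # placeholder "learned" weights
coarse = np.sin(np.linspace(0.0, 2 * np.pi, 128, endpoint=False))
fine = np.sin(np.linspace(0.0, 2 * np.pi, 512, endpoint=False))
print(spectral_conv_1d(coarse, weights, 16).shape)  # (128,)
print(spectral_conv_1d(fine, weights, 16).shape)    # (512,)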

“Neural operators are helping us get closer to solving hard scientific challenges,” she said. After outlining some of the technology’s other possible uses, including designing better drones, rockets, sustainable nuclear reactors, and medical devices, Anandkumar added, “To me, this is just the beginning.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Arvind Krishna Celebrates the Work of a Pioneer at the TIME100 AI Impact Awards

10 February 2025 at 22:33

Arvind Krishna, CEO, chairman and president of IBM, used his acceptance speech at the TIME100 AI Impact Awards on Monday to acknowledge pioneering computer scientist and mathematician Claude Shannon, calling him one of the “unsung heroes of today.”

Krishna, who accepted his award at a ceremony in Dubai alongside musician Grimes, California Institute of Technology professor Anima Anandkumar, and artist Refik Anadol, said of Shannon, “He would come up with the ways that you can convey information, all of which has stood the test until today.” 

In 1948, Shannon—now known as the father of the information age—published “A Mathematical Theory of Communication,” a transformative paper that, by proposing a simplified way of quantifying information via bits, would go on to fundamentally shape the development of information technology—and thus, our modern era. In his speech, Krishna also pointed to Shannon’s work building robotic mice that solved mazes as an example of his enjoyment of play within his research.
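
As a quick, self-contained illustration of the paper’s central quantity (a standard textbook formula, not anything specific to Krishna’s remarks), Shannon entropy gives the average number of bits needed to encode each symbol from a source:

import math

def shannon_entropy(probs):
    """H(p) = -sum(p_i * log2(p_i)), measured in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin flip carries one full bit
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a predictable source carries less information
print(shannon_entropy([0.25] * 4))  # 2.0 bits: one of four equally likely symbols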

Krishna, of course, has some familiarity with what it takes to be at the cutting edge. Under his leadership, IBM, known as a pioneer in artificial intelligence itself, is carving its own niche in specialized AI and invests heavily in quantum computing research—the mission to build a machine based on quantum principles, which could carry out calculations much faster than existing computers. The business also runs a cloud computing service, designs software, and operates a consulting business.

Krishna said that he most enjoyed Shannon’s work because the researcher’s “simple insights” have helped contribute to the “most sophisticated communication systems” of today, including satellites. Speaking about Shannon’s theoretical work, which Krishna said was a precursor to neural networks, he noted, “I think we can give him credit for building the first elements of artificial intelligence.”

The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.

Inside France’s Effort to Shape the Global AI Conversation

6 February 2025 at 15:20
French President's Special Envoy on AI, Anne Bouverot, prepares for the AI Action Summit at the Quai d'Orsay in Paris.

One evening early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron’s aides offering her the role as his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summit—a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI’s future. Set to take place on Feb. 10 and 11 at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May—and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.’s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time the co-chair of France’s AI Commission, accepted. 

But France’s Summit won’t be like the others. While the U.K.’s Summit centered on mitigating catastrophic risks—such as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human control—France has rebranded the event as the ‘AI Action Summit,’ shifting the conversation towards a wider gamut of risks—including the disruption of the labor market and the technology’s environmental impact—while also keeping the opportunities front and center. “We’re broadening the conversation, compared to Bletchley Park,” Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz and U.S. Vice President J.D. Vance.

Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology’s dangers. Others, including some of the world’s foremost AI scientists—including some who helped develop the field’s fundamental technologies—worry that safety concerns are being sidelined. “The view within the community of people concerned about safety is that it’s been downgraded,” says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities.

“On the face of it, it looks like the downgrading of safety is an attempt to say, ‘we want to charge ahead, we’re not going to over-regulate. We’re not going to put any obligations on companies if they want to do business in France,’” Russell says.

France’s Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI’s benefits globally. But if the recent leaps in AI capabilities—and emerging signs of deceptive behavior—are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.


Bouverot is no stranger to the politics of emerging technology. In the early 2010s, she held the director general position at the Global System for Mobile Communications Association, an industry body that promotes interoperable standards among cellular providers globally. “In a nutshell, that role—which was really telecommunications—was also diplomacy,” she says. From there, she took the helm at Morpho (now IDEMIA), steering the French facial recognition and biometrics firm until its 2017 acquisition. She later co-founded the Fondation Abeona, a nonprofit that promotes “responsible AI.” Her work there led to her appointment as co-chair of France’s AI Commission, where she developed a strategy for how the nation could establish itself as a global leader in AI.

Bouverot’s growing involvement with AI was, in fact, a return to her roots. Long before her involvement in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the Ecole normale supérieure—a top French university that would later produce French AI frontrunner Mistral AI CEO Arthur Mensch. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. “This is how much of a crystal ball I had,” she joked on Washington AI Network’s podcast in December, acknowledging the irony of her early skepticism, given AI’s impact today. 

Under Bouverot’s leadership, safety will remain a feature, but rather than the summit’s sole focus, it is now one of five core themes. Others include: AI’s use for public good, the future of work, innovation and culture, and global governance. Sessions run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Paris’s agenda does not mention the possibility that an AI system could escape human control. “There’s no evidence of that risk today,” Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated public imagination. “There was a bit of a science fiction moment,” she says, adding that the global discourse has since shifted. 

Back in late 2023, as the U.K.’s summit approached, signs of a shift in the conversation around AI’s risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it “a waste of time” and a “doom-obsessed mess.” Researchers who had studied AI’s downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, who was present at Bletchley Park, says the focus on existential risk “was really problematic.”

“Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. “I think a lot of the existential harm rhetoric doesn’t translate to what policy makers can specifically do now,” she adds.

On the U.K. Summit’s opening day, then-U.S. Vice President Kamala Harris delivered a speech in London: “When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him?” she asked, in an effort to highlight the near-term risks of AI over the summit’s focus on the potential threat to humanity. Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. “We didn’t make that change in the global discourse,” Bouverot says, adding that the focus is now squarely on the technology’s tangible impacts. “We’re quite happy that this is actually the conversation that people are having now.”


One of the actions expected to emerge from France’s Summit is a new yet-to-be-named foundation that will aim to ensure AI’s benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind’s AI model that predicts protein structures with unprecedented precision—potentially accelerating research and drug discovery—as an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. “We need to enable more databases like this,” Bouverot says. Additionally, the foundation will focus on developing talent and smaller, less computationally intensive models, in regions outside the small group of countries that currently dominate AI’s development. The foundation will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.

Her second priority is creating an informal “Coalition for Sustainable AI.” AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AI’s environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. “Clearly AI is happening and being developed. We want it to be developed in a sustainable way,” Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.

Sasha Luccioni, AI & climate lead at Hugging Face, and a leading voice on AI’s climate impact, says she is hopeful that the coalition will promote greater transparency. She says that calculating AI’s emissions is currently made more challenging because companies often do not share how long a model was trained for, while data center providers do not publish specifics on the energy usage of GPUs, the computer chips used to run AI. “Nobody has all of the numbers,” she says, but the coalition may help put the pieces together.


Given AI’s recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGI—a system that surpasses humans in most regards—could potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quickly—if ever—we’ll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAI’s Altman wrote in a blog post: “We are now confident we know how to build AGI.” Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.

Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously said he places a 10% to 25% likelihood that AI causes a societal-scale catastrophe. In 2015, months before co-founding OpenAI, Altman said “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” More recently, Altman has downplayed AI’s risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the company’s direction. “Over the past years, safety culture and processes have taken a backseat to shiny products,” former OpenAI staffer Jan Leike posted to X in May, after resigning. He previously co-led a now defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanity’s control.

Predictions of AGI’s imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with current state-of-the-art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently.

This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario where the AI was threatened with shutdown, cutting-edge models, including OpenAI’s recent o1 and a version of Anthropic’s Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caught—often doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research, showed a version of Anthropic’s model, Claude, strategically misleading its creators during training. “Exactly the things that people have been writing about for the last 10 years are happening now,” Russell says. “As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they’re doing it, and so on.”


Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, and often referred to as one of the three “Godfathers of AI” for his pioneering work in deep learning, says that while within the business community there is a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concerns within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled after U.N. climate assessments and backed by 30 countries, the U.N., E.U., and the OECD. Published last month, the report synthesizes scientific consensus on the capabilities and risks of frontier AI systems. “There’s very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction,” Bengio says. “A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss-of-control risk,” he adds.

At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to share priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess and manage risks stemming from their AI systems. “They did a lot to move the needle in the right direction,” Bengio says, but he adds that these measures are not close to sufficient. “In my personal opinion, the magnitude of the potential transformations that are likely to happen once we approach AGI are so radical,” Bengio says, “that my impression is most people, most governments, underestimate this a whole lot.”

But rather than pushing for new pledges, in Paris the focus will be streamlining existing ones—making them compatible with existing regulatory frameworks and each other. “There’s already quite a lot of commitments for AI companies,” Bouverot says. This light-touch stance mirrors France’s broader AI strategy, where homegrown company Mistral AI has emerged as Europe’s leading challenger in the field. Both Mistral and the French government lobbied for softer regulations under the E.U.’s comprehensive AI Act. France’s Summit will feature a business-focused event, hosted across town at Station F, France’s largest start-up hub. “To me, it looks a lot like they’re trying to use it to be a French industry fair,” says Andrea Miotti, the executive director of Control AI, a non-profit that advocates for guarding against existential risks from AI. “They’re taking a summit that was focused on safety and turning it away. In the rhetoric, it’s very much like: let’s stop talking about the risks and start talking about the great innovation that we can do.” 

The tension between safety and competitiveness is playing out elsewhere, including India, which, it was announced last month, will co-chair France’s Summit. In March, India issued an advisory that pushed companies to obtain the government’s permission before deploying certain AI models, and take steps to prevent harm. It then swiftly reversed course after receiving sharp criticism from industry. In California—home to many of the top AI developers—a landmark bill, which mandated that the largest AI developers implement safeguards to mitigate catastrophic risks, garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants including OpenAI, Meta, and Google. In late August, the bill passed both chambers of California’s legislature with strong majorities, but in September it was vetoed by Governor Gavin Newsom, who argued the measures could stifle innovation. In January, President Donald Trump repealed former President Joe Biden’s sweeping Executive Order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an Executive Order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation” to secure U.S. leadership over the technology.

Markus Anderljung, director of policy and research at AI safety think-tank the Centre for the Governance of AI, says that safety could be woven into the France Summit’s broader goals. For instance, initiatives to distribute AI’s benefits globally might be linked to commitments from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded—particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the commitments to succeed, accountability mechanisms must also be strengthened. “Commitments without follow-ups might just be empty words,” he says. “They just don’t matter unless you know what was committed to actually gets done.”

A focus on AI’s extreme risks does not have to come at the exclusion of other important issues. “I know that the organizers of the French summit care a lot about [AI’s] positive impact on the global majority,” Bengio says. “That’s a very important mission that I embrace completely.” But he argues the potential severity of loss-of-control risks warrant invoking the precautionary principle—the idea that we should take preventive measures, even absent scientific consensus. It’s a principle that has been invoked by U.N. declarations aimed at protecting the environment, and in sensitive scientific domains like human cloning.

But for Bouverot, it is a question of balancing competing demands. “We don’t want to solve everything—we can’t, nobody can,” she says, adding that the focus is on making AI more concrete. “We want to work from the level of scientific consensus, whatever level of consensus is reached.”


In mid-December, in France’s foreign ministry, Bouverot faced an unusual dilemma. Across the table, a South Korean official explained his country’s eagerness to join the summit. But days earlier, South Korea’s political leadership had been thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summit’s leaders’ session, declared martial law before being swiftly impeached, leaving the question of who would represent the country—and whether officials could attend at all—up in the air.

There is a great deal of uncertainty—not only over the pace at which AI will advance, but also over the degree to which governments will be willing to engage. France’s own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, marking the first such collapse since the 1960s. And, as Trump, long skeptical of international institutions, returns to the Oval Office, it is yet to be seen how Vice President Vance will approach the Paris meeting.

When reflecting on the technology’s uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with powerful but nascent technology. “I have this quote from Marie Curie, which I really love,” Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: “Nothing in life is to be feared, it is only to be understood.” Curie’s work ultimately cost her life—she died at a relatively young 66 from a rare blood disorder, likely caused by prolonged radiation exposure.

Elise Smith Defends DEI as Good Business

6 February 2025 at 12:04
Elise Smith

In recent years, right-leaning leaders in politics and tech like Donald Trump and Elon Musk have attacked the value of DEI (diversity, equity, and inclusion) initiatives. But for Elise Smith, the CEO and co-founder of the tech startup Praxis Labs, learning to navigate cultural differences is simply good business, especially for ambitious multinational companies with employees and clients around the world. “Regardless of what you think about the term DEI, this work will continue, because fundamentally it does drive better business outcomes,” says Smith, 34. “Fortune 500 companies are trying to figure out: How do we serve our clients and customers, knowing that there’s a ton of diversity within them? How do we bring our teams together to do their best work?” 

Praxis creates interactive AI tools that allow business leaders to practice and improve their workplace communication and better interact with employees. These tools are something like the next-generation iterations of corporate diversity training videos, with many modules specifically designed to help managers give feedback to underperformers, navigate divisive topics like bias, and ask better questions. Users interact with a generative AI chatbot that simulates high-pressure work scenarios, such as performance reviews or interpersonal disagreements. The chatbot then provides personalized guidance on how one might better handle situations, especially with regard to cultural sensitivities. While it is currently confined to a specific set of scenarios, Smith hopes the chatbot will receive an upgrade this year that allows it to be “always-on” and freely give advice about workplace concerns.

“You can’t play basketball by just watching a video in theory about passing and shooting—you have to do it,” Smith says. “Learning these critical human skills is very similar. You have to do it in a simulated, experiential way that will truly translate to your ability in the moment when it matters.” 

Smith cut her teeth at IBM’s Watson Group in the early 2010s, strategizing how to apply the AI technology powering that early supercomputer toward education. Inspired by that experience as well as watching her parents navigate systems that weren’t set up for them, she founded Praxis alongside Heather Shen in 2018. (Shen was named to Forbes’ 30 Under 30 list this year.) Praxis has now raised $23 million worth of venture capital and has a staff of around 15 people, and its client list includes Uber, Amazon, and Accenture. The goal, Smith says, is to help these companies to improve employee engagement, retention, and global business relationships. 

Smith believes that in a world in which AI tools are growing increasingly powerful in performing mechanical tasks, soft skills like clear communication, emotional intelligence, and the ability to defuse conflict are more important than ever. “We have to connect at a real, personal level, beyond the transactional trust that I think we so often find in workplaces,” she says. “We are so divided, and yet we have to learn to work with people who think differently than us and believe in different things than us, to achieve outcomes that hopefully better all of us.”

Correction, February 6

The original version of this story misstated which types of tools Praxis builds. The company creates AI tools but no longer creates VR tools.

Exclusive: The British Public Wants Stricter AI Rules Than Its Government Does

6 February 2025 at 09:00
Prime Minister Keir Starmer Gives Speech On AI Opportunities Action Plan

Even as Silicon Valley races to build more powerful artificial intelligence models, public opinion on the other side of the Atlantic remains decidedly skeptical of the influence of tech CEOs when it comes to regulating the sector, with the vast majority of Britons worried about the safety of new AI systems.

The concerns, highlighted in a new poll shared exclusively with TIME, come as world leaders and tech bosses—from U.S. Vice President JD Vance, France’s Emmanuel Macron and India’s Narendra Modi to OpenAI chief Sam Altman and Google’s Sundar Pichai—prepare to gather in Paris next week to discuss the rapid pace of developments in AI.

The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of “smarter-than-human” AI models. Just 9%, meanwhile, said they trust tech CEOs to act in the public interest when discussing AI regulation. The survey was conducted by the British pollster YouGov on behalf of Control AI, a non-profit focused on AI risks.

The results reflect growing public anxieties about the development of AI systems that could match or even outdo humans at most tasks. Such technology does not currently exist, but creating it is the express goal of major AI companies such as OpenAI, Google, Anthropic, and Meta, the owner of Facebook and Instagram. In fact, several tech CEOs expect such systems to become a reality in a matter of years, if not sooner. It is against this backdrop that 75% of the Britons polled told YouGov that laws should explicitly prohibit the development of AI systems that can escape their environments. More than half (63%) agreed with the idea of prohibiting the creation of AI systems that can make themselves smarter or more powerful.

The findings of the British poll mirror the results of recent U.S. surveys, and point to a growing gap between public opinion and regulatory action when it comes to advanced AI. Even the European Union’s AI Act – widely seen as the world’s most comprehensive AI legislation and which began to come into force this month – stops short of directly addressing many of the possible risks posed by AI systems that meet or surpass human abilities.

In Britain, where the YouGov survey of 2,344 adults was conducted over Jan. 16-17, there remains no comprehensive regulatory framework for AI. While the ruling Labour Party had pledged to introduce new AI rules ahead of the last general election in 2024, since coming to power it has dragged its feet by repeatedly delaying the introduction of an AI bill as it grapples with the challenge of restoring growth to its struggling economy. In January, for example, British Prime Minister Keir Starmer announced that AI would be “mainlined into the veins” of the nation to boost growth—a clear shift away from talk of regulation.

“It seems like they’re sidelining their promises at the moment, for the shiny attraction of growth,” says Andrea Miotti, the executive director of Control AI. “But the thing is, the British public is very clear about what they want. They want these promises to be met.”

A New Push for New Laws 

The polling was accompanied by a statement, signed by 16 British lawmakers from both major political parties, calling on the government to introduce new AI laws targeted specifically at “superintelligent” AI systems, or those that could become far smarter than humans.

“Specialised AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would [by contrast] compromise national and global security,” the statement reads. “The U.K. can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.”

Miotti, from Control AI, says that the U.K. does not have to sacrifice growth by imposing sweeping regulations such as those contained in the E.U. AI Act. Indeed, many in the industry blame the AI Act and other sweeping E.U. laws for stymying the growth of the European tech sector. Instead, Miotti argues, the U.K. could impose “narrow, targeted, surgical AI regulation” that only applies to the most powerful models posing what he sees as the biggest risks.

“What the public wants is systems that help them, not systems that replace them,” Miotti says. “We should not pursue [superintelligent systems] until we know how to prove that they’re safe.”

The polling data also shows that a large majority (74%) of Brits support a pledge made by the Labour Party ahead of the last election to enshrine the U.K.’s AI Safety Institute (AISI) into law, giving it power to act as a regulator. Currently, the AISI – an arm of the U.K. government – carries out tests on private AI models ahead of their release, but has no authority to compel tech companies to make changes or to rule that models are too dangerous to be released.

Google Scraps Hiring Targets After Trump’s Anti-DEI Pressure on Government Contractors

6 February 2025 at 02:00
A view of Google Headquarters in Mountain View, Calif., United States on Aug. 22, 2024.

SAN FRANCISCO — Google is scrapping some of its diversity hiring targets, joining a lengthening list of U.S. companies that have abandoned or scaled back their diversity, equity and inclusion programs.

The move, which was outlined in an email sent to Google employees on Wednesday, came in the wake of an executive order issued by President Donald Trump that was aimed in part at pressuring government contractors to scrap their DEI initiatives.

Read More: What Is DEI and What Challenges Does It Face Amid Trump’s Executive Orders?

Like several other major tech companies, Google sells some of its technology and services to the federal government, including its rapidly growing cloud division that’s a key piece of its push into artificial intelligence.

Google’s parent company, Alphabet, also signaled the shift in its annual 10-K report it filed this week with the Securities and Exchange Commission. In it, Google removed a line included in previous annual reports saying that it’s “committed to making diversity, equity, and inclusion part of everything we do and to growing a workforce that is representative of the users we serve.”

Google generates most of Alphabet’s annual revenue of $350 billion and accounts for almost all of its worldwide workforce of 183,000.

“We’re committed to creating a workplace where all our employees can succeed and have equal opportunities, and over the last year we’ve been reviewing our programs designed to help us get there,” Google said in a statement to The Associated Press. “We’ve updated our 10-K language to reflect this, and as a federal contractor, our teams are also evaluating changes required following recent court decisions and executive orders on this topic.”

The change in language also comes slightly more than two weeks after Google CEO Sundar Pichai and other prominent technology executives—including Tesla CEO Elon Musk, Amazon founder Jeff Bezos, Apple CEO Tim Cook and Meta Platforms CEO Mark Zuckerberg—stood behind Trump during his inauguration.

Meta jettisoned its DEI program last month, shortly before the inauguration, while Amazon halted some of its DEI programs in December following Trump’s election.

Many companies outside of the technology industry also have backed away from DEI. Those include Walt Disney Co., McDonald’s, Ford, Walmart, Target, Lowe’s and John Deere.

Trump’s recent executive order threatens to impose financial sanctions on federal contractors deemed to have “illegal” DEI programs. If the companies are found to be in violation, they could be subject to massive damages under the 1863 False Claims Act. That law states that contractors that make false claims to the government could be liable for three times the government’s damages.

The order also directed all federal agencies to choose the targets of up to nine investigations of publicly traded companies, large non-profits and other institutions with DEI policies that constitute “illegal discrimination or preference.”

The challenge for companies is knowing which DEI policies the Trump administration may decide are “illegal.” Trump’s executive order seeks to “terminate all discriminatory and illegal preferences, mandates, policies, programs” and other activities of the federal government, and to compel federal agencies “to combat illegal private-sector DEI preferences, mandates, policies, programs, and activities.”

In both the public and private sector, diversity initiatives have covered a range of practices, from anti-discrimination training and conducting pay equity studies to making efforts to recruit more members of minority groups and women as employees.

Google, which is based in Mountain View, California, has tried to hire more people from underrepresented groups for more than a decade but stepped up those efforts in 2020 after the police killing of George Floyd in Minneapolis triggered an outcry for more social justice.

Shortly after Floyd died, Pichai set a goal to increase the representation of underrepresented groups in the company’s largely Asian and white leadership ranks by 30% by 2025. Google has made some headway since then, but the makeup of its leadership has not changed dramatically.

The representation of Black people in the company’s leadership ranks rose from 2.6% in 2020 to 5.1% last year, according to Google’s annual diversity report. For Hispanic people, the change was 3.7% to 4.3%. The share of women in leadership roles, meanwhile, increased from 26.7% in 2020 to 32.8% in 2024, according to the company’s report.

The numbers aren’t much different in Google’s overall workforce, with Black employees comprising just 5.7% and Latino employees 7.5%. Two-thirds of Google’s worldwide workforce is made up of men, according to the diversity report.

—Associated Press business reporter Alexandra Olson contributed to this report.

Elon Musk Creates Confusion About Direct File, the IRS’ Free Tax-Prep Program

Tesla, SpaceX CEO and X owner Elon Musk gestures while speaking during an inauguration event at Capital One Arena in Washington, D.C., on Jan. 20, 2025.

WASHINGTON — Billionaire tech mogul Elon Musk posted Monday on his social media site that he had “deleted” 18F, a government agency that worked on technology projects such as the IRS’ Direct File program. This led to some confusion about whether Direct File is still available to taxpayers, but the free filing program is still available, at least for the coming tax season.

While Musk’s tweet may have intimated that the group of workers had been eliminated, an individual with knowledge of the IRS workforce said the Direct File program was still accepting tax returns. The individual spoke anonymously with The Associated Press because they were not authorized to talk to the press.

Read More: Trump and Musk Have All of Washington on Edge—Just Like They Wanted

As of Monday evening, 18F’s website was still operational, as was the Direct File website. But the digital services agency’s X account was deleted.

The IRS announced last year that it will make the free electronic tax return filing system permanent and asked all 50 states and the District of Columbia to help taxpayers file their returns through the program in 2025.

The Direct File trial began in March 2024. But the IRS has faced intense blowback to Direct File from private tax preparation companies that have made billions from charging people to use their software and have spent millions lobbying Congress. The average American typically spends about $140 preparing their returns each year.

Commercial tax prep companies that have lobbied against development of the free file program say free file options already exist.

Several organizations, including private tax firms, offer free online tax preparation assistance to taxpayers under certain income limits. Fillable forms are available online on the IRS website, but they are complicated and taxpayers still have to calculate their tax liability.

Last May the IRS announced it would make the Direct File program permanent. It is now available in 25 states, up from 12 states that were part of last year’s pilot program.

The program allows people in some states with very simple W-2s to calculate and submit their returns directly to the IRS. Those using the pilot program in 2024 claimed more than $90 million in refunds, the IRS said in October.

During his confirmation hearing Jan. 16, Scott Bessent, now treasury secretary, committed to maintaining the Direct File program at least for the 2025 tax season, which began Jan. 27.

Musk was responding to a post by an X user who called 18F “far left” and mused that Direct File “puts the government in charge” of preparing people’s taxes.

“That group has been deleted,” Musk wrote.

What Can the ‘Black Box’ Tell Us About Plane Crashes?

31 January 2025 at 21:43
CORRECTION Aircraft Down

It’s one of the most important pieces of forensic evidence following a plane crash: The so-called “black box.”

There are actually two of these remarkably sturdy devices: the cockpit voice recorder and the flight data recorder. And they’re typically orange, not black.

Federal investigators on Friday recovered the black boxes from the passenger jet that crashed in the Potomac River just outside Washington on Wednesday, while authorities were still searching for similar devices in the military helicopter that also went down. The collision killed 67 people in the deadliest U.S. aviation disaster since 2001.

Here is an explanation of what black boxes are and what they can do:

What are black boxes?

The cockpit voice recorder and the flight data recorder are tools that help investigators reconstruct the events that lead up to a plane crash.

They’re orange in color to make them easier to find in wreckage, sometimes at great ocean depths. They’re usually installed in a plane’s tail section, which is considered the most survivable part of the aircraft, according to the National Transportation Safety Board’s website.

They’re also equipped with beacons that activate when immersed in water and can transmit from depths of 14,000 feet (4,267 meters). While the battery that powers the beacon will run down after about one month, there’s no definitive shelf-life for the data itself, NTSB investigators told The Associated Press in 2014.

For example, black boxes of an Air France flight that crashed in the Atlantic Ocean in 2009 were found two years later from a depth of more than 10,000 feet, and technicians were able to recover most of the information.

If black boxes have been submerged in seawater, technicians will keep them submerged in fresh water to wash away the corrosive salt. If water seeps in, the devices must be carefully dried for hours or even days using a vacuum oven to prevent memory chips from cracking.

The electronics and memory are checked, and any necessary repairs made. Chips are scrutinized under a microscope.

What does the cockpit voice recorder do?

The cockpit voice recorder collects radio transmissions and sounds such as the pilots’ voices and engine noises, according to the NTSB’s website.

Depending on what happened, investigators may pay close attention to the engine noise, stall warnings and other clicks and pops, the NTSB said. And from those sounds, investigators can often determine engine speed and the failure of some systems.

Investigators are also listening to conversations between the pilots and crew and communications with air traffic control. Experts make a meticulous transcript of the voice recording, which can take up to a week.

What does the flight data recorder do?

The flight data recorder monitors a plane’s altitude, airspeed and heading, according to the NTSB. Those factors are among at least 88 parameters that newly built planes must monitor.

Some can collect the status of more than 1,000 other characteristics, from a wing’s flap position to the smoke alarms. The NTSB said it can generate a computer-animated video reconstruction of the flight from the information collected.

NTSB investigators told the AP in 2014 that a flight data recorder carries 25 hours of information, including prior flights within that time span, which can sometimes provide hints about the cause of a mechanical failure on a later flight. An initial assessment of the data is provided to investigators within 24 hours, but analysis will continue for weeks more.

What are the origins of the black box?

At least two people have been credited with creating devices that record what happens on an airplane.

One is French aviation engineer François Hussenot. In the 1930s, he found a way to record a plane’s speed, altitude and other parameters onto photographic film, according to the website for European plane-maker Airbus.

In the 1950s, Australian scientist David Warren came up with the idea for the cockpit voice recorder, according to his 2010 AP obituary.

Warren had been investigating the crash of the world’s first commercial jet airliner, the Comet, in 1953, and thought it would be helpful for airline accident investigators to have a recording of voices in the cockpit, the Australian Department of Defence said in a statement after his death.

Warren designed and constructed a prototype in 1956. But it took several years before officials understood just how valuable the device could be and began installing them in commercial airlines worldwide. Warren’s father had been killed in a plane crash in Australia in 1934.

Why the name “black box”?

Some have suggested that it stems from Hussenot’s device because it used film and “ran continuously in a light-tight box, hence the name ‘black box,’” according to Airbus, which noted that orange was the box’s chosen color from the beginning to make it easy to find.

Other theories include the boxes turning black when they get charred in a crash, the Smithsonian Magazine wrote in 2019.

“The truth is much more mundane,” the magazine wrote. “In the post-World War II field of electronic circuitry, black box became the ubiquitous term for a self-contained electronic device whose input and output were more defining than its internal operations.”

The media continues to use the term, the magazine wrote, “because of the sense of mystery it conveys in the aftermath of an air disaster.”

Is the DeepSeek Panic Overblown?

30 January 2025 at 19:56
DeepSeek AI

This week, leaders across Silicon Valley, Washington D.C., Wall Street, and beyond have been thrown into disarray due to the unexpected rise of the Chinese AI company DeepSeek. DeepSeek recently released AI models that rivaled OpenAI’s, seemingly for a fraction of the price, and despite American policy designed to slow China’s progress. As a result, many analysts concluded that DeepSeek’s success undermined the core beliefs driving the American AI industry—and that the companies leading this charge, like Nvidia and Microsoft, were not as valuable or technologically ahead as previously believed. Tech stocks dropped hundreds of billions of dollars in days. 

But AI scientists have pushed back, arguing that many of those fears are exaggerated. They say that while DeepSeek does represent a genuine advancement in AI efficiency, it is not a massive technological breakthrough—and that the American AI industry still has key advantages over China’s.

“It’s not a leap forward on AI frontier capabilities,” says Lennart Heim, an AI researcher at RAND. “I think the market just got it wrong.”

Read More: What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

Here are several claims being widely circulated about DeepSeek’s implications, and why scientists say they’re incomplete or outright wrong. 

Claim: DeepSeek is much cheaper than other models. 

In December, DeepSeek reported that its V3 model cost just $6 million to train. This figure seemed startlingly low compared to the more than $100 million that OpenAI said it spent training GPT-4, or the “few tens of millions” that Anthropic spent training a recent version of its Claude model.

DeepSeek’s lower price tag was thanks to some big efficiency gains that the company’s researchers described in a paper accompanying their model’s release. But were those gains so large as to be unexpected? Heim argues no: that machine learning algorithms have always gotten cheaper over time. Dario Amodei, the CEO of AI company Anthropic, made the same point in an essay published Jan. 28, writing that while the efficiency gains by DeepSeek’s researchers were impressive, they were not a “unique breakthrough or something that fundamentally changes the economics of LLM’s.” “It’s an expected point on an ongoing cost reduction curve,” he wrote. “What’s different this time is that the company that was first to demonstrate the expected cost reductions was Chinese.”

To further obscure the picture, DeepSeek may also not be being entirely honest about its expenses. In the wake of claims about the low cost of training its models, tech CEOs cited reports that DeepSeek actually had a stash of 50,000 Nvidia chips, which it could not talk about due to U.S. export controls. Those chips would cost somewhere in the region of $1 billion.

It is, however, true that DeepSeek’s new R1 model is far cheaper for users to access than its competitor model OpenAI o1, with its model access fees around 30 times lower ($2.19 per million “tokens,” or segments of words outputted, versus $60). That sparked worries among some investors of a looming price war in the American AI industry, which could reduce expected returns on investment and make it more difficult for U.S. companies to raise funds required to build new data centers to fuel their AI models.

Oliver Stephenson, associate director of AI and emerging tech policy at the Federation of American Scientists, says that people shouldn’t draw conclusions from this price point. “While DeepSeek has made genuine efficiency gains, their pricing could be an attention-grabbing strategy,” he says. “They could be making a loss on inference.” (Inference is the running of an already-formed AI system.)

On Monday, Jan. 27, DeepSeek said that it was targeted by a cyberattack and was limiting new registrations for users outside of China. 

Claim: DeepSeek shows that export controls aren’t working. 

When the AI arms race heated up in 2022, the Biden Administration moved to cut off China’s access to cutting edge chips, most notably Nvidia’s H100s. As a result, Nvidia created an inferior chip, the H800, to legally sell to Chinese companies. The Biden Administration later opted to ban the sale of those chips to China, too. But by the time those extra controls went into effect a year later, Chinese companies had stockpiled thousands of H800s, generating a massive windfall for Nvidia. 

DeepSeek said its V3 model was built using the H800, which performs adequately for the type of model that the company is creating. But despite this success, experts argue that the chip controls may have stopped China from progressing even further. “In an environment where China had access to more compute, we would expect even more breakthroughs,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. “The export controls might be working, but that does not mean that China will not still be able to build more and more powerful models.”

Read more: AI Could Reshape Everything We Know About Climate Change

And going forward, it may become increasingly challenging for DeepSeek and other Chinese companies to keep pace with frontier models given their chip constraints. While OpenAI’s GPT-4 trained on the order of 10,000 H100s, the next generation of models will likely require ten times or a hundred times that amount. Even if China is able to build formidable models thanks to efficiency gains, export controls will likely bottleneck their ability to deploy their models to a wide userbase. “If we think in the future that an AI agent will do somebody’s job, then how many digital workers you have is a function of how much compute you have,” Heim says. “If an AI model can’t be used that much, this limits its impact on the world.”

Claim: DeepSeek shows that high-end chips aren’t as valuable as people thought.

As DeepSeek hype mounted this week, many investors concluded that its accomplishments threatened Nvidia’s AI dominance—and sold off shares of a company that was, in January, the most valuable in the world. As a result, Nvidia’s stock price dropped 17% on Monday, erasing nearly $600 billion of the company’s value, based on the idea that its chips would be less valuable under this new paradigm.

But many AI experts argued that this drop in Nvidia’s stock price was the market acting irrationally. Many of them rushed to “buy the dip,” resulting in the stock recapturing some of its lost value. Advances in the efficiency of computing power, they noted, have historically led to more demand for chips, not less. As tech stocks fell, Satya Nadella, the CEO of Microsoft, posted a link on X to the Wikipedia page of the Jevons Paradox, first observed in the 19th century, named after an economist who noted that as coal burning became more efficient, people actually used more coal, because it had become cheaper and more widely available.

Experts believe that a similar dynamic will play out in the race to create advanced AI. “What we’re seeing is an impressive technical breakthrough built on top of Nvidia’s product that gets better as you use more of Nvidia’s product,” Stephenson says. “That does not seem like a situation in which you’re going to see less demand for Nvidia’s product.” 

Two days after his inauguration, President Donald Trump announced a $500 billion joint public-private venture to build out AI data centers, driven by the idea that scale is essential to build the most powerful AI systems. DeepSeek’s rise, however, led many to argue that this approach was misguided or wasteful. 

But some AI scientists disagree. “DeepSeek shows AI is getting better, and it’s not stopping,” Heim says. “It has massive implications for economic impact if AI is getting used, and therefore such investments make sense.” 

American leaders have signaled that DeepSeek has made them even more ravenous to build out AI infrastructure in order to maintain the country’s lead. Trump, in a press conference on Monday, said that DeepSeek “should be a wake-up call for our industries that we need to be laser-focused on competing to win.”

However, Stephenson cautions that this data center buildout will come with a “huge number of negative externalities.” Data centers often use a vast amount of power, coincide with massive hikes in local electricity bills, and threaten water supply, he says, adding: “We’re going to face a lot of problems in doing these infrastructure buildups.”  

Why AI Safety Researchers Are Worried About DeepSeek

29 January 2025 at 17:07
Multicolored data

The release of DeepSeek R1 stunned Wall Street and Silicon Valley this month, spooking investors and impressing tech leaders. But amid all the talk, many overlooked a critical detail about the way the new Chinese AI model functions—a nuance that has researchers worried about humanity’s ability to control sophisticated new artificial intelligence systems.

It’s all down to an innovation in how DeepSeek R1 was trained—one that led to surprising behaviors in an early version of the model, which researchers described in the technical documentation accompanying its release.

During testing, researchers noticed that the model would spontaneously switch between English and Chinese while it was solving problems. When they forced it to stick to one language, thus making it easier for users to follow along, they found that the system’s ability to solve the same problems would diminish.

That finding rang alarm bells for some AI safety researchers. Currently, the most capable AI systems “think” in human-legible languages, writing out their reasoning before coming to a conclusion. That has been a boon for safety teams, whose most effective guardrails involve monitoring models’ so-called “chains of thought” for signs of dangerous behaviors. But DeepSeek’s results raised the possibility of a decoupling on the horizon: one where new AI capabilities could be gained from freeing models of the constraints of human language altogether.

To be sure, DeepSeek’s language switching is not by itself cause for alarm. Instead, what worries researchers is the new innovation that caused it. The DeepSeek paper describes a novel training method whereby the model was rewarded purely for getting correct answers, regardless of how comprehensible its thinking process was to humans. The worry is that this incentive-based approach could eventually lead AI systems to develop completely inscrutable ways of reasoning, maybe even creating their own non-human languages, if doing so proves to be more effective.

Were the AI industry to proceed in that direction—seeking more powerful systems by giving up on legibility—“it would take away what was looking like it could have been an easy win” for AI safety, says Sam Bowman, the leader of a research department at Anthropic, an AI company, focused on “aligning” AI to human preferences. “We would be forfeiting an ability that we might otherwise have had to keep an eye on them.”

Read More: What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

Thinking without words

An AI creating its own alien language is not as outlandish as it may sound.

Last December, Meta researchers set out to test the hypothesis that human language wasn’t the optimal format for carrying out reasoning—and that large language models (or LLMs, the AI systems that underpin OpenAI’s ChatGPT and DeepSeek’s R1) might be able to reason more efficiently and accurately if they were unhobbled by that linguistic constraint.

The Meta researchers went on to design a model that, instead of carrying out its reasoning in words, did so using a series of numbers that represented the most recent patterns inside its neural network—essentially its internal reasoning engine. This model, they discovered, began to generate what they called “continuous thoughts”—essentially numbers encoding multiple potential reasoning paths simultaneously. The numbers were completely opaque and inscrutable to human eyes. But this strategy, they found, created “emergent advanced reasoning patterns” in the model. Those patterns led to higher scores on some logical reasoning tasks, compared to models that reasoned using human language.

Though the Meta research project was very different from DeepSeek’s, its findings dovetailed with the Chinese research in one crucial way.

Both DeepSeek and Meta showed that “human legibility imposes a tax” on the performance of AI systems, according to Jeremie Harris, the CEO of Gladstone AI, a firm that advises the U.S. government on AI safety challenges. “In the limit, there’s no reason that [an AI’s thought process] should look human legible at all,” Harris says.

And this possibility has some safety experts concerned. 

“It seems like the writing is on the wall that there is this other avenue available [for AI research], where you just optimize for the best reasoning you can get,” says Bowman, the Anthropic safety team leader. “I expect people will scale this work up. And the risk is, we wind up with models where we’re not able to say with confidence that we know what they’re trying to do, what their values are, or how they would make hard decisions when we set them up as agents.”

For their part, the Meta researchers argued that their research need not result in humans being relegated to the sidelines. “It would be ideal for LLMs to have the freedom to reason without any language constraints, and then translate their findings into language only when necessary,” they wrote in their paper. (Meta did not respond to a request for comment on the suggestion that the research could lead in a dangerous direction.)

Read More: Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok

The limits of language

Of course, even human-legible AI reasoning isn’t without its problems. 

When AI systems explain their thinking in plain English, it might look like they’re faithfully showing their work. But some experts aren’t sure if these explanations actually reveal how the AI really makes decisions. It could be like asking a politician for the motivations behind a policy—they might come up with an explanation that sounds good, but has little connection to the real decision-making process.

While having AI explain itself in human terms isn’t perfect, many researchers think it’s better than the alternative: letting AI develop its own mysterious internal language that we can’t understand. Scientists are working on other ways to peek inside AI systems, similar to how doctors use brain scans to study human thinking. But these methods are still new, and haven’t yet given us reliable ways to make AI systems safer.

So, many researchers remain skeptical of efforts to encourage AI to reason in ways other than human language. 

“If we don’t pursue this path, I think we’ll be in a much better position for safety,” Bowman says. “If we do, we will have taken away what, right now, seems like our best point of leverage on some very scary open problems in alignment that we have not yet solved.”
