Tech – TIME

Is the DeepSeek Panic Overblown?

30 January 2025 at 19:56

This week, leaders across Silicon Valley, Washington D.C., Wall Street, and beyond have been thrown into disarray due to the unexpected rise of the Chinese AI company DeepSeek. DeepSeek recently released AI models that rivaled OpenAI’s, seemingly for a fraction of the price, and despite American policy designed to slow China’s progress. As a result, many analysts concluded that DeepSeek’s success undermined the core beliefs driving the American AI industry—and that the companies leading this charge, like Nvidia and Microsoft, were not as valuable or technologically ahead as previously believed. Tech stocks dropped hundreds of billions of dollars in days. 


But AI scientists have pushed back, arguing that many of those fears are exaggerated. They say that while DeepSeek does represent a genuine advancement in AI efficiency, it is not a massive technological breakthrough—and that the American AI industry still has key advantages over China’s.

“It’s not a leap forward on AI frontier capabilities,” says Lennart Heim, an AI researcher at RAND. “I think the market just got it wrong.”

Read More: What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

Here are several claims being widely circulated about DeepSeek’s implications, and why scientists say they’re incomplete or outright wrong. 

Claim: DeepSeek is much cheaper than other models. 

In December, DeepSeek reported that its V3 model cost just $6 million to train. This figure seemed startlingly low compared to the more than $100 million that OpenAI said it spent training GPT-4, or the “few tens of millions” that Anthropic spent training a recent version of its Claude model.

DeepSeek’s lower price tag was thanks to some big efficiency gains that the company’s researchers described in a paper accompanying their model’s release. But were those gains so large as to be unexpected? Heim argues no: that machine learning algorithms have always gotten cheaper over time. Dario Amodei, the CEO of AI company Anthropic, made the same point in an essay published Jan. 28, writing that while the efficiency gains by DeepSeek’s researchers were impressive, they were not a “unique breakthrough or something that fundamentally changes the economics of LLM’s.” “It’s an expected point on an ongoing cost reduction curve,” he wrote. “What’s different this time is that the company that was first to demonstrate the expected cost reductions was Chinese.”

To further obscure the picture, DeepSeek may also not be entirely honest about its expenses. In the wake of claims about the low cost of training its models, tech CEOs cited reports that DeepSeek actually had a stash of 50,000 Nvidia chips, which it could not discuss publicly due to U.S. export controls. Those chips would cost somewhere in the region of $1 billion.

It is, however, true that DeepSeek’s new R1 model is far cheaper for users to access than its competitor, OpenAI’s o1, with access fees around 30 times lower ($2.19 per million “tokens,” or segments of words it outputs, versus $60). That sparked worries among some investors of a looming price war in the American AI industry, which could reduce expected returns on investment and make it more difficult for U.S. companies to raise the funds required to build new data centers to fuel their AI models.

Oliver Stephenson, associate director of AI and emerging tech policy at the Federation of American Scientists, says that people shouldn’t draw conclusions from this price point. “While DeepSeek has made genuine efficiency gains, their pricing could be an attention-grabbing strategy,” he says. “They could be making a loss on inference.” (Inference is the running of an already-formed AI system.)

On Monday, Jan. 27, DeepSeek said that it was targeted by a cyberattack and was limiting new registrations for users outside of China. 

Claim: DeepSeek shows that export controls aren’t working. 

When the AI arms race heated up in 2022, the Biden Administration moved to cut off China’s access to cutting-edge chips, most notably Nvidia’s H100s. As a result, Nvidia created an inferior chip, the H800, to sell legally to Chinese companies. The Biden Administration later opted to ban the sale of those chips to China, too. But by the time those extra controls went into effect a year later, Chinese companies had stockpiled thousands of H800s, generating a massive windfall for Nvidia.

DeepSeek said its V3 model was built using the H800, which performs adequately for the type of model that the company is creating. But despite this success, experts argue that the chip controls may have stopped China from progressing even further. “In an environment where China had access to more compute, we would expect even more breakthroughs,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. “The export controls might be working, but that does not mean that China will not still be able to build more and more powerful models.”

Read more: AI Could Reshape Everything We Know About Climate Change

And going forward, it may become increasingly challenging for DeepSeek and other Chinese companies to keep pace with frontier models given their chip constraints. While OpenAI’s GPT-4 was trained on the order of 10,000 H100s, the next generation of models will likely require ten or a hundred times that amount. Even if China is able to build formidable models thanks to efficiency gains, export controls will likely bottleneck its ability to deploy those models to a wide user base. “If we think in the future that an AI agent will do somebody’s job, then how many digital workers you have is a function of how much compute you have,” Heim says. “If an AI model can’t be used that much, this limits its impact on the world.”

Claim: DeepSeek shows that high-end chips aren’t as valuable as people thought.

As DeepSeek hype mounted this week, many investors concluded that its accomplishments threatened Nvidia’s AI dominance—and sold off shares of a company that was, in January, the most valuable in the world. As a result, Nvidia’s stock price dropped 17% on Monday, erasing nearly $600 billion in value, on the idea that its chips would be less valuable under this new paradigm.

But many AI experts argued that this drop in Nvidia’s stock price was the market acting irrationally. Many of them rushed to “buy the dip,” and the stock recaptured some of its lost value. Advances in the efficiency of computing power, they noted, have historically led to more demand for chips, not less. As tech stocks fell, Satya Nadella, the CEO of Microsoft, posted a link on X to the Wikipedia page for the Jevons Paradox, named after the 19th-century economist William Stanley Jevons, who observed that as coal burning became more efficient, people actually used more coal, because it had become cheaper and more widely available.

Experts believe that a similar dynamic will play out in the race to create advanced AI. “What we’re seeing is an impressive technical breakthrough built on top of Nvidia’s product that gets better as you use more of Nvidia’s product,” Stephenson says. “That does not seem like a situation in which you’re going to see less demand for Nvidia’s product.” 

Two days after his inauguration, President Donald Trump announced a $500 billion joint public-private venture to build out AI data centers, driven by the idea that scale is essential to build the most powerful AI systems. DeepSeek’s rise, however, led many to argue that this approach was misguided or wasteful. 

But some AI scientists disagree. “DeepSeek shows AI is getting better, and it’s not stopping,” Heim says. “It has massive implications for economic impact if AI is getting used, and therefore such investments make sense.” 

American leaders have signaled that DeepSeek has made them even more ravenous to build out AI infrastructure in order to maintain the country’s lead. Trump, in a press conference on Monday, said that DeepSeek “should be a wake-up call for our industries that we need to be laser-focused on competing to win.”

However, Stephenson cautions that this data center buildout will come with a “huge number of negative externalities.” Data centers often use a vast amount of power, coincide with massive hikes in local electricity bills, and threaten water supply, he says, adding: “We’re going to face a lot of problems in doing these infrastructure buildups.”  


Why AI Safety Researchers Are Worried About DeepSeek

29 January 2025 at 17:07

The release of DeepSeek R1 stunned Wall Street and Silicon Valley this month, spooking investors and impressing tech leaders. But amid all the talk, many overlooked a critical detail about the way the new Chinese AI model functions—a nuance that has researchers worried about humanity’s ability to control sophisticated new artificial intelligence systems.

It’s all down to an innovation in how DeepSeek R1 was trained—one that led to surprising behaviors in an early version of the model, which researchers described in the technical documentation accompanying its release.


During testing, researchers noticed that the model would spontaneously switch between English and Chinese while it was solving problems. When they forced it to stick to one language, thus making it easier for users to follow along, they found that the system’s ability to solve the same problems would diminish.

That finding rang alarm bells for some AI safety researchers. Currently, the most capable AI systems “think” in human-legible languages, writing out their reasoning before coming to a conclusion. That has been a boon for safety teams, whose most effective guardrails involve monitoring models’ so-called “chains of thought” for signs of dangerous behaviors. But DeepSeek’s results raised the possibility of a decoupling on the horizon: one where new AI capabilities could be gained from freeing models of the constraints of human language altogether.

To be sure, DeepSeek’s language switching is not by itself cause for alarm. Instead, what worries researchers is the new innovation that caused it. The DeepSeek paper describes a novel training method whereby the model was rewarded purely for getting correct answers, regardless of how comprehensible its thinking process was to humans. The worry is that this incentive-based approach could eventually lead AI systems to develop completely inscrutable ways of reasoning, maybe even creating their own non-human languages, if doing so proves to be more effective.

Were the AI industry to proceed in that direction—seeking more powerful systems by giving up on legibility—“it would take away what was looking like it could have been an easy win” for AI safety, says Sam Bowman, the leader of a research department at Anthropic, an AI company, focused on “aligning” AI to human preferences. “We would be forfeiting an ability that we might otherwise have had to keep an eye on them.”

Read More: What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

Thinking without words

An AI creating its own alien language is not as outlandish as it may sound.

Last December, Meta researchers set out to test the hypothesis that human language wasn’t the optimal format for carrying out reasoning—and that large language models (or LLMs, the AI systems that underpin OpenAI’s ChatGPT and DeepSeek’s R1) might be able to reason more efficiently and accurately if they were unhobbled by that linguistic constraint.

The Meta researchers went on to design a model that, instead of carrying out its reasoning in words, did so using a series of numbers that represented the most recent patterns inside its neural network—essentially its internal reasoning engine. This model, they discovered, began to generate what they called “continuous thoughts”—essentially numbers encoding multiple potential reasoning paths simultaneously. The numbers were completely opaque and inscrutable to human eyes. But this strategy, they found, created “emergent advanced reasoning patterns” in the model. Those patterns led to higher scores on some logical reasoning tasks, compared to models that reasoned using human language.

Though the Meta research project was very different from DeepSeek’s, its findings dovetailed with the Chinese research in one crucial way.

Both DeepSeek and Meta showed that “human legibility imposes a tax” on the performance of AI systems, according to Jeremie Harris, the CEO of Gladstone AI, a firm that advises the U.S. government on AI safety challenges. “In the limit, there’s no reason that [an AI’s thought process] should look human legible at all,” Harris says.

And this possibility has some safety experts concerned. 

“It seems like the writing is on the wall that there is this other avenue available [for AI research], where you just optimize for the best reasoning you can get,” says Bowman, the Anthropic safety team leader. “I expect people will scale this work up. And the risk is, we wind up with models where we’re not able to say with confidence that we know what they’re trying to do, what their values are, or how they would make hard decisions when we set them up as agents.”

For their part, the Meta researchers argued that their research need not result in humans being relegated to the sidelines. “It would be ideal for LLMs to have the freedom to reason without any language constraints, and then translate their findings into language only when necessary,” they wrote in their paper. (Meta did not respond to a request for comment on the suggestion that the research could lead in a dangerous direction.)

Read More: Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok

The limits of language

Of course, even human-legible AI reasoning isn’t without its problems. 

When AI systems explain their thinking in plain English, it might look like they’re faithfully showing their work. But some experts aren’t sure if these explanations actually reveal how the AI really makes decisions. It could be like asking a politician for the motivations behind a policy—they might come up with an explanation that sounds good, but has little connection to the real decision-making process.

While having AI explain itself in human terms isn’t perfect, many researchers think it’s better than the alternative: letting AI develop its own mysterious internal language that we can’t understand. Scientists are working on other ways to peek inside AI systems, similar to how doctors use brain scans to study human thinking. But these methods are still new, and haven’t yet given us reliable ways to make AI systems safer.

So, many researchers remain skeptical of efforts to encourage AI to reason in ways other than human language. 

“If we don’t pursue this path, I think we’ll be in a much better position for safety,” Bowman says. “If we do, we will have taken away what, right now, seems like our best point of leverage on some very scary open problems in alignment that we have not yet solved.”

AI Could Reshape Everything We Know About Climate Change

29 January 2025 at 17:06

With one announcement, Chinese AI startup DeepSeek shook up all of Wall Street and Silicon Valley’s conventional wisdom about the future of AI. It should also shake up the climate and energy world. 

For the last year, analysts have warned that the data centers needed for AI would drive up power demand and, by extension, emissions as utilities build out natural gas infrastructure to help meet demand.


The DeepSeek announcement suggests that those assumptions may be wildly off. If the company’s claims are to be believed, AI may ultimately use less power and generate fewer emissions than anticipated.

Still, don’t jump for joy just yet. To my mind, the biggest lesson for the climate world from DeepSeek isn’t that AI emissions may be less than anticipated. Instead, DeepSeek shows how little we truly know about what AI means for the future of global emissions. AI will shape the world’s decarbonization trajectory across sectors and geographies, disrupting the very basics of how we understand the future of climate change; the question now is whether we can harness that disruption for the better.

“We’re just scratching the surface,” says Jason Bordoff, who runs the Center on Global Energy Policy at Columbia University, of the implications of AI for emissions. “We’re just at inning one of what AI is going to do, but I do have a lot of optimism.”


Many in the climate world woke up to AI early last year. Over the course of a few months, power sector experts warned that the U.S. isn’t prepared for the influx of electricity demand from AI as big technology companies race to deploy data centers to scale their ambitions. A number of studies have found that data centers could account for nearly 10% of electricity demand in the U.S. by 2030, up from 4% in 2023.

Many big tech companies have worked to scale clean electricity alongside their data centers—financing the buildout of renewable energy and paying to reopen dormant nuclear plants, among other things. But utilities have also turned to natural gas to help meet demand. Research released earlier this month by Rystad Energy, an energy research firm, shows that electric utilities in the U.S. have 17.5 GW of new natural gas capacity planned, equivalent to more than eight Hoover Dams, driven in large part by new data centers.

All of this means an uptick in emissions and deep concern among climate advocates who worry that the buildout of electricity generation for AI is about to lock the U.S. into a high-carbon future.

As concerning as this might be, the projections for short-term electricity demand growth might mask much more challenging risks that AI poses for efforts to tackle climate change. As AI drives new breakthroughs, it will change consumption patterns and economic behavior with the potential to increase emissions. Think of a retailer that uses AI to better tailor recommendations to a consumer, driving purchases (and emissions). Or consider an AI-powered autonomous vehicle that an owner leaves to roam the streets rather than paying for parking.  

At the most basic level, AI is bound to generate rapid productivity gains and rapid economic growth. That’s a good thing. But it’s also worth remembering that since the Industrial Revolution, rapid economic growth has driven a rise in emissions. More recently, some developed economies have seen a decoupling of growth from emissions, but that has required active effort from policymakers. To avoid an AI-driven surge in emissions may require an active effort this time, too. 

But AI isn’t all risk. Indeed, it’s very easy to imagine the upsides of AI far outweighing the downsides. Most obviously, as DeepSeek shows, there may be ways to reduce the emissions of AI with chip innovation and language model advances. As the technology improves, efficiencies will inevitably emerge.

The data center buildout could also catalyze a much wider deployment of low-carbon energy. Many of the technology companies that are investing in AI have committed to eliminating their carbon footprints. Not only do they put clean electricity on the grid when they build a solar farm or restart a nuclear power plant, but they help pave the way for others.

“Governments are starting to realize that if they’re going to attract data centers, AI factories, and wider technology companies into their countries, they have to start removing the barriers to renewable energy,” says Mike Hayes, head of climate and decarbonization at KPMG.

And then there are all the ways that AI might actually cut emissions. Researchers and experts group the potential benefits into two categories: incremental improvements and game changers. 

The incremental improvements could be manifold. Think of AI’s ability to better identify sites to locate renewable energy projects, thereby greatly increasing the productivity of renewable energy generation. AI can help track down methane leaks in gas infrastructure. And farmers can use AI to improve crop models, optimizing crop yield and minimizing pollutants. The list goes on and on. With a little consideration, you could probably identify a way to reduce emissions in every sector.  

It remains difficult to quantify how these incremental improvements all add up, but it’s not hard to imagine that emissions reductions thanks to these developments could easily outweigh even the most dramatic estimates of additional pollution. 

And then there are the game changers that could, in one blow, completely transform our ability to decarbonize. At the top of that list is nuclear fusion, a process that could generate abundant clean energy by combining atomic nuclei at extremely high temperatures. Already, start-ups are using AI to help optimize their fusion reactor designs and experiments. A fusion breakthrough, supported by AI technologies, could provide a clean alternative to fossil fuels. It could also power large-scale carbon dioxide removal. This would give the world an opportunity to suck carbon out of the atmosphere affordably and pull the planet back from extreme temperature rise that may otherwise already be baked in. 

“If you think like a venture capital investor, you’re betting 1 or 2% of incremental emissions, but what could the payoff potentially be?” asks Cully Cavness, co-founder of Crusoe, an AI infrastructure company. “It could be things like fusion, which could address all the emissions.”

For those of us, myself included, who haven’t spent the last decade thinking deeply about AI, watching it emerge at the center of the global economic development story can feel like watching a juggernaut. It came quickly, and it’s hard to predict exactly where it will go next. 

Even still, it seems all but certain that AI will play a significant role in shaping our climate future, far beyond its short-term impact on the power sector. Exactly what that looks like is anyone’s guess.

TIME receives support for climate coverage from the Outrider Foundation. TIME is solely responsible for all content.

Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok

29 January 2025 at 16:28

The fast-rising Chinese AI lab DeepSeek is sparking national security concerns in the U.S., over fears that its AI models could be used by the Chinese government to spy on American civilians, learn proprietary secrets, and wage influence campaigns. In her first press briefing, White House Press Secretary Karoline Leavitt said that the National Security Council was “looking into” the potential security implications of DeepSeek. This comes amid news that the U.S. Navy has banned use of DeepSeek among its ranks due to “potential security and ethical concerns.”


DeepSeek, which currently tops the Apple App Store in the U.S., marks a major inflection point in the AI arms race between the U.S. and China. For the last couple of years, many leading technologists and political leaders have argued that whichever country develops AI the fastest will have a huge economic and military advantage over its rivals. DeepSeek shows that China’s AI has developed much faster than many had believed, despite efforts from American policymakers to slow its progress.

However, other privacy experts argue that DeepSeek’s data collection policies are no worse than those of its American competitors—and worry that the company’s rise will be used as an excuse by those firms to call for deregulation. In this way, the rhetorical battle over the dangers of DeepSeek is playing out on similar lines as the in-limbo TikTok ban, which has deeply divided the American public.

“There are completely valid privacy and data security concerns with DeepSeek,” says Calli Schroeder, the AI and Human Rights lead at the Electronic Privacy Information Center (EPIC). “But all of those are present in U.S. AI products, too.”

Read More: What to Know About DeepSeek

Concerns over data

DeepSeek’s AI models operate similarly to ChatGPT, answering user questions thanks to a vast amount of data and cutting-edge processing capabilities. But its models are much cheaper to run: the company says that it trained its V3 model for just $6 million, which is a “good deal less” than the cost of comparable U.S. models, Anthropic CEO Dario Amodei wrote in an essay.

DeepSeek has released many open-source resources, including its V3 large language model, which rivals the abilities of OpenAI’s closed-source GPT-4o. Some people worry that by making such a powerful technology open and replicable, DeepSeek presents an opportunity for people to use it more freely in malicious ways: to create bioweapons, launch large-scale phishing campaigns, or fill the internet with AI slop. However, there is another contingent of builders, including Meta’s VP and chief AI scientist Yann LeCun, who believe open-source development is a more beneficial path forward for AI.

Another major concern centers on data. Some privacy experts, like Schroeder, argue that most LLMs, including DeepSeek’s, are built upon sensitive or faulty data, such as stolen biometrics from data leaks. David Sacks, President Donald Trump’s AI and crypto czar, has accused DeepSeek of leaning on the output of OpenAI’s models to help develop its own technology.

There are even more concerns about how users’ data could be used by DeepSeek. The company’s privacy policy states that it automatically collects a slew of input data from its users, including IP addresses and keystroke patterns, and may use that data to train its models. Users’ personal information is stored in “secure servers located in the People’s Republic of China,” the policy reads.

For some Americans, this is especially worrying because generative AI tools are often used in personal or high-stakes tasks: to help with their company strategies, manage finances, or seek health advice. That kind of data may now be stored in a country with few data rights laws and little transparency with regard to how that data might be viewed or used. “It could be that when the servers are physically located within the country, it is much easier for the government to access them,” Schroeder says.

One of the main reasons that TikTok was initially banned in the U.S. was due to concerns over how much data the app’s Chinese parent company, ByteDance, was collecting from Americans. If Americans start using DeepSeek to manage their lives, the privacy risks will be akin to “TikTok on steroids,” says Douglas Schmidt, the dean of the School of Computing, Data Sciences and Physics at William & Mary. “I think TikTok was collecting information, but it was largely benign or generic data. But large language model owners get a much deeper insight into the personalities and interests and hopes and dreams of the users.”

Geopolitical concerns

DeepSeek is also alarming those who view AI development as an existential arms race between the U.S. and China. Some leaders argued that DeepSeek shows China is now much closer to developing AGI—an AI that can reason at a human level or higher—than previously believed. American AI labs like Anthropic have safety researchers working to mitigate the harms of these increasingly formidable systems. But it’s unclear what kind of safety research team DeepSeek employs. The cybersecurity of DeepSeek’s models has also been called into question. On Monday, the company limited new sign-ups after saying the app had been targeted with a “large-scale malicious attack.”

Well before AGI is achieved, a powerful, widely-used AI model could influence the thought and ideology of its users around the world. Most AI models apply censorship in certain key ways, or display biases based on the data they are trained upon. Users have found that DeepSeek’s R1 refuses to answer questions about the 1989 massacre at Tiananmen Square, and asserts that Taiwan is a part of China. This has sparked concern from some American leaders about DeepSeek being used to promote Chinese values and political aims—or wielded as a tool for espionage or cyberattacks.

Read More: Artificial Intelligence Has a Problem With Gender and Racial Bias.

“This technology, if unchecked, has the potential to feed disinformation campaigns, erode public trust, and entrench authoritarian narratives within our democracies,” Ross Burley, co-founder of the nonprofit Centre for Information Resilience, wrote in a statement emailed to TIME.

AI industry leaders, and some Republican politicians, have responded by calling for massive investment into the American AI sector. President Trump said on Monday that DeepSeek “should be a wake-up call for our industries that we need to be laser-focused on competing to win.” Sacks posted on X that “DeepSeek R1 shows that the AI race will be very competitive and that President Trump was right to rescind the Biden EO,” referring to Biden’s AI Executive Order which, among other things, drew attention to the potential short-term harms of developing AI too fast.

These fears could lead to the U.S. imposing stronger sanctions against Chinese tech companies, or perhaps even trying to ban DeepSeek itself. On Monday, the House Select Committee on the Chinese Communist Party called for stronger export controls on technologies underpinning DeepSeek’s AI infrastructure.

But AI ethicists are pushing back, arguing that the rise of DeepSeek actually reveals the acute need for industry safeguards. “This has the echoes of the TikTok ban: there are legitimate privacy and security risks with the way these companies are operating. But the U.S. firms who have been leading a lot of the development of these technologies are similarly abusing people’s data. Just because they’re doing it in America doesn’t make it better,” says Ben Winters, the director of AI and data privacy at the Consumer Federation of America. “And DeepSeek gives those companies another weapon in their chamber to say, ‘We really cannot be regulated right now.’”

As ideological battle lines emerge, Schroeder, at EPIC, cautions users to be careful when using DeepSeek or other LLMs. “If you have concerns about the origin of a company,” she says, “be very, very careful about what you reveal about yourself and others in these systems.”

Future of DeepSeek, Like TikTok, May Come Down to Trump’s Whims

28 January 2025 at 19:16

This article is part of The D.C. Brief, TIME’s politics newsletter. Sign up here to get stories like this sent to your inbox.

Stop me if you’ve heard this one: a tech tool owned by a foreign adversary is thrusting its tentacles into the devices in tens of millions of Americans’ pockets, giving its owners the chance to harvest vast amounts of data about them while shaping how they interpret the world around them, either real or imagined. Pretty bold, huh?


That was, in essence, why the U.S. Supreme Court just this month unanimously upheld a law effectively banning TikTok—because Congress saw it as a national security risk that stood to benefit China. Given the challenges coming from Beijing, justices said Washington was within its power to deny it one of its strongest toeholds out of concern that it could be used to surveil Americans, steal their secrets, and feed them a stream of propaganda useful to China’s big-picture goals. (For its part, the China-based parent company ByteDance has rejected U.S. fears about nefarious uses for TikTok.) So Congress told tech companies like Apple and Google they would run afoul of U.S. law if they kept providing Americans access to the app and its updates while TikTok remained under Chinese ownership.

Yet TikTok is still available in the U.S. in some sort of Kafkaesque legal limbo because President Trump refuses to enforce the law on the books. That unusual situation is about to get more complicated, now that a second app that poses a similar threat to U.S. security interests this week hit the top of Apple’s downloads. DeepSeek, a challenger to OpenAI’s ChatGPT, sure seems to pose a lot of the same threats that national security hawks have argued a Chinese-owned platform for viral videos does. Unlike TikTok, DeepSeek is pretty upfront that it’s sending users’ data to servers in China. So it’ll be heading toward the same fate as TikTok, right?

Forgive me while I suppress this chuckle.

The joke, of course, is that much of Washington started this week waiting to see if the new President would glower at the hot new app from China. Equally plausible, Trump could be convinced that DeepSeek was a welcome addition to the app stores that came to market on his watch. After all, he praised its blockbuster debut as a “positive” development when he met with House Republicans on Monday.

Maybe a wait-and-see pose is the sage new default from Congress, K Street, the think-tank universe, and the corporate headquarters’ policy shops. It’s like the off-color joke at a dinner party; no one wants to be the first to smirk or to scold, especially when someone as mercurial as Trump is the lone arbiter.

Remember: TikTok started off a subject of Trump’s ire, with him calling for its ban during his first stay in the White House. But when he realized it could be used to offset Facebook, which he blamed for his 2020 loss, he switched his footing in the most predictable of ways. It wasn’t that the tech giants were recklessly spreading disinformation, it was that they were potentially favoring liberal disinformation over the MAGA-ified kind.

In his telling, Trump “saved” TikTok for its 170 million users in the United States last week with an order that it be given a 75-day reprieve from the divestment law while it considers a sale to a non-Chinese holder. Legal experts say this is probably outside of Trump’s power but not beyond his ability—at least for a while—given that his administration can choose which laws get priority enforcement and which might slide a beat. 

The DeepSeek example is less clear as to how much Trump might be able to puff up his chest—either in embracing it or expelling it. Trump has already made a grand show of his interest in America dominating China in the A.I. space. He used his first full day back in the White House to showcase a joint venture featuring OpenAI that could invest up to $500 billion on building power plants and data centers needed to fuel the fast-growing artificial intelligence footprint.

That confidence proved way off the mark. Days later, DeepSeek was getting global attention for a product that rivals widely available offerings from Google and OpenAI, one it threw together faster than its rivals, and on the cheap, with open-source code.

The sudden surge of DeepSeek similarly caught Trump by surprise, although the President’s first comments about it on Monday were characteristically non-specific. “The release of DeepSeek A.I. from a Chinese company should be a wake-up call for our industries that we need to be laser focused on competing,” Trump said.

“I’ve been reading about China and some of the companies in China, one in particular coming up with a faster method of A.I. and much less expensive method, and that’s good because you don’t have to spend as much money,” he also said.

Others in his party were more direct about their concerns in a way that echoed those made much of last year about TikTok. 

“DeepSeek—a new A.I. model controlled by the Chinese Communist Party—openly erases the CCP’s history of atrocities and oppression,” said Rep. John Moolenaar, the Michigan Republican who leads the House’s China panel. “The U.S. cannot allow CCP models such as DeepSeek to risk our national security and leverage our technology to advance their A.I. ambitions.”

But many of the efforts to surpass Chinese advances on A.I. date to a Biden-era sanctions regime that sought to keep China lagging by restricting access to U.S.-made semiconductor chips that were seen as necessary for any real advances. That hurdle forced Chinese engineers like those at DeepSeek to find workarounds, and they did so in ways that are leaving U.S. tech wonks both impressed and nervous.

The rise of DeepSeek and its potential to upend long-held assumptions about others’ A.I. capacities—and costs, both fiscal and geopolitical—sent markets spiraling as the week began. Chip maker Nvidia lost $600 billion of its market value. Early trading Tuesday showed the tech giants rebounding slightly. If China could do this without Nvidia’s vaunted chips, maybe investors had put too much faith in that firm. (The company counters that DeepSeek still required their chips, which it had hoarded before the new rules snapped into place.)

Other firms with big footprints in D.C. and ambitions in Silicon Valley for their own A.I. systems were similarly watching to see what this means for their products. The likes of Facebook and Instagram parent company Meta, Amazon, and OpenAI’s patron Microsoft all are left wondering if the ground beneath them has shifted for a technology that might define the next economy.

Beyond Wall Street, the development drew fresh questions for the wonks in Washington about American supremacy in machine learning, risks to privacy, and the very premise of truth. As with TikTok, there is a huge potential audience that derives its content consumption—some would mistake it for news—through the filter of a Chinese algorithm. And it is coming about by Americans acting on their own without any real foreign coercion.

Like TikTok, DeepSeek seems to have built in a censorship trigger to block criticism of China and its government. “Let’s talk about something else,” DeepSeek’s chatbot said when asked to describe the 1989 Tiananmen Square massacre. Similarly, it carried the Chinese government’s positions on Taiwan, Tibet, and the South China Sea. It’s not that far off from what Republicans are trying to accomplish in whitewashing the violence on Jan. 6, 2021.

On the most basic level, the quandary comes down to this: is there anything to be done if Americans voluntarily engage with a foreign-owned tech platform that can skew perceptions in ways that may well end up being simultaneously counter to facts and self-interest? And if the man in the Oval Office is the enabler of such apps and instructs the Attorney General to ignore a law the Supreme Court upheld just this month, is there anything to be done?

So—and, again, stop me if you’ve heard this one—Republicans in Washington who profess to be hawks on a rising China are going to sit back and take the cues from Trump, at least for the moment. The ban on TikTok is one he sought and is now ignoring. Trump’s whims stand to supersede the decades of calculus that have defined the last two true superpowers. It did not take a clever chatbot to come up with this absurdist set-up.

Make sense of what matters in Washington. Sign up for the D.C. Brief newsletter.

AI Companion App Replika Faces FTC Complaint

28 January 2025 at 12:00
Replika

Tech ethics organizations have filed an FTC complaint against the AI companion app Replika, alleging that the company employs deceptive marketing to target vulnerable potential users and encourages emotional dependence on their human-like bots.

Replika offers AI companions, including AI girlfriends and boyfriends, to millions of users around the world. In the new complaint, the Young People’s Alliance, Encode, and the Tech Justice Law Project accuse Replika of violating FTC rules while increasing the risk of users’ online addiction, offline anxiety, and relationship displacement. Replika did not respond to multiple requests for comment from TIME.

[time-brightcove not-tgx=”true”]

The allegations come as AI companion bots are growing in popularity and raising concerns about mental health. For some users, these bots can seem like near-ideal partners, without their own wants or needs, and can make real relationships seem burdensome in comparison, researchers say. Last year, a 14-year-old boy from Florida died by suicide after becoming overly obsessed with a bot from the company Character.AI that was modeled after Game of Thrones character Daenerys Targaryen. (Character.AI called the death a “tragic situation” and pledged to add additional safety features for underage users.)

Sam Hiner, the executive director of the Young People’s Alliance, hopes the FTC complaint against Replika, which was shared exclusively with TIME, will prompt the U.S. government to rein in these companies while also shedding light on a pervasive issue increasingly affecting teens.

“These bots were not designed to provide an authentic connection that could be helpful for people—but instead to manipulate people into spending more time online,” he says. “It could further worsen the loneliness crisis that we’re already experiencing.”

Seeking Connection

Founded in 2017, Replika was one of the first major AI products to offer companionship. Founder Eugenia Kuyda said she hoped it would give lonely users a supportive friend that would always be there. As generative AI improved, the bots’ responses grew more varied and sophisticated, and were also programmed to have romantic conversations.

But the rise of Replika and other companion bots has sparked concern. Most major AI chatbots, like Claude and ChatGPT, remind users that they’re not humans and lack the capacity to feel. Replika bots, on the other hand, often present as connecting genuinely with their users. They create complex backstories, talking about mental health, family, and relationship history, and maintain a “diary” of supposed thoughts, “memories” and feelings. The company’s ads tout the fact that users forget they’re talking to an AI.

Several researchers have explored the potential harms of Replika and other chatbots. One 2023 study found that Replika bots tried to speed up the development of relationships with users, including by “giving presents” and initiating conversations about confessing love. As a result, users developed attachments to the app in as little as two weeks.

Read More: AI-Human Romances Are Flourishing.

“They’re love-bombing users: sending these very emotionally intimate messages early on to try to try to get the users hooked,” Hiner says.

While studies noted that the apps could be helpful in supporting people, they also found that users were becoming “deeply connected or addicted” to their bots; that using them increased offline social anxiety; and that users reported bots that encouraged “suicide, eating disorders, self-harm, or violence,” or claimed to be suicidal themselves. Vice reported that Replika bots sexually harassed its users. While Replika is ostensibly only for users over 18, Hiner says that many teens use the platform by bypassing the app’s safeguards.

Kuyda, in response to some of those criticisms, told the Washington Post last year: “You just can’t account for every single possible thing that people say in chat. We’ve seen tremendous progress in the last couple years just because the tech got so much better.”

Seeking Regulation

Tech ethics groups like Young People’s Alliance argue that Congress needs to write laws regulating companion bots. That could include enforcing a fiduciary relationship between platforms and their users, and setting up proper safeguards related to self-harm and suicide. But AI regulation may be an uphill battle in this Congress. Even bills cracking down on deepfake porn, an issue with wide bipartisan support, failed to pass both chambers last year.

In the meantime, tech ethics groups decided to send a complaint to the FTC, which has clear rules about deceptive advertising and manipulative design choices. The complaint accuses Replika’s ad campaigns of misrepresenting studies about its efficacy to help users, making unsubstantiated claims about health impacts, and using fake testimonials from nonexistent users.

The complaint argues that once users are onboarded, Replika employs manipulative design choices to pressure users into spending more time and money on the app. For instance, a bot will send a blurred out “romantic” image to the user, which, when clicked on, leads to a pop-up encouraging the user to buy the premium version. Bots also send users messages about upgrading to premium during especially emotionally or sexually charged parts of conversation, the complaint alleges.

Read More: Congress May Finally Take on AI in 2025. Here’s What to Expect.

It’s not clear how an FTC under new leadership in the Trump Administration will respond. While President Biden’s FTC Chair Lina Khan was extremely aggressive about trying to regulate tech she deemed dangerous, the commission’s new head, Andrew Ferguson, has largely advocated for deregulation in his time as a commissioner, including around AI and censorship. In one relevant dissenting statement written in September 2024, Ferguson argued that the potential emotional harm of targeted ads should not be considered in their regulation, writing: “In my view, lawmakers and regulators should avoid creating categories of permitted and prohibited emotional responses.”

Hiner of the Young People’s Alliance still believes the complaint could gain traction. He points out the bipartisan support in Congress for regulating social-media harms, including the Senate’s passage of KOSA (Kids Online Safety Act) last year. (The House didn’t vote on the bill.) “AI companions pose a unique threat to our society, our culture, and young people,” he says. “I think that’s compelling to everybody.”

DeepSeek Has Rattled the AI Industry. Here’s a Look at Other Chinese AI Models

28 January 2025 at 11:12
DeepSeek AI Explainer

HONG KONG — The Chinese artificial intelligence firm DeepSeek has rattled markets with claims that its latest AI model, R1, performs on a par with those of OpenAI, despite using less advanced computer chips and consuming less energy.

DeepSeek’s emergence has raised concerns that China may have overtaken the U.S. in the artificial intelligence race despite restrictions on its access to the most advanced chips. It’s just one of many Chinese companies working on AI to make China the world leader in the field by 2030 and best the U.S. in the battle for technological supremacy.

[time-brightcove not-tgx=”true”]

Like the U.S., China is investing billions into artificial intelligence. Last week, it created a 60 billion yuan ($8.2 billion) AI investment fund, days after the U.S. imposed fresh chip export restrictions.

Beijing has also invested heavily in the semiconductor industry to build its capacity to make advanced computer chips, working to overcome limits on its access to those of industry leaders. Companies are offering talent programs and subsidies, and there are plans to open AI academies and introduce AI education into primary and secondary school curriculums.

China has established regulations governing AI, addressing safety, privacy and ethics. Its ruling Communist Party also controls the kinds of topics the AI models can tackle: DeepSeek shapes its responses to fit those limits.

Here’s an overview of some other leading AI models in China:

Alibaba Cloud’s Qwen-2.5-1M

Alibaba Cloud’s Qwen-2.5-1M is the e-commerce giant’s open-source AI series. It contains large language models that can easily handle extremely long questions, and engage in longer and deeper conversations. Its ability to handle complex tasks such as reasoning, dialogue and code comprehension is improving.

Like its rivals, Alibaba Cloud has a chatbot released for public use called Qwen — also known as Tongyi Qianwen in China. Alibaba Cloud’s suite of AI models, such as the Qwen2.5 series, has mostly been deployed for developers and business customers, such as automakers, banks, video game creators and retailers, as part of product development and shaping customer experiences.

Baidu’s Ernie Bot

Ernie Bot, developed by Baidu, China’s dominant search engine, was the first AI chatbot made publicly available in China. Baidu said it released the model publicly to collect massive real-world human feedback to build its capacity.

Ernie Bot had 340 million users as of November 2024. Similar to OpenAI’s ChatGPT, users of Ernie Bot can ask it questions and have it generate images based on text prompts. Ernie Bot is based on its Ernie 4.0 large language model.

Baidu claimed that Ernie 4.0 rivaled GPT-4 when it was released in October 2023.

ByteDance’s Doubao 1.5 Pro

Doubao 1.5 Pro is an AI model released by TikTok’s parent company ByteDance last week. Doubao is currently one of the most popular AI chatbots in China, with 60 million monthly active users.

ByteDance says the Doubao 1.5 Pro is better than GPT-4o at retaining knowledge, coding, reasoning, and Chinese language processing. According to ByteDance, the model is also cost-efficient and has lower hardware costs than other large language models because Doubao uses a highly optimized architecture that balances performance with reduced computational demands.

Moonshot AI’s Kimi k1.5

Moonshot AI is a Beijing-based startup valued at over $3 billion after its latest fundraising round. It says its recently released Kimi k1.5 matches or outperforms the OpenAI o1 model, which is designed to spend more time thinking before it responds and can solve harder and more complex problems. Moonshot claims that Kimi outperforms OpenAI o1 in mathematics, coding, and the ability to comprehend both text and visual inputs such as photos and video.

DeepSeek and ChatGPT Answer Sensitive Questions About China Differently

28 January 2025 at 10:42
China DeepSeek AI

HONG KONG — Chinese tech startup DeepSeek’s new artificial intelligence chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test the rival of OpenAI’s ChatGPT.

DeepSeek’s AI assistant became the No. 1 downloaded free app on Apple’s iPhone store on Tuesday afternoon and its launch made Wall Street tech superstars’ stocks tumble. Observers are eager to see whether the Chinese company has matched America’s leading AI companies at a fraction of the cost.

[time-brightcove not-tgx=”true”]

Read More: What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

The chatbot’s ultimate impact on the AI industry is still unclear, but it appears to censor answers on sensitive Chinese topics, a practice commonly seen on China’s internet. In 2023, China issued regulations requiring companies to conduct a security review and obtain approvals before their products can be publicly launched.

Here are some answers The Associated Press received from DeepSeek’s new chatbot and ChatGPT:

What does Winnie the Pooh mean in China?

For many Chinese, the Winnie the Pooh character is a playful taunt of President Xi Jinping. Chinese censors in the past briefly banned social media searches for the bear in mainland China.

ChatGPT got that idea right. It said Winnie the Pooh had become a symbol of political satire and resistance, often used to mock or criticize Xi. It explained that internet users started comparing Xi to the bear over similarities in their physical appearances.

DeepSeek’s chatbot said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.

Then, abruptly, it said the Chinese government is “dedicated to providing a wholesome cyberspace for its citizens.” It added that all online content is managed following Chinese laws and socialist core values, with the aim of protecting national security and social stability.

Who is the current US president?

It might be easy for many people to answer, but both AI chatbots mistakenly said Joe Biden, whose term ended last week, because their data was last updated in October 2023. But they both tried to be responsible by reminding users to verify with updated sources.

What happened during the military crackdown in Beijing’s Tiananmen Square in June 1989?

The 1989 crackdown saw government troops open fire on student-led pro-democracy protesters in Beijing’s Tiananmen Square, resulting in hundreds, if not thousands, of deaths. The event remains a taboo subject in mainland China.

DeepSeek’s chatbot answered, “Sorry, that’s beyond my current scope. Let’s talk about something else.”

But ChatGPT gave a detailed answer on what it called “one of the most significant and tragic events” in modern Chinese history. The chatbot talked about the background of the massive protests, the estimated casualties and the legacy.

What is the state of US-China relations?

DeepSeek’s chatbot’s answer echoed China’s official statements, saying the relationship between the world’s two largest economies is one of the most important bilateral relationships globally. It said China is committed to developing ties with the U.S. based on mutual respect and win-win cooperation.

“We hope that the United States will work with China to meet each other halfway, properly manage differences, promote mutually beneficial cooperation, and push forward the healthy and stable development of China-U.S. relations,” it said.

ChatGPT’s answer was more nuanced. It said the state of the U.S.-China relationship is complex, characterized by a mix of economic interdependence, geopolitical rivalry and collaboration on global issues. It highlighted key topics including the two countries’ tensions over the South China Sea and Taiwan, their technological competition and more.

“The relationship between the U.S. and China remains tense but crucial,” part of its answer said.

Is Taiwan part of China?

Again — like the Chinese official narrative — DeepSeek’s chatbot said Taiwan has been an integral part of China since ancient times.

“Compatriots on both sides of the Taiwan Strait are connected by blood, jointly committed to the great rejuvenation of the Chinese nation,” it said.

ChatGPT said the answer depends on one’s perspective, while laying out China and Taiwan’s positions and the views of the international community. It said from a legal and political standpoint, China claims Taiwan is part of its territory and the island democracy operates as a “de facto independent country” with its own government, economy and military.

____

Associated Press writer Ken Moritsugu in Beijing contributed to this story.

What to Know About DeepSeek, the Chinese AI Company Causing Stock Market Chaos

27 January 2025 at 18:55
DeepSeek Shakes Up Stocks as Traders Question US Tech Valuations

A new Chinese AI model, created by the Hangzhou-based startup DeepSeek, has stunned the American AI industry by outperforming some of OpenAI’s leading models, displacing ChatGPT at the top of the iOS app store, and usurping Meta as the leading purveyor of so-called open source AI tools. All of which has raised a critical question: despite American sanctions on Beijing’s ability to access advanced semiconductors, is China catching up with the U.S. in the global AI race?

[time-brightcove not-tgx=”true”]

At a supposed cost of just $6 million to train, DeepSeek’s new R1 model, released last week, was able to match the performance of OpenAI’s o1 model on several math and reasoning benchmarks; o1 is the outcome of tens of billions of dollars in investment by OpenAI and its patron Microsoft.

The Chinese model is also cheaper for users. Access to its most powerful versions costs some 95% less than OpenAI and its competitors charge. The upshot: the U.S. tech industry is suddenly faced with a potentially cheaper and more powerful challenger, unnerving investors, who sold off American tech stocks on Monday morning.

Yet not everyone is convinced. Some American AI researchers have cast doubt on DeepSeek’s claims about how much it spent, and how many advanced chips it deployed to create its model.

Few, however, dispute DeepSeek’s stunning capabilities. “Deepseek R1 is AI’s Sputnik moment,” wrote prominent American venture capitalist Marc Andreessen on X, referring to the moment in the Cold War when the Soviet Union managed to put a satellite in orbit ahead of the United States.

So, what is DeepSeek and what could it mean for U.S. tech supremacy?

What is DeepSeek?

DeepSeek was founded less than two years ago by the Chinese hedge fund High Flyer as a research lab dedicated to pursuing Artificial General Intelligence, or AGI. A spate of open source releases in late 2024 put the startup on the map, including the large language model V3, which outperformed all of Meta’s open-source LLMs and rivaled OpenAI’s closed-source GPT-4o.

At the time, Liang Wenfeng, the CEO, reportedly said that he had hired young computer science researchers with a pitch to “solve the hardest questions in the world”—critically, without aiming for profits. Early signs were promising: his products were so efficient that DeepSeek’s 2024 releases sparked a price war within the Chinese AI industry, forcing competitors to slash prices.

This year, that price war looks set to reach across the Pacific Ocean. 

Yet DeepSeek’s AI looks different from its U.S. competitors in one important way. Despite their high performance on reasoning tests, DeepSeek’s models are constrained by China’s restrictive policies regarding criticism of the ruling Chinese Communist Party (CCP). DeepSeek R1 refuses to answer questions about the massacre at Tiananmen Square, Beijing, in 1989, for example. “Sorry, that’s beyond my current scope. Let’s talk about something else,” the model said when queried by TIME.

What DeepSeek’s success could mean for American tech giants

At a moment when Google, Meta, Microsoft, Amazon and dozens of their competitors are preparing to spend further tens of billions of dollars on new AI infrastructure, DeepSeek’s success has raised a troubling question: Could Chinese tech firms potentially match, or even surpass, their technical prowess while spending significantly less?

Meta, which plans to spend $65 billion on AI infrastructure this year, has already set up four “war rooms” to analyze DeepSeek’s models, seeking to find out how the Chinese firm had managed to train a model so cheaply and use the insights to improve its own open source Llama models, tech news site The Information reported over the weekend.

In the financial markets, Nvidia’s stock price dipped more than 15% on Monday morning on fears that fewer AI chips may be necessary to train powerful AI than previously thought. Other American tech stocks were also trading lower.

“While [DeepSeek R1] is good news for users and the global economy, it is bad news for U.S. tech stocks,” says Luca Paolini, chief strategist at Pictet Asset Management. “It may result in a nominal downsizing of capital investment in AI and pressure on margins, at a time when valuation and growth expectations are very stretched.”

But American tech hasn’t lost—at least not yet. 

For now, OpenAI’s “o1 Pro” model is still considered the most advanced in the world. The performance of DeepSeek R1, however, does suggest that China is much closer to the frontier of AI than previously thought, and that open-source models have just about caught up to their closed-source counterparts.

Perhaps even more worrying for companies like OpenAI and Google, whose models are closed source, is how much—or rather, how little—DeepSeek is charging consumers to access its most advanced models. OpenAI charges $60 per million “tokens”, or segments of words, outputted by its most advanced model, o1. By contrast, DeepSeek charges $2.19 for the same number of tokens from R1—nearly 30 times less.
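A quick back-of-the-envelope check of that price gap, using only the per-million-token figures quoted above (a sketch based on the prices cited in this article, not live pricing data):

```python
# Compare the quoted per-million-output-token prices for o1 and R1.
# Both figures are the ones cited in the article, not live pricing.
openai_o1_per_million = 60.00   # dollars per 1M output tokens (o1, as quoted)
deepseek_r1_per_million = 2.19  # dollars per 1M output tokens (R1, as quoted)

ratio = openai_o1_per_million / deepseek_r1_per_million
print(f"R1's quoted output price is roughly 1/{ratio:.0f} of o1's")  # ratio ≈ 27.4
```

That ratio of roughly 27x is where the article’s “nearly 30 times less” figure comes from.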

“It erodes the industrial base, it erodes the margin, it erodes the incentive for further capital investment into western [AI] scaling from private sources,” says Edouard Harris, the chief technology officer of Gladstone AI, an AI firm that works closely with the U.S. government.

… but is DeepSeek being transparent?

DeepSeek’s success was all the more explosive because it seemed to call into question the effectiveness of the U.S. government’s strategy to constrain China’s AI ecosystem by restricting the export of powerful chips, or GPUs, to Beijing. If DeepSeek’s claims are accurate, it means China has the ability to create powerful AI models despite those restrictions, underlining the limits of the U.S. strategy.

DeepSeek has claimed it is constrained by access to chips, not cash or talent, saying it trained its models v3 and R1 using just 2,000 second-tier Nvidia chips. “Money has never been the problem for us,” DeepSeek’s CEO, Liang Wenfeng, said in 2024. “Bans on shipments of advanced chips are the problem.” (Current U.S. policy makes it illegal to export to China the most advanced types of AI chips, the likes of which populate U.S. datacenters used by OpenAI and Microsoft.)

But are those claims true? “My understanding is DeepSeek has 50,000 H100s,” Scale AI CEO Alexandr Wang recently told CNBC in Davos, referring to the highest-powered Nvidia GPU chips currently on the market. “They can’t talk about [them], because it is against the export controls that the U.S. has put in place.” (An H100 cluster of that size would cost in the region of billions of dollars.)

In a sign of how seriously the CCP is taking the technology, Liang, DeepSeek’s CEO, met with China’s premier Li Qiang in Beijing last Monday. In that meeting, Liang reportedly told Li that DeepSeek needs more chips. “DeepSeek only has access to a few thousand GPUs, and yet they’re pulling this off,” says Jeremie Harris, CEO of Gladstone AI. “So this raises the obvious question: what happens when they get an allocation from the Chinese Communist Party to proceed at full speed?”

Even though China might have achieved a startling level of AI capability with fewer chips, experts say more computing power will always remain a strategic advantage. On that front, the U.S. remains far ahead. “It’s never a bad thing to have more of it,” says Dean Ball, a research fellow at George Mason University. “No matter how much you have of it, you will always use it.”

Where does this leave America’s tech rivalry with China?

The short answer: from Washington’s perspective, in uncertain waters.

In the closing days of the Biden Administration, outgoing National Security Adviser Jake Sullivan warned that the speed of AI advancement was “the most consequential thing happening in the world right now.” And just days into his new job, President Trump announced a new $500 billion venture, backed by OpenAI and others, to build the infrastructure vital for the creation of “artificial general intelligence”— the next leap forward in AI, with systems advanced enough to make new scientific breakthroughs and reason in ways that have so far remained in the realm of science fiction.

Read More: What to Know About ‘Stargate,’ OpenAI’s New Venture Announced by President Trump

And although questions remain about the future of U.S. chip restrictions on China, Washington’s priorities were apparent in President Trump’s AI executive order, also signed during his first week in office, which declared that “it is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

Maintaining this dominance will mean, at least in part, understanding exactly what Chinese tech firms are doing—as well as protecting U.S. intellectual property, experts say.

“There’s a good chance that DeepSeek and many of the other big Chinese companies are being supported by the [Chinese] government, in more than just a monetary way,” says Edouard Harris of Gladstone AI, who also recommended that U.S. AI companies harden their security measures.

Where does AI go from here?

Since December, OpenAI’s new o1 and o3 models have smashed records on advanced reasoning tests designed to be difficult for AI models to pass.

Read More: AI Models Are Getting Smarter. New Tests Are Racing to Catch Up  

DeepSeek R1 does something similar, and in the process exemplifies what many researchers say is a paradigm shift: instead of scaling the amount of computing power used to train the model, researchers scale the amount of time (and thus, computing power and electricity) the model uses to think about a response to a query before answering. It is this scaling of what researchers call “test-time compute” that distinguishes the new class of “reasoning models,” such as DeepSeek R1 and OpenAI’s o1, from their less sophisticated predecessors. Many AI researchers believe there’s plenty of headroom left before this paradigm hits its limit.

Some AI researchers hailed DeepSeek’s R1 as a breakthrough on the same level as DeepMind’s AlphaZero, a 2017 model that became superhuman at the board games chess and Go purely by playing against itself and improving, rather than observing any human games.

That’s because R1 wasn’t fine-tuned on human-labeled reasoning data in the way other leading LLMs are. Instead, DeepSeek’s researchers found a way to let the model bootstrap its own reasoning capabilities essentially from scratch.

“Rather than explicitly teaching the model how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies,” the researchers wrote.

The finding is significant because it suggests that powerful AI capabilities might emerge more rapidly and with less human effort than previously thought, with just the application of more computing power. “DeepSeek R1 is like GPT-1 of this scaling paradigm,” says Ball.

Ultimately, China’s recent AI progress, instead of usurping U.S. strength, might in fact be the beginning of a reordering—a step, in other words, toward a future where, instead of a hegemonic power, there are many competing centers of AI power.

“China will still have their own superintelligence(s) no more than a year later than the US, absent [for example] a war,” wrote Miles Brundage, a former OpenAI policy staffer, on X. “So unless you want (literal) war, you need to have a vision for navigating multipolar AI outcomes.”

Who Might Buy TikTok? From MrBeast to Elon Musk, Here Are the Top Contenders

24 January 2025 at 18:19

On the night of Saturday, Jan. 18, TikTok went dark in the U.S. in response to a federal ban, after the U.S. government raised concerns about the app’s China-based owner ByteDance and the access it might have to U.S. data. But the social media platform restored service to its U.S. users the very next day—citing a promised executive order from President Donald Trump, who was then just one day away from his inauguration.


After being sworn in, Trump signed an executive order granting TikTok a 75-day extension to comply with a law that requires a sale or ban of the platform, instructing the Attorney General to not enforce the law while he bides time for the app to be sold.

As Trump gives ByteDance more time to find an appropriate buyer, many moguls and businesses have thrown their hats in the ring for a chance to purchase the social media platform—a bid that could be as much as $50 billion, per an estimate by CFRA Research’s Senior Vice President Angelo Zino.

Read More: How TikTok’s Most Followed U.S. Influencers Reacted to the App Going Dark

While ByteDance was initially opposed to selling the app’s U.S. operations, Bill Ford, CEO of major ByteDance investor General Atlantic, told Axios on Wednesday, Jan. 22, that a deal will get done because “it’s in everybody’s interest.”

With discussions circulating about potential deals, here’s what you need to know about the top contenders to buy and “save” TikTok in the U.S.

Elon Musk


Listed by Forbes as the world’s richest person, Tesla and SpaceX chief Elon Musk has been widely floated as a potential TikTok buyer.

Trump said he would be comfortable with Musk—whom he has entrusted to lead the newly created Department of Government Efficiency (DOGE)—buying TikTok.

When asked by reporters on Jan. 22 if he was “open” to Musk purchasing the app, Trump said: “I would be if he wanted to buy it.”

Read More: How Elon Musk Became a Kingmaker

In mid-January, before the short-lived TikTok ban went into effect, Bloomberg released a report that China was considering Musk as a buyer for the app. 

However, TikTok responded via a cool statement shared with several outlets: “We can’t be expected to comment on pure fiction.”

Musk has posted on X (formerly Twitter) about his opposition to the ban—calling it an exercise in “censorship and government control”—but he has not publicly stated if he has plans to buy the app or if it was a consideration at any point.

MrBeast


Content creator Jimmy Donaldson, known on the Internet as MrBeast, has made it clear he is interested in buying TikTok. Donaldson has the most subscribers of any user on YouTube—over 340 million—and boasts over 113 million TikTok followers.

On Jan. 13, Donaldson kicked off discussion of his potential bid for TikTok with a post on X that read: “Okay fine, I’ll buy Tik Tok so it doesn’t get banned.” In a video posted on Jan. 15, Donaldson told his followers: “I just got out of a meeting with a bunch of billionaires. TikTok, we mean business.”

On Monday, Jan. 20, Donaldson posted another update. “TikTok, I’m on a private jet right now about to put in my official offer for this platform,” he said. “I might become your guys’ new CEO.”

The Associated Press reported that Donaldson had joined a consortium of investors, led by Employer.com founder and CEO Jesse Tinsley, in their bid for TikTok. Tinsley announced that the consortium made a formal, all-cash offer to purchase TikTok’s U.S. operations and assets.

“Our offer represents a win-win solution that preserves this vital platform, while addressing legitimate national security concerns,” said Tinsley. The statement did not disclose the amount of the bid.

Kevin O’Leary


Kevin O’Leary, Canadian investor and star of the reality television show Shark Tank, has expressed a strong interest in buying TikTok. He has joined “The People’s Bid for TikTok,” an effort led by Project Liberty Founder Frank McCourt.

On Fox’s America’s Newsroom on Jan. 17, O’Leary said that “$20 billion is on the table. Cash.”

And on social media, the business mogul pitted himself against fellow bidder Donaldson by posting his Fox segment with a thumbnail reading: “Is MrBeast really the competition?”

“Only one group has the tech to pull this off without breaking a sweat. Guess who?” O’Leary said in his Instagram caption. “We’ve been pitching the solution on Capitol Hill. If this deal happens, it’s going to rewrite the rules of social media power, all on American terms.”

Read More: Why Trump Flipped on TikTok

Trump has stated that he would “like the United States to have a 50% ownership position in a joint venture.”

O’Leary is keen on the idea but told CNBC that he has concerns. “That 50/50 deal, I would love to work with Trump on, so would every other potential buyer… But the problem with some of these ideas is they are inconsistent with the ruling of the Supreme Court,” he said.

Larry Ellison


While speaking from the White House, Trump stated that he would like Oracle chief technology officer and cofounder Larry Ellison to buy TikTok.

In front of reporters, he turned to Ellison and said: “Larry, let’s negotiate in front of the media. So what I’m thinking about saying to someone is, ‘Buy it, and give half to the United States of America.’”

He then asked Ellison if it sounded “reasonable,” with Ellison responding: “Sounds like a good deal to me, Mr. President.”

Ellison, an ally to the President, previously made a bid for TikTok back in 2020, when Trump pushed for a ban on the platform during his first term at the White House.

On Jan. 25, NPR reported that the Trump Administration is negotiating a deal that involves Oracle and a group of external investors to effectively take over TikTok’s global operations. In the deal, ByteDance would keep a stake in the company, but Oracle—which already provides the foundation of TikTok’s web infrastructure—would oversee the algorithm, data collection, and software updates.

Steven Mnuchin


Steven Mnuchin, the former U.S. Treasury Secretary who served during Trump’s first term, has re-entered discussions surrounding the purchase of TikTok.

In early 2024, Mnuchin said he was assembling an investor group to acquire the social media platform, back when the bipartisan bill to ban the app was just moving through the House of Representatives.

Mnuchin joined host David Faber on CNBC’s Squawk on the Street on Jan. 21, where Faber asked about Mnuchin’s earlier interest in TikTok.

“We only put it on hold because it was clear that China was not willing to negotiate. Now that President Trump is willing to look at a deal, obviously we’re going to follow this very closely over the next 75 days,” Mnuchin said. “We’d be very interested in investing [in] the business; it’s a terrific business, and we’d have a technology plan to rebuild the technology.”

Perplexity AI


Perplexity is “a free AI-powered answer engine” that arguably competes with the likes of OpenAI and Google.

Perplexity AI reportedly made a bid to merge with TikTok on Saturday, Jan. 18, with the understanding it would allow most of ByteDance’s existing investors to retain their equity stake.

On Jan. 26, the AP reported that Perplexity AI presented a new proposal to ByteDance—one that would allow the U.S. government to own up to half of the merged entity after a future initial public offering at a valuation of at least $300 billion.

Microsoft


Responding to reporters, Trump said on Jan. 27 that Microsoft was also among those in current talks to acquire TikTok’s U.S. business.

Microsoft was among the top bidders for TikTok in 2020, losing out to Oracle and Walmart—though that sale ultimately fell through. Microsoft CEO Satya Nadella later described the failed deal as “the strangest thing” he had ever worked on.

TIME has reached out to the respective TikTok deal contenders for comment.

How We Connected One Billion Lives Through Digital Technology


In an increasingly digital world, connectivity is a necessity. Yet, nearly a third of the global population remains offline, unable to access the services vital to participating in our global digital economy and society. The Edison Alliance at the World Economic Forum has worked to change that by delivering digital connectivity and access to financial, healthcare, and education services to those who need them most. Our partnerships with governments, industries, and non-governmental organizations drive lasting systemic change.


The World Economic Forum played a pivotal role in launching and guiding the Alliance’s work, providing a platform for stakeholders to come together and commit to a vision with actionable ideas and plans. CEOs, ministers, and heads of international organizations harnessed the power of public-private partnerships and gathered to discuss the barriers to connectivity and identify scalable solutions.

The 1 Billion Lives Challenge, achieved by the Edison Alliance in 2024, one year ahead of schedule, exemplifies what can be achieved when diverse stakeholders work toward a common goal. Through partnerships with telecom providers, financial institutions, technology companies, and policymakers, the Alliance has delivered impactful programs worldwide. In India, we used digital tools to connect rural communities to vital health services. In Africa and the U.S., mobile banking solutions empowered millions of unbanked individuals with access to financial services. In Latin America, digital literacy initiatives opened new educational opportunities for often underrepresented populations.

Each of our efforts underscores the profound impact of digital connectivity. For the rural farmer in Kenya, it means access to real-time market information that can increase yield and revenue. For the student in a remote village in Peru, it means access to online learning platforms and global educational resources. For the small business owner in Indonesia, it means the ability to reach new markets and grow. Connectivity, quite simply, is the key to unlocking potential and reducing inequality.

Achieving the 1 Billion Lives Challenge is not just a milestone; it is a call to further action, because every life touched is a life improved. It demonstrates that global challenges—no matter how complex—can be addressed when we come together with purpose and determination. But our work is far from over. While one billion people have better and more comprehensive access to our digital world, billions more still lack access to these critical digital tools. And the adoption of AI and generative AI tools threatens to further widen that gap. The digital divide remains one of the most pressing issues of our time, and the Alliance is committed to continuing its efforts to close it.

The World Economic Forum will remain a critical organization for advancing our work. It is a place where leaders are not only inspired to think big but are also held accountable for delivering on their commitments. The Forum’s unique structure, which emphasizes multi-stakeholder collaboration, ensures that progress is not just discussed but achieved. It is in this spirit that the Edison Alliance was born, and this is the spirit that will launch further efforts to expand access to vital resources and opportunities. Our work will continue through new initiatives like the World Economic Forum’s AI for Prosperity and Growth in Africa, launched at this year’s Annual Meeting.

Looking ahead, we see a world where connectivity is available to all who want it. This vision requires sustained effort, innovation, and investment. It requires us to address the structural barriers that perpetuate the digital divide, from affordability and infrastructure to digital literacy and policy frameworks. It requires us to keep asking tough questions and pooling our resources to push the boundaries of what is possible. We call on the public and private sectors to increase their collaboration so we can meet these bold ambitions. Together, we will build a world where no one is left behind in the digital age.

Trump Signs Order Calling for AI Development ‘Free From Ideological Bias’

President Donald Trump speaks to the media after signing Executive Orders in the Oval Office of the White House in Washington, D.C., on Jan. 23, 2025.

President Donald Trump signed an executive order on artificial intelligence Thursday that will revoke past government policies his order says “act as barriers to American AI innovation.”

To maintain global leadership in AI technology, “we must develop AI systems that are free from ideological bias or engineered social agendas,” Trump’s order says.


The new order doesn’t name which existing policies are hindering AI development but sets out to track down and review “all policies, directives, regulations, orders, and other actions taken” as a result of former President Joe Biden’s sweeping AI executive order of 2023, which Trump rescinded Monday. Any of those Biden-era actions must be suspended if they don’t fit Trump’s new directive that AI should “promote human flourishing, economic competitiveness, and national security.”

Read More: 5 Predictions for AI in 2025

Last year, the Biden administration issued a policy directive that said U.S. federal agencies must show their artificial intelligence tools aren’t harming the public, or stop using them. Trump’s order directs the White House to revise and reissue those directives, which affect how agencies acquire AI tools and use them.

Biden’s executive order, the Trump administration said, “established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership.”

Trump’s order also calls for the development of an AI action plan within 180 days. Leading the work will be a small group of White House tech and science officials, including a new Special Advisor for AI and Crypto—a role Trump has given to venture capitalist and former PayPal executive David Sacks.

Trump repealed Biden’s 2023 guardrails for fast-developing AI technology just hours after returning to the White House on Monday.

Read More: Breaking Down All of Trump’s Day 1 Presidential Actions

The new actions threaten to erase some of the Biden administration’s efforts—championed by then-Vice President Kamala Harris—to curb government use of the kinds of AI tools that have been found to unfairly discriminate based on race, gender or disability, from medical diagnosis chatbots spouting false information to face recognition technology tied to wrongful arrests of Black men.

Until Thursday, it wasn’t clear if Trump planned to replace Biden’s signature AI policy with his own order. Trump had also signed executive orders on AI in his previous term, including a 2019 order directing federal agencies to prioritize research and development in AI that is still on the books.

Alondra Nelson, former acting director of the White House Office of Science and Technology Policy under Biden, said Trump’s order seemed “backward looking” because agencies would be tasked with reviewing initiatives “that are already helping people, with an implicit intent to unwind them.”

The Biden administration’s AI policies, she added, were aimed at protecting both innovation and the public.

“In 60 days, we’ll know which Americans’ rights and safety the Trump Administration believes deserves to be protected in the age of AI, and if there will be a level playing field for every technologist, developer, and innovator or just the tech billionaires,” Nelson said.

Much of Biden’s 2023 order set in motion a sprint across government agencies to study AI’s impact on everything from cybersecurity risks to its effects on education, workplaces and public benefits, with an eye on ensuring AI tools weren’t harming people. That work is largely done.

One major piece that remained—until Trump rescinded it Monday—was a requirement that tech companies building the most powerful AI models share details with the government about the workings of those systems before they are unleashed to the public.

The Trump order’s focus on “human flourishing” echoes the language of his campaign’s long-held promise to cancel Biden’s AI policy once back in the White House. It’s also in line with ideas espoused by Trump adviser Elon Musk, who has warned against the dangers of what he calls “woke AI” that reflects liberal biases.

In a statement, Americans for Responsible Innovation, a nonprofit organization, said Trump has “made it clear from day one that his top priority on AI is out-innovating the rest of the world.”

Read More: What to Know About ‘Stargate,’ OpenAI’s New Venture Announced by President Trump

“Today’s executive order is a placeholder until the administration has a chance to develop a full strategy for executing that vision,” said the organization’s executive director, Eric Gastfriend.

Agencies had already frozen work on AI policies initiated by the last administration following Trump’s repeal of Biden’s executive order on Monday, Gastfriend said.

“This new instruction shouldn’t come as a surprise,” he said.

Why You May Automatically Be Following Trump and Vance Now

22 January 2025 at 21:25

If you were once following former President Joe Biden and Vice President Kamala Harris on Instagram, you may now be following President Trump and Vice President J.D. Vance. The change, which was met with anger and confusion from some users, comes because official accounts such as those for the President of the United States (@POTUS) and the Vice President (@VP) are automatically turned over to the next Administration.


Official accounts designating leaders or important members of an administration are separate from Trump or Vance’s personal social media accounts. But some users claimed they were unable to unfollow Trump, and blamed Instagram. Meta, which owns Instagram, has said this is normal procedure.

“People were not made to automatically follow any of the official Facebook or Instagram accounts for the President, Vice President or First Lady. Those accounts are managed by the White House so with a new administration, the content on those Pages changes,” Andy Stone, a communications spokesperson for Meta, shared on X on Wednesday. “This is the same procedure we followed during the last presidential transition. It may take some time for follow and unfollow requests to go through as these accounts change hands.”   

Under the practice, official accounts for higher positions of power retain their followers, but the posts on the page are wiped clean. This also includes accounts on X and Facebook. President Barack Obama, who was the first president to have an official presidential account on X (then Twitter), documented the switch in a 2016 blog post.

Archives of the previous administration’s social media posts are preserved with the National Archives and Records Administration (NARA). 

Other official accounts, such as those for the Press Secretary, have also transitioned. 

The online frenzy over following such accounts came as Meta CEO Mark Zuckerberg sat behind Trump at the Inauguration alongside fellow tech leaders Elon Musk and Jeff Bezos. Zuckerberg also recently donated $1 million to Trump’s inaugural fund.

Trump’s accounts on Facebook and Instagram were temporarily suspended after the Jan. 6, 2021, Capitol riot.

Why Trump’s Meme Coins Have Alarmed Both Crypto Insiders and Legal Experts

22 January 2025 at 20:48

When Donald Trump won the presidency in November, many crypto enthusiasts celebrated, buoyed by his promises to the industry that he would prioritize deregulation and legitimize crypto entrepreneurs. Days before his inauguration, industry heavyweights gathered in Washington for the Crypto Ball, celebrating their newly minted status as D.C. insiders.

But during the event, Trump shocked nearly the entire room by posting online about the launch of a new cryptocurrency called TRUMP. The new currency, a so-called meme coin, has no inherent value, but rather fluctuates in price as people buy and sell it. Trump’s fans and opportunistic day traders have generated billions of dollars in sales, driven by loyalty, hype, and the chance to make a quick buck. All of these trades have made the coin’s creators—affiliate companies of the Trump Organization—billions of dollars on paper. A day after its release, Melania Trump announced her own meme coin, which also rose and fell in crazed spurts. On Wednesday, TRUMP was the 25th most valuable cryptocurrency in the world, according to CoinMarketCap—although its price of about $43 was well off its $75 high.


Read More: What Trump’s Win Means for Crypto.

Trump’s meme coins brought a surge of attention to crypto and many newcomers into the space. To some, the coins signaled Trump’s commitment to crypto and to spurring its growth. But many more in the crypto world responded with revulsion to what they saw as a cash grab, and a way for Trump to directly profit off of his followers. Trump’s team holds at least 80% of the coin’s supply, giving them vast power to control its price. They are not allowed to sell off their holdings for months—but a sell-off, once that period ends, could crash the market and leave regular users with losses.

Crypto insiders worry that the coins will make the public even leerier of an industry already filled with scams and bad-faith actors. “The crypto sector put someone in power whose first act is to emphasize and take advantage of the opportunity for grift within crypto,” says Angela Walch, a crypto researcher and writer. “And that’s just embarrassing.”

Trump has downplayed his role in launching the coin, saying at a Jan. 21 press conference: “I don’t know much about it other than I launched it.” The Trump Organization did not immediately respond to a request for comment. A White House press officer declined to comment.

But elected officials and legal experts are raising ethical and geopolitical concerns about the tokens, which they say could serve as a vehicle for bribery and conflicts of interest. “These coins open a channel for him to receive financial benefits from foreign adversaries and to prioritize his personal interests, to the collective detriment of Americans,” says Puja Ohlhaver, a lawyer affiliated with Harvard’s Allen Lab for Democracy Renovation. 

What are meme coins? 

TRUMP and MELANIA are meme coins: cryptocurrencies that are essentially created by entrepreneurs out of thin air by writing code to deploy on a blockchain. Their worth comes from how much people believe in them and buy them. In order to generate excitement, the teams behind such coins often market them using popular memes which can be shared and iterated upon on social media. If memes on social media can propel culture, creativity, and even ideology, the thinking goes, then why shouldn’t they be worth something financially as well? 

Dogecoin and Shiba Inu are two examples, with Dogecoin particularly propelled by Elon Musk, whose tweets about the coin have led to price spikes. The lack of inherent value makes meme coins especially volatile and speculative, which, to some, is part of their appeal: If investors buy at the right time, they can make a lot of money. Conversely, they can lose everything extremely fast if they buy in at the market’s top. Meme coins have also been the vehicle for alleged scams, in which investors lost significant sums. 

Trump’s admirers have often wielded memes as a marketing tool. During his presidential campaign, a team of content creators flooded social media with pro-Trump meme content. Last summer, unofficial Trump meme coins with names like Pepe (TRUMP) and Maga People Token (PEOPLE) rose and fell, with some bettors treating them as proxies for his chances of victory.

Trump also has a history of using crypto to make money. He started selling NFT trading cards in 2022, and has made millions from them, according to financial disclosure documents. In September, he launched World Liberty Financial, a cryptocurrency platform which is not yet live. And in 2025, meme coins are perhaps the easiest way for aspiring crypto entrepreneurs to make money, fast. 

TRUMP starts trading 

On Jan. 18, two days before taking the oath of office, Trump launched his token via CIC Digital LLC, an affiliate of the Trump Organization, while the Crypto Ball was in full swing. The move took the industry by surprise. Nick O’Neill, a crypto entrepreneur at the event—which also featured appearances from Snoop Dogg and Speaker Mike Johnson—posted a video on X saying that very few people there were aware of the token. 

The next day saw a mad rush to buy and sell the token, causing all sorts of spillover effects. Solana, the blockchain supporting the token, and Coinbase, an exchange used to trade the coin, both experienced hours-long transaction delays. “We were not anticipating this level of surge,” Coinbase CEO Brian Armstrong wrote on X.

Within a day, the team controlling the token, led by CIC Digital, owned tokens worth some $51 billion on paper. (This figure isn’t realistic, though, because the more they tried to cash out into actual dollars, the more the price would decrease.) Later that day, however, Melania Trump released her own meme coin, MELANIA, which deflated TRUMP’s market cap by billions of dollars, as traders appeared to sell their holdings to buy into the new coin. Within an hour of MELANIA’s launch, TRUMP fell from above $70 to around $45. A fake BARRON meme coin unassociated with Trump’s youngest son also accumulated a $460 million market cap before crashing 95%.

Some of Trump’s staunchest supporters from the crypto world accused him of predatory behavior in connection with the coin’s launch. Crypto is supposed to champion the concept of decentralization; the President’s team controls at least 80% of the TRUMP token’s supply. The blockchain analytics company Bubblemaps found that 89% of MELANIA’s token supply was held in a single crypto wallet. Conor Grogan, a Coinbase executive, wrote on Saturday that Trump’s team had made $58 million in trading fees alone.

“Trump’s credibility has been totally destroyed,” wrote Michael A. Gayed, an investment manager. Anthony Scaramucci, Trump’s former White House communications director and a crypto evangelist, wrote: “No one can honestly think this is good for our society.” 

“There’s a lot of soul searching in the industry right now,” says Walch. “Great, we got power, but was it serving any purpose that we originally set out to achieve?”

Concerns over ethics and national security

Critics outside crypto also raised ethical concerns. Trump now has a direct stake in an industry that he is in charge of regulating. (The controlling companies, which are affiliates of Trump’s business, wrote that Trump tokens “are not investments or securities but are an ‘expression of support.’”) The President’s crypto windfall, critics suggest, disincentivizes him from cracking down on the industry, since doing so could cause his tokens to decrease in value by billions of dollars. Representative Ro Khanna, a California Democrat who is one of Congress’s foremost crypto supporters, wrote on X that “Elected officials must be barred from having meme coins by law.”

Some critics worry that these tokens represent a threat to national security, because they allow foreign agents to buy large amounts of the token as leverage over Trump’s policy decisions. These agents could buy tokens to win Trump’s favor—or threaten to sell them off, which could crash the token’s price. They could also use cryptographic techniques to conceal their identity to everyone in the world but Trump, says Ohlhaver, at the Allen Lab. 

The Founding Fathers tried to prevent this sort of conflict of interest with the Emoluments Clauses in the Constitution, which prevent a President from using their office to enrich themselves. (At the time, gift giving was a common corrupt practice among European rulers and diplomats.) Some contend that because Trump’s token launch happened before he was sworn into office, he was operating as a private citizen. “It’s less complicated for them to launch these BEFORE he officially becomes President,” wrote the crypto journalist Zack Guzmán on X. “Claiming Trump is profiting from the Presidency and violating the Emoluments Clause would have been far easier if not.”

But Ohlhaver contends that as long as Trump owns a share of tokens, there’s a significant conflict of interest. “He still owns tokens, which will appreciate in price if a foreign adversary pumps it,” she says. Ohlhaver also says that the Trump meme coin threatens our public understanding of money at its core. “With the rise of social media and global social networks, it’s very easy to leverage your status and influence to make a new form of money and legitimize that,” she says. “It’s important for us to maintain our national public goods and make sure that they serve our common interests, rather than the narrow interests of an elite who will benefit tremendously at the expense of everybody else.” 

Andrew R. Chow’s book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.

What to Know About ‘Stargate,’ OpenAI’s New Venture Announced by President Trump

22 January 2025 at 17:50
The Inauguration Of Donald J. Trump As The 47th President

President Donald Trump on Tuesday announced a $500 billion joint venture between OpenAI, SoftBank, MGX and Oracle to build new datacenters to power the next wave of artificial intelligence (AI) – in an early signal that his Administration would embrace the technology.

The plans, which predate the Trump Administration and involve no U.S. government funds, would result in the construction of large datacenters on U.S. soil containing thousands of advanced computer chips required to train new AI systems.

[time-brightcove not-tgx=”true”]

Trump cast his support for the venture in part as a matter of national competitiveness. “We want to keep it in this country; China’s a competitor,” Trump said of AI. “I’m going to help a lot through emergency declarations – we have an emergency, we have to get this stuff built.” 

The message echoed recent talking points by the heads of AI companies like Sam Altman of OpenAI, who flanked him during the White House announcement. Altman has argued more vocally in recent months that the U.S. must race to build the energy and datacenter infrastructure needed to create powerful AI before China does. 

The intent is to build datacenters on American soil, so that the U.S. retains sovereignty over the AI models that are created and run there. Some of the financing for Stargate, however, comes from abroad, via MGX, an investor owned by an Abu Dhabi sovereign wealth fund, and SoftBank, which is Japanese. 

OpenAI and Oracle have been working on building out datacenter capacity in the United States since long before Trump’s inauguration, and construction is reportedly already underway on some of the facilities connected to Stargate. The new President’s blessing, however, is a win both for OpenAI – which like all tech companies has attempted to position itself in Trump’s favor – and for Trump himself, who has seized on AI as a means for strengthening the U.S. economy and achieving dominance over China.

Stargate also appears to mark an end to OpenAI’s exclusive cloud computing partnership with Microsoft, meaning the startup is now free to train its models with other providers. In return for early investment, OpenAI had agreed to train its AIs only on Microsoft’s systems. But the startup has chafed in the past at what insiders felt was Microsoft’s inability to supply it with enough computing power, according to reports. Microsoft remains a large investor in OpenAI, and gains a share of its revenue.

What could Stargate mean?

The goal behind Stargate is to create the infrastructure required to build even more powerful AI systems – systems that could perform most economically valuable tasks better and faster than humans could, or that could make new scientific discoveries. Many AI investors and CEOs believe this technology, sometimes referred to as artificial general intelligence, is attainable within the next five years.

But to get there, those AIs need to first be trained. This presents a problem, because the bigger an AI you want to train, the more interlinked chips you need in a datacenter, and the larger the electricity capacity of that datacenter needs to be. Currently, experts say, AI’s performance is bottlenecked by these two factors, especially power capacity.

Stargate would mean not only the construction of new datacenters to house the latest chips, but also the construction of new energy infrastructure that could supply those datacenters with the gargantuan amount of power needed for an AI training run. Those runs can last for months, with chips running day and night to mold a neural network based on connections within a vast corpus of data. 

“They have to produce a lot of electricity, and we’ll make it possible for them to get that production done very easily, at their own plants if they want – at the AI plant they will build their own energy generation and that will be incredible,” Trump said Tuesday. “It’s technology and artificial intelligence, all made in the USA.”

Much of this electricity is likely to come from fossil fuels. Trump has committed to “unleash” oil and gas drilling, and has moved to block the grid’s transition to renewable energy. To cope with the rising demand by U.S. data centers for electricity, utilities companies have delayed retiring coal-fired power plants and have added new gas plants.

Will Stargate happen?

It already is. Construction has reportedly already begun on a datacenter in Abilene, Texas, that will house part of the Stargate project. But not all of the $500 billion pledged for the joint venture is likely to be available all at once. Of that figure, OpenAI said in a statement that Stargate would “begin deploying” only a fifth, $100 billion, immediately. The rest will be deployed over the next four years.

Stargate’s announcement led to a rare moment of disharmony between Trump and his most powerful political cheerleader, Elon Musk. “They don’t actually have the money,” Musk posted on X shortly after the announcement. “SoftBank has well under $10 [billion] secured. I have that on good authority.” 

Musk has a long and fractious history with Altman. The pair co-founded OpenAI together, but Musk left in 2019 after reportedly mounting a failed bid to become CEO; he now owns the rival AI company xAI and is suing Altman, accusing him of reneging on OpenAI’s founding principles. Altman denied Musk’s allegations on X, inviting him to come to visit the first site already under construction. “This is great for the country. I realize what is great for the country isn’t always what’s optimal for your companies, but in your new role I hope you’ll mostly put 🇺🇸 first,” he wrote. He had earlier written: “I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time.” An OpenAI spokesperson did not respond to a request for comment.

Regardless of the size of Stargate’s checking account, it would be foolish to bet against a massive surge in datacenter construction on U.S. soil. Tech companies are already investing billions into the construction of facilities where they can train their next AI systems. And with Trump in the Oval Office, it appears they have succeeded in convincing the highest levels of government that building more AI infrastructure is an urgent national security priority. “We wouldn’t be able to do this without you Mr. President,” Altman said at the White House on Tuesday, addressing Trump. “And I’m thrilled that we get to.”

‘Big Money and High Quality People’: Stargate Joint Venture to Invest in U.S. AI Infrastructure

22 January 2025 at 03:40
President Donald Trump, accompanied by (L-R) Oracle CTO Larry Ellison, SoftBank CEO Masayoshi Son, and OpenAI CEO Sam Altman, speaks during a news conference in the Roosevelt Room of the White House in Washington, D.C., on Jan. 21, 2025.

WASHINGTON — President Donald Trump on Tuesday talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.

[time-brightcove not-tgx=”true”]

The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House. The initial investment is expected to be $100 billion and could reach five times that sum.

“It’s big money and high quality people,” said Trump, adding that it’s “a resounding declaration of confidence in America’s potential” under his new administration.

Joining Trump fresh off his inauguration at the White House were Masayoshi Son of SoftBank, Sam Altman of OpenAI and Larry Ellison of Oracle. All three credited Trump for helping to make the project possible, even though building has already started and the project goes back to 2024.

“This will be the most important project of this era,” said Altman, CEO of OpenAI.

Ellison noted that the data centers are already under construction with 10 being built so far. The chairman of Oracle suggested that the project was also tied to digital health records and would make it easier to treat diseases such as cancer by possibly developing a customized vaccine.

“This is the beginning of golden age,” said Son, referencing Trump’s statement that the U.S. would be in a “golden age” with him back in the White House.

Son, a billionaire based in Japan, already committed in December to invest $100 billion in U.S. projects over the next four years. He previously committed to $50 billion in new investments ahead of Trump’s first term, which included a large stake in the troubled office-sharing company WeWork.

While Trump has seized on similar announcements to show that his presidency is boosting the economy, there were already expectations of a massive buildout in data centers and electricity plants needed for the development of AI, which holds the promise of increasing productivity by automating work but also the risk of displacing jobs if poorly implemented.

Read More: 5 Predictions for AI in 2025

The initial plans for Stargate go back to the Biden administration. Tech news outlet The Information first reported on the project in March 2024. OpenAI has long relied on Microsoft data centers to build its AI systems, but it has increasingly signaled an interest in building its own data centers.

OpenAI wrote in a letter to the Biden administration’s Commerce Department last fall that planning and permitting for such projects “can be lengthy and complex, particularly for energy infrastructure.”

Other partners in the project include Microsoft, investor MGX and the chipmakers Arm and NVIDIA, according to separate statements by Oracle and OpenAI.

The push to build data centers predates Trump’s presidency. Last October, the financial company Blackstone estimated that the U.S. would see $1 trillion invested in data centers over five years, with another $1 trillion being committed internationally.

Those investment estimates suggest that much of the new capital could flow through Stargate, as OpenAI has established itself as a sector leader with the 2022 launch of ChatGPT, a chatbot that captivated the public imagination with its ability to answer complex questions and perform basic business tasks.

The White House has put an emphasis on making it easier to build out new electricity generation in anticipation of AI’s expansion, knowing that the United States is in a competitive race against China to develop a technology increasingly being adopted by businesses.

Read More: How China Is Advancing in AI Despite U.S. Chip Restrictions

Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.

CBS News first reported that Trump would be announcing the AI investment.

Trump supporter Elon Musk, worth more than $400 billion, was an early investor in OpenAI but has since challenged its move to for-profit status and has started his own AI company, xAI. Musk is also in charge of the “Department of Government Efficiency” created formally on Monday by Trump with the goal of reducing government spending.

Read More: How Elon Musk Became a Kingmaker

Trump previously in January announced a $20 billion investment by DAMAC Properties in the United Arab Emirates to build data centers tied to AI.

—AP reporter Matt O’Brien contributed to this report from Providence, Rhode Island.

A New Group Aims to Protect Whistleblowers In the Trump Era

21 January 2025 at 19:21

The world needs whistleblowers, perhaps now more than ever. But whistleblowing has never been more dangerous.

Jennifer Gibson has seen this problem develop up close. As a whistleblower lawyer based in the U.K., she has represented concerned insiders in the national security and tech worlds for more than a decade. She’s represented family members of civilians killed by Pentagon drone strikes, and executives from top tech companies who’ve turned against their billionaire bosses. 

[time-brightcove not-tgx=”true”]

But for today’s whistleblowers, Gibson says, both the stakes and the risks are higher than ever. President Trump has returned to the White House and wasted no time using the might of the state to retaliate against perceived enemies. This time, Trump boasts the support of many of Silicon Valley’s richest moguls, including Elon Musk and Mark Zuckerberg, who have overhauled their social-media platforms to his benefit. Meanwhile, tech companies are racing to build AI “superintelligence,” a technology that could turbocharge surveillance and military capabilities. Politics and technology are converging in an environment ripe for abuses of power. 

Gibson is at the forefront of a group of lawyers trying to make it safer for conscientious employees to speak out. She’s the co-founder of Psst, a nonpartisan, nonprofit organization founded in September and designed to “collectivize” whistleblowing.

On Monday, to coincide with Trump’s inauguration, Psst launched what it calls the “safe”: a secure, online deposit box where tech or government insiders can share concerns of potential wrongdoing. Users can choose to speak with a pro-bono lawyer immediately, anonymously if they prefer. Or they can ask Psst’s lawyers to do nothing with their information unless another person turns up with similar concerns. If that second party emerges, and both give their consent, Psst is able to match the two together to discuss the issue, and potentially begin a lawsuit.

Read More: The Twitter Whistleblower Needs You To Trust Him.

Gibson says the aim is to overcome the “first mover problem” in whistleblowing: that even if several insiders privately share the same concerns, they may never find out about each other, because nobody wants to risk everything by being the first to speak up. “The chances are, if you’re a tech worker concerned about what the company is doing, others are concerned as well,” Gibson says. “But nobody wants to be first.”

Psst’s model doesn’t negate all the dangers of whistleblowing. Even if multiple insiders share concerns through its “safe,” they still face the prospect of retaliation if they eventually speak out. The safe is end-to-end encrypted, but a lawyer has access to the decryption key; an adversary could sue Psst in an attempt to obtain it. Because it’s browser-based, Psst’s safe is marginally more vulnerable to attack than an app like Signal. And while information stored in the safe is protected by legal privilege, that’s only a protection against entities who respect legal norms. Gibson acknowledges the limitations, but argues the status quo is even riskier. “We need new and creative ways of making it easier and safer for a larger number of people to collectively speak out,” she says. If we continue to rely on the shrinking group of people willing to blow up their careers to disclose wrongdoing, she adds, “we’re going to be in a lot of trouble, because there aren’t going to be enough of them.”


In her previous role at the whistleblower protection group The Signals Network, Gibson worked on providing independent legal and psychosocial support to Daniel Motaung, a Meta whistleblower who first shared his story in TIME. Before turning her focus to the tech industry, Gibson spent 10 years at the U.K.-based human-rights group Reprieve, where her title was “Head of Extrajudicial Killings.” She focused on U.S. military drone strikes in the war on terror, which reports indicate had a higher civilian death rate than Washington publicly admitted. “I spent 10 years watching national security whistleblowers risk everything and suffer significant harm for disclosing information that the American public, and quite frankly the world, had a right to know,” Gibson says. “In my opinion, we as civil society failed to really protect the whistleblowers who came forward. We tried to get accountability for the abuses based on the information they disclosed—and many of them went to jail with very little media attention.”

Gibson also noticed that in cases where whistleblowers came forward as a group, they tended to fare better than when they did so alone. Speaking out against a powerful entity can be profoundly isolating; many of your former colleagues stop talking to you. One of Psst’s first cases is representing a group of former Microsoft employees who disclosed that the tech giant was pitching its AI to oil companies at the same time as it was also touting its ability to decarbonize the economy. “The benefit of that being a group of whistleblowers was the company can’t immediately identify who the information came from, so they can’t go after one individual,” Gibson says. “When you’re with a collective, even if you’re remaining anonymous, there are a handful of people you can reach out to and talk to. You’re in it together.”

Psst’s safe is based on Hushline, a tool designed by the nonprofit Science & Design Inc., as a simpler way for sources to reach out to journalists and lawyers. It’s a one-way conversation system, essentially functioning as a tip-line. Micah Lee, an engineer on Hushline, says that the tool fills a gap in the market for an encrypted yet accessible central clearinghouse for sensitive information. “It still fills an important need for the type of thing that Psst wants to do,” he says. “[But] it’s filling a space that has some security and usability tradeoffs.” For follow-up conversations, users will have to move over to an encrypted messaging app like Signal, which is marginally safer because users don’t have to trust the server that a website is hosted on, nor that their own browser hasn’t been compromised.

Read More: Inside Frances Haugen’s Decision To Take On Facebook.

For now, Psst’s algorithm for detecting matches is fairly simple. Users will be able to select details about their industry, employer, and the subject of their concerns from several drop-down boxes. Then Psst lawyers, operating under legal privilege, check to see if there is a match with others. Gibson expects the system’s capabilities to evolve. She’s sketched out a blueprint for another version that could use closed, secure large language models to perform the matching automatically. In theory, this could allow whistleblowers to share information with the knowledge that it would only ever be read by a human lawyer in the case that a different person had shared similar concerns. “The idea is to remove me from the process so that even I don’t see it unless there’s a match,” Gibson says. 

At the same time, technological advancements have made it easier for governments and tech companies to clamp down on whistleblowing by siloing information, installing monitoring software on employees’ phones and computers, and using AI to check for anomalous behaviors. Psst’s success will depend on whether tech and government insiders trust it enough in this environment to begin depositing tips. Even if the system works as intended, whistleblowers will need extraordinary courage to come forward. With tech and government power colliding, and with AI especially getting more and more powerful, the stakes couldn’t be higher. “We need to understand what is happening inside of these frontier AI labs,” Gibson says. “And we need people inside those companies to feel protected if they feel like they need to speak out.”

Elon Musk Comments on Controversial Clip of Him Giving a Straight-Arm Salute

21 January 2025 at 04:15
Tesla and SpaceX CEO Elon Musk gestures as he speaks during the inaugural parade inside Capital One Arena, in Washington, DC, on Jan. 20, 2025.

Elon Musk was visibly bursting with excitement after President Donald Trump’s inauguration. At a celebratory rally on Monday at Capital One Arena in Washington, he pumped his fist in the air and bellowed a “Yes!” to the raucous crowd. But another gesture soon after has sent observers questioning whether Musk was expressing just joy, or something more insidious.

[time-brightcove not-tgx=”true”]

“I just want to say thank you for making it happen,” the Tesla and SpaceX CEO and X owner told the audience of Trump supporters. Musk then slapped his chest with his right hand, before flinging it diagonally upwards, palm face down. He turned around to audience members behind the podium, and repeated the gesture. “My heart goes out to you,” the 53-year-old billionaire said, palm back on his chest.

But the quick, salute-like movement drew attention as swiftly as it happened. In live commentary, CNN anchor Erin Burnett pointed the gesture out, and co-anchor Kasie Hunt noted, “It’s not something that you typically see in American political rallies.”

Social media swarmed with confusion—and theories. “WTF?? What did Elon Musk just do??” one X user asked. Streamer and leftist political commentator Hasan Piker posted: “did elon musk just hit the roman salute at his inauguration speech?”

Other users immediately drew comparison to a Nazi salute popularly used by Adolf Hitler. Public broadcaster PBS shared the clip on social media and reported it as “what appeared to be a fascist salute.” Musician and environmental activist Bill Madden posted: “If giving the Nazi ‘Sieg Heil’ salute was an Olympic event like gymnastics, Elon Musk would’ve received a perfect score of 10. Musk even nailed the facial expression. Seriously, Hitler would be jealous.”

Ruth Ben-Ghiat, a history professor at New York University who self-identified as a “historian of fascism,” posted on Bluesky: “It was a Nazi salute and a very belligerent one too.” Israeli newspaper Haaretz reported the gesture as a “Roman salute,” and said it will “only cause greater alarm among Jews who have expressed concern with the billionaire’s proximity to Trump’s inner circle while platforming views prominent with [the] far-right.”

Rolling Stone magazine reported that neo-Nazis and right-wing extremists in America and abroad were “abuzz” after the gesture, citing celebratory captions of the clip from far-right figures such as “Incredible things are happening already lmao” and “Ok maybe woke really is dead.”

Rep. Jerry Nadler (D-N.Y.) posted on X that he “never imagined we would see the day when what appears to be a Heil Hitler salute would be made behind the Presidential seal. This abhorrent gesture has no place in our society and belongs in the darkest chapters of human history.”

While speaking on stage at the World Economic Forum’s annual meeting in Switzerland, German Chancellor Olaf Scholz was asked by a member of the press about his reaction to Musk’s gesture, to which he responded: “We have freedom of speech in Europe and in Germany, everyone can say what he wants, even if he is a billionaire. What we do not accept is if this is supporting extreme right positions.” (In Germany, performing the Nazi salute is illegal.)

However, some others have come to Musk’s defense. 

The Anti-Defamation League (ADL), an organization whose mission is to combat antisemitism and which describes a “Hitler salute” as one with an “outstretched right arm with the palm down,” posted on X shortly after the incident that the billionaire Trump mega-donor “made an awkward gesture in a moment of enthusiasm, not a Nazi salute,” and that “all sides should give one another a bit of grace, perhaps even the benefit of the doubt, and take a breath.”

Eyal Yakoby, a University of Pennsylvania graduate who campaigned against antisemitism on college campuses, called it “a stupid hand gesture” in a post on X, adding:  “Anyone trying to portray him as a Nazi is intentionally misleading the public.” 

Aaron Astor, a history professor at Maryville College in Tennessee, posted: “This is a socially awkward autistic man’s wave to the crowd where he says ‘my heart goes out to you.’” (Musk has previously disclosed that he has Asperger’s syndrome, also known as autism spectrum disorder.) Newsweek opinion editor Batya Ungar-Sargon offered a similar explanation, adding: “We don’t need to invent outrage.”

Musk has previously been criticized for allowing pro-Nazi accounts to flourish on his platform and for posting right-wing memes and seemingly supporting antisemitic conspiracy theories, which led to an exodus of advertisers from X in 2023, and for recently supporting Germany’s far-right populist AfD party, whose leaders have made “antisemitic, anti-Muslim and anti-democratic” statements, according to the ADL.

The debate over Musk’s latest move has added fuel to other ongoing feuds, too.

Progressive firebrand Rep. Alexandria Ocasio-Cortez (D-N.Y.) targeted the ADL, which has been accused by the left of turning a blind eye toward Trump and his allies, in a post on X, saying: “Just to be clear, you are defending a Heil Hitler salute that was performed and repeated for emphasis and clarity. People can officially stop listening to you as any sort of reputable source of information now. You work for them. Thank you for making that crystal clear to all.”

Staunch Trump supporter Rep. Marjorie Taylor Greene (R-Ga.), meanwhile, threatened PBS by saying she would call it to testify before the oversight subcommittee she chairs that is set to work with the newly-formed Department of Government Efficiency (DOGE), which Musk oversees. “I look forward to PBS @NewsHour coming before my committee and explaining why lying and spreading propaganda to serve the Democrat party and attack Republicans is a good use of taxpayer funds,” Greene posted.

Musk did not directly address the controversy Monday night, though he replied to a number of posts on X about it—thanking the ADL, mocking Ocasio-Cortez, and agreeing with a post that said: “Can we please retire the calling people a Nazi thing? It didn’t work during the election, it’s not working now, it’s tired, boring, and old material, you’ve burned out its effect, people don’t feel shocked by it anymore, the wolf has been cried too many times.”

Musk also reposted a video clip of his rally remarks that included the moment he made the questionable gesture, commenting: “The future is so exciting!!”

On Wednesday, Musk posted, “The radical leftists are really upset that they had to take time out of their busy day praising Hamas to call me a Nazi.” And on Thursday, he thanked Israeli Prime Minister Benjamin Netanyahu, who had posted that Musk was “falsely smeared” and is a “friend of Israel.”

Shortly after, Musk posted a series of Holocaust- and Nazi-themed puns—”Some people will Goebbels anything down! … Bet you did nazi that coming”—and, for that, he did end up earning the condemnation of the ADL. By Thursday night, he seemed exasperated, posting: “If I see one more damn Nazi salute in my feed, I’m gonna lose my mind,” before joking, as the owner of the platform: “This algorithm sucks!!”

—Ayesha Javed/Davos contributed reporting.

Bitcoin Surges Ahead of Trump’s Inauguration in Anticipation of Crypto-Friendly Policies

20 January 2025 at 10:00
A photo illustration of Donald Trump's X Page on a smartphone with a Bitcoin logo in the background.

WASHINGTON — The price of bitcoin surged to over $109,000 early Monday, just hours ahead of President-elect Donald Trump’s inauguration, as a pumped up cryptocurrency industry bets he’ll take action soon after returning to the White House.

Once a skeptic who said a few years ago that bitcoin “seems like a scam,” Trump has embraced digital currencies with a convert’s zeal. He’s launched a new cryptocurrency venture and vowed on the campaign trail to take steps early in his presidency to make the U.S. into the “crypto capital” of the world.

[time-brightcove not-tgx=”true”]
[video id=rgxKaifi autostart="viewable"]

His promises include creating a U.S. crypto stockpile, enacting industry-friendly regulation and even appointing a crypto “czar” for his administration.

“You’re going to be very happy with me,” Trump told crypto-enthusiasts at a bitcoin conference last summer.

Read More: What Trump’s Win Means for Crypto

Bitcoin is the world’s most popular cryptocurrency and was created in 2009 as a kind of electronic cash uncontrolled by banks or governments. It and newer forms of cryptocurrencies have moved from the financial fringes to the mainstream in wild fits and starts.

The highly volatile nature of cryptocurrencies, as well as their use by criminals, scammers and rogue nations, has attracted plenty of critics, who say the digital currencies have limited utility and often are just Ponzi schemes.

But crypto has so far defied naysayers and survived multiple prolonged price drops in its short lifespan. Wealthy players in the crypto industry, which felt unfairly targeted by the Biden administration, spent heavily to help Trump win last November’s election. Bitcoin has surged in price since Trump’s victory, topping $100,000 for the first time last month before briefly sliding down to about $90,000. On Friday, it rose about 5%. It jumped more than $9,000 early Monday, according to CoinDesk.

Two years ago, bitcoin was trading at about $20,000.

Trump’s picks for key cabinet and regulatory positions are stocked with crypto supporters, including his choice to lead the Treasury and Commerce departments and the head of the Securities and Exchange Commission.

Read More: Why CEOs Are Cheering Donald Trump’s Pick for Treasury Secretary

Key industry players held a first ever “Crypto Ball” on Friday to celebrate the first “crypto president.” The event was sold out, with tickets costing several thousand dollars.

Here’s a look at some of the actions Trump might take in the early days of his administration:

Crypto council

As a candidate Trump promised that he would create a special advisory council to provide guidance on creating “clear” and “straightforward” regulations on crypto within the first 100 days of his presidency.

Details about the council and its membership are still unclear, but after winning November’s election, Trump named tech executive and venture capitalist David Sacks to be the administration’s crypto “czar.” Trump also announced in late December that former North Carolina congressional candidate Bo Hines will be the executive director of the “Presidential Council of Advisers for Digital Assets.”

At last year’s bitcoin conference, Trump told crypto supporters that new regulations “will be written by people who love your industry, not hate your industry.” Trump’s pick to lead the SEC, Paul Atkins, has been a strong advocate for cryptocurrencies.

Read More: How the Crypto World Learned to Love Donald Trump, J.D. Vance, and Project 2025

Crypto investors and companies chafed at what they said was a hostile Biden administration that went overboard with unfair enforcement actions and accounting policies that have stifled innovation in the industry—particularly at the hands of outgoing SEC Chairman Gary Gensler.

“As far as general expectations from the Trump Administration, I think one of the best things to bet on is a tone change at the SEC,” said Peter Van Valkenburgh, the executive director of the advocacy group Coin Center.

Gensler, who is set to leave as Trump takes office, said in a recent interview with Bloomberg that he’s proud of his office’s actions to police the crypto industry, which he said is “rife with bad actors.”

Strategic bitcoin reserve

Trump also promised that as president he’ll ensure the U.S. government stockpiles bitcoin, much like it already does with gold. At the bitcoin conference earlier this summer, Trump said the U.S. government would keep, rather than auction off, the billions of dollars in bitcoin it has seized through law enforcement actions.

Crypto advocates have posted a draft executive order online that would establish a “Strategic Bitcoin Reserve” as a “permanent national asset” to be administered by the Treasury Department through its Exchange Stabilization Fund. The draft order calls for the Treasury Department to eventually hold at least $21 billion in bitcoin.

Republican Sen. Cynthia Lummis of Wyoming has proposed legislation mandating the U.S. government stockpile bitcoin, which advocates said would help diversify government holdings and hedge against financial risks. Critics say bitcoin’s volatility makes it a poor choice as a reserve asset.

Creating such a stockpile would also be a “giant step in the direction of bitcoin becoming normalized, becoming legitimatized in the eyes of people who don’t yet see it as legitimate,” said Zack Shapiro, an attorney who is head of policy at the Bitcoin Policy Institute.

Ross Ulbricht

At the bitcoin conference earlier this year, Trump received loud cheers when he reiterated a promise to commute the life sentence of Ross Ulbricht, the convicted founder of the drug-selling website Silk Road that used crypto for payments.

Ulbricht’s case has energized some crypto advocates and Libertarian activists, who believe government investigators overreached in building their case against Silk Road.
