The Next AI Battle: Who Can Get the Most Nvidia Chips in One Place
These Startups Are Finally Bringing EV Chargers to America's Cities
Amazon Web Services Launches Quantum-Computing Advisory Program
Amazon Invests an Additional $4 Billion in Anthropic, an OpenAI Rival
U.K. Competition Watchdog Recommends Investigating Apple, Google Mobile Ecosystems
EU Drops Probe of Apple's Treatment of Rival Audiobook, Ebook Developers in App Store
Tech, Media & Telecom Roundup: Market Talk
Has AI Progress Really Slowed Down?
For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter if only they found ways to continue making them bigger. This wasn’t merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements—regardless of whether the system was designed to recognize images, transcribe speech, or generate language. Noticing the same trend, in 2020, OpenAI coined the term “scaling laws,” which has since become a touchstone of the industry.
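These “scaling laws” are empirical regularities, not mathematical theorems. As background (the form below comes from OpenAI’s 2020 paper, not from this article, and the constants are fitted to data rather than fundamental), test loss $L$ was found to fall as a power law in model parameters $N$, dataset size $D$, or training compute $C$, whenever the other two are not bottlenecks:

```latex
% Empirical scaling laws (fitted constants, not fundamental ones):
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The fitted exponents $\alpha$ are small, roughly 0.05 to 0.1, which is why each constant-factor improvement in loss has historically demanded an order-of-magnitude increase in scale.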
This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today’s articulate chatbots.
But now, that bigger-is-better gospel is being called into question.
Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same “intelligence improvements.”
What are tech companies saying?
Still, many leading AI companies seem confident that progress is marching full steam ahead. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said “we haven’t seen any signs of deviations from scaling laws.” OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google’s Gemini model took GPT-4o’s top spot on a popular AI-performance leaderboard, the company’s CEO, Sundar Pichai, posted to X saying “more to come.”
Read more: The Researcher Trying to Glimpse the Future of AI
Recent releases paint a somewhat mixed picture. Anthropic has updated its medium-sized model, Sonnet, twice since its release in March, making it more capable than the company’s largest model, Opus, which has not received such updates. In June, the company said Opus would be updated “later this year,” but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company’s larger Gemini Ultra model has yet to receive an update. OpenAI’s recently released o1-preview model outperforms GPT-4o on several benchmarks, but on others it falls short. o1-preview was reportedly called “GPT-4o with reasoning” internally, suggesting the underlying model is similar in scale to GPT-4o.
Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, “we’ve failed deeply as a company,” Amodei said last week, offering a glimpse at the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever’s new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.
“They had these things they thought were mathematical laws and they’re making predictions relative to those mathematical laws and the systems are not meeting them,” says Gary Marcus, a leading voice on AI, and author of several books including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally “hit a wall”—something he’s warned could happen since 2022. “I didn’t know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck,” he says.
Have we run out of data?
A slowdown could be a reflection of the limits of current deep learning techniques, or simply that “there’s not enough fresh data anymore,” Marcus says. It’s a hypothesis that has gained ground among some who follow AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to how people are more likely to misinterpret your intentions over text messaging, as opposed to in person, as an example of text data’s limitations. “I think it’s like that with language models,” she says.
The lack of data is particularly acute in certain domains like reasoning and mathematics, where we “just don’t have that much high quality data,” says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn’t mean scaling is likely to stop—just that scaling alone might be insufficient. “At every order of magnitude scale up, different innovations have to be found,” he says, noting that it does not mean AI progress will slow overall.
Read more: Is AI About to Run Out of Data? The History of Oil Says No
It’s not the first time critics have pronounced scaling dead. “At every stage of scaling, there are always arguments,” Amodei said last week. “The latest one we have today is, ‘we’re going to run out of data, or the data isn’t high quality enough, or models can’t reason.’ … I’ve seen the story happen enough times to really believe that probably the scaling is going to continue,” he said. Reflecting on OpenAI’s early days on Y Combinator’s podcast, company CEO Sam Altman partially credited the company’s success to a “religious level of belief” in scaling—a concept he says was considered “heretical” at the time. In response to a recent post on X in which Marcus said his predictions of diminishing returns were right, Altman posted “there is no wall.”
There could be another reason we are hearing reports of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with the sense that expectations had been extremely high. “They expected AI was going to be able to already write a PhD thesis,” he says. “Maybe it feels a bit… anticlimactic.”
A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. “We tend to forget that the jump from GPT-3 to GPT-4 was like 100x scale in compute,” Sevilla says. “If you want to do something like 100 times bigger than GPT-4, you’re gonna need up to a million GPUs.” That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk’s 100,000 GPU supercomputer in Memphis—the largest of its kind—which was reportedly built from start to finish in three months.
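Sevilla’s “up to a million GPUs” figure can be sanity-checked with rough arithmetic. Every input below is an outside estimate, not a figure from the article: public analyses put GPT-4’s training compute on the order of 2×10²⁵ floating-point operations, and a modern AI GPU sustains roughly a third of its ~10¹⁵ FLOP/s peak during real training runs.

```python
# Back-of-envelope check of the "up to a million GPUs" claim.
# All constants are rough public estimates, not figures from the article.
gpt4_flop = 2e25                 # estimated GPT-4 training compute (FLOP)
target_flop = 100 * gpt4_flop    # a model "100 times bigger than GPT-4"
gpu_peak_flops = 1e15            # peak throughput of a modern AI GPU (FLOP/s)
utilization = 0.35               # fraction of peak typically achieved
train_seconds = 90 * 24 * 3600   # a three-month training run

gpus_needed = target_flop / (gpu_peak_flops * utilization * train_seconds)
print(f"roughly {gpus_needed:,.0f} GPUs")
```

Under these assumptions the answer lands in the high hundreds of thousands of GPUs, consistent with Sevilla’s order-of-magnitude estimate; shorter runs or lower utilization push the count past a million.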
In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI’s o1-preview has been heralded as one such example, which outperforms previous models on reasoning problems by being allowed more time to think. “This is something we already knew was possible,” Sevilla says, gesturing to an Epoch AI report published in July 2023.
Read more: Elon Musk’s New AI Data Center Raises Alarms Over Pollution
Policy and geopolitical implications
Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall Street. The perceived speed of technological advancement following GPT-4’s release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It’s an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.
Much of the U.S.’s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden’s sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government regarding models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, under the assumption that it would target future, larger models. The same assumption underpins export restrictions on the sale of AI chips and technologies to certain countries, which are designed to limit China’s access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may have a smaller impact on slowing China’s AI progress.
“The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology,” says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. “stalls at the frontier,” he says, we could see a national push to drive breakthroughs in AI. He says a slip in the U.S.’s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.
Whether we’re seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. “It’s unclear to me that a few months is a substantial enough reference point,” Singer says. “You could hit a plateau and then hit extremely rapid gains.”
Landmark Bill to Ban Children From Social Media Introduced in Australia’s Parliament
MELBOURNE — Australia’s communications minister introduced a world-first law into Parliament on Thursday that would ban children under 16 from social media, saying online safety was one of parents’ toughest challenges.
Michelle Rowland said TikTok, Facebook, Snapchat, Reddit, X and Instagram were among the platforms that would face fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent young children from holding accounts.
“This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia,” Rowland told Parliament.
“There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content unfiltered and infinite,” she added.
X owner Elon Musk warned that Australia intended to go further, posting on his platform: “Seems like a backdoor way to control access to the Internet by all Australians.”
The bill has wide political support. After it becomes law, the platforms would have one year to work out how to implement the age restriction.
“For too many young Australians, social media can be harmful,” Rowland said. “Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material. One quarter have been exposed to content promoting unsafe eating habits.”
Government research found that 95% of Australian caregivers find online safety to be one of their “toughest parenting challenges,” she said. Social media companies had a social responsibility and could do better in addressing harms on their platforms, she added.
“This is about protecting young people, not punishing or isolating them, and letting parents know that we’re in their corner when it comes to supporting their children’s health and wellbeing,” Rowland said.
Read More: Teens Are Stuck on Their Screens. Here’s How to Protect Them
Child welfare and internet experts have raised concerns about the ban, including isolating 14- and 15-year-olds from their already established online social networks.
Rowland said there would not be age restrictions placed on messaging services, online games or platforms that substantially support the health and education of users.
“We are not saying risks don’t exist on messaging apps or online gaming. While users can still be exposed to harmful content by other users, they do not face the same algorithmic curation of content and psychological manipulation to encourage near-endless engagement,” she said.
The government announced last week that a consortium led by British company Age Check Certification Scheme has been contracted to examine various technologies to estimate and verify ages.
In addition to removing children under 16 from social media, Australia is also looking for ways to prevent children under 18 from accessing online pornography, a government statement said.
Age Check Certification Scheme’s chief executive Tony Allen said Monday the technologies being considered included age estimation and age inference. Inference involves establishing a series of facts about individuals that point to them being at least a certain age.
Rowland said the platforms would also face fines of up to AU$50 million ($33 million) if they misused personal information of users gained for age-assurance purposes.
Information used for age assurances must be destroyed after serving that purpose unless the user consents to it being kept, she said.
Digital Industry Group Inc., an advocate for the digital industry in Australia, said with Parliament expected to vote on the bill next week, there might not be time for “meaningful consultation on the details of the globally unprecedented legislation.”
“Mainstream digital platforms have strict measures in place to keep young people safe, and a ban could push young people on to darker, less safe online spaces that don’t have safety guardrails,” DIGI managing director Sunita Bose said in a statement. “A blunt ban doesn’t encourage companies to continually improve safety because the focus is on keeping teenagers off the service, rather than keeping them safe when they’re on it.”
U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns
“AI is a technology like no other in human history,” U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. “Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn’t the smart thing to do.”
Raimondo’s remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.
Raimondo suggested participants keep two principles in mind: “We can’t release models that are going to endanger people,” she said. “Second, let’s make sure AI is serving people, not the other way around.”
Read More: How Commerce Secretary Gina Raimondo Became America’s Point Woman on AI
The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and the U.S. governments announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI Summit in Seoul, Raimondo had announced the creation of the network.
In a joint statement, the members of the International Network of AI Safety Institutes—which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore—laid out their mission: “to be a forum that brings together technical expertise from around the world,” “…to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community,” and “…to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development.”
In the lead-up to the convening, the U.S. AISI, which serves as the network’s inaugural chair, also announced a new government taskforce focused on the technology’s national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aim to “identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology,” with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.
The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not get to “write the rules of the road.” Earlier Wednesday, Chinese lab DeepSeek announced a new “reasoning” model thought to be the first to rival OpenAI’s own reasoning model, o1, which the company says is “designed to spend more time thinking” before it responds.
On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a “Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” which the commission defined as “systems as good as or better than human capabilities across all cognitive domains” that “would surpass the sharpest human minds at every task.”
Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday’s event, Anthropic CEO Dario Amodei—who believes AGI-like systems could arrive as soon as 2026—cited “loss of control” risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting “we also need to be really careful about how we do it.”
Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model—the upgraded version of Anthropic’s Claude 3.5 Sonnet. The evaluation focused on assessing the model’s biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be “routinely circumvented,” which they noted is “consistent with prior research on the vulnerability of other AI systems’ safeguards.”
The San Francisco convening set out three priority topics that stand to “urgently benefit from international collaboration”: managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.
While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, “to accelerate the design and implementation of frontier AI safety frameworks.” And in February, France will host its “AI Action Summit,” following the Summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate.
Raimondo on Wednesday emphasized the importance of integrating safety with innovation when it comes to something as rapidly advancing and as powerful as AI. “It has the potential to replace the human mind,” she said. “Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation. We need that virtuous cycle.”
Apple Offers $100 Million Investment in Indonesia to Lift iPhone 16 Ban
U.S. Antitrust Regulators Seek to Break Up Google, Force Sale of Chrome Browser
U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine after a court found it had maintained an abusive monopoly over the past decade.
The proposed breakup, floated in a 23-page document filed late Wednesday by the U.S. Department of Justice, calls for sweeping punishments that would include a sale of Google’s industry-leading Chrome web browser and restrictions to prevent Android from favoring its own search engine.
A sale of Chrome “will permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” Justice Department lawyers argued in their filing.
Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.
The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden’s administration believe Google should be punished following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist.
The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C. court hearings on Google’s punishment are scheduled to begin in April and Mehta is aiming to issue his final decision before Labor Day.
If Mehta embraces the government’s recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.
Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple’s iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently-launched artificial intelligence platform, Gemini.
Regulators also want Google to license the search index data it collects from people’s queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.
Kent Walker, Google’s chief legal officer, lashed out at the Justice Department for pursuing “a radical interventionist agenda that would harm Americans and America’s global technology.” In a blog post, Walker warned the “overly broad proposal” would threaten personal privacy while undermining Google’s early leadership in artificial intelligence, “perhaps the most important innovation of our time.”
Wary of Google’s increasing use of artificial intelligence in its search results, regulators also advised Mehta to ensure websites will be able to shield their content from Google’s AI training techniques.
The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.
“The playing field is not level because of Google’s conduct, and Google’s quality reflects the ill-gotten gains of an advantage illegally acquired,” the Justice Department asserted in its recommendations. “The remedy must close this gap and deprive Google of these advantages.”
It’s still possible that the Justice Department could ease off attempts to break up Google, especially if Trump takes the widely expected step of replacing Assistant Attorney General Jonathan Kanter, who was appointed by Biden to oversee the agency’s antitrust division.
Read More: How a Second Trump Administration Will Change the Domestic and World Order
Although the case targeting Google was originally filed during the final months of Trump’s first term in office, Kanter oversaw the high-profile trial that culminated in Mehta’s ruling against Google. Working in tandem with Federal Trade Commission Chair Lina Khan, Kanter took a get-tough stance against Big Tech that triggered other attempted crackdowns on industry powerhouses such as Apple and discouraged many business deals from getting done during the past four years.
Trump recently expressed concerns that a breakup might destroy Google but didn’t elaborate on alternative penalties he might have in mind. “What you can do without breaking it up is make sure it’s more fair,” Trump said last month. Matt Gaetz, the former Republican congressman whom Trump nominated to be the next U.S. Attorney General, has previously called for the breakup of Big Tech companies.
Gaetz faces a tough confirmation hearing.
Read More: Here Are the New Members of Trump’s Administration So Far
This latest filing gave Kanter and his team a final chance to spell out measures that they believe are needed to restore competition in search. It comes six weeks after the Justice Department first floated the idea of a breakup in a preliminary outline of potential penalties.
But Kanter’s proposal is already raising questions about whether regulators seek to impose controls that extend beyond the issues covered in last year’s trial, and—by extension—Mehta’s ruling.
Banning the default search deals that Google now pays more than $26 billion annually to maintain was one of the main practices that troubled Mehta in his ruling.
It’s less clear whether the judge will embrace the Justice Department’s contention that Chrome needs to be spun out of Google, or that Android should be completely walled off from its search engine.
“It is probably going a little beyond,” Syracuse University law professor Shubha Ghosh said of the Chrome breakup. “The remedies should match the harm; they should match the transgression. This does seem a little beyond the pale.”
Google rival DuckDuckGo, whose executives testified during last year’s trial, asserted the Justice Department is simply doing what needs to be done to rein in a brazen monopolist.
“Undoing Google’s overlapping and widespread illegal conduct over more than a decade requires more than contract restrictions: it requires a range of remedies to create enduring competition,” Kamyl Bazbaz, DuckDuckGo’s senior vice president of public affairs, said in a statement.
Trying to break up Google harks back to a similar punishment initially imposed on Microsoft a quarter century ago, following another major antitrust trial that culminated in a federal judge deciding the software maker had illegally used its Windows operating system for PCs to stifle competition.
However, an appeals court overturned an order that would have broken up Microsoft, a precedent many experts believe will make Mehta reluctant to go down a similar road with the Google case.