One company appears to be thriving as part of NASA’s return to the Moon

The second Intuitive Machines lander is prepared for hot-fire testing this week. (credit: Intuitive Machines)

One of the miracles of the Apollo Moon landings is that they were televised, live, for all the world to see. This transparency defused doubts about whether the lunar landings really happened, and the broadcasts were watched by billions of people.

However, as remarkable a technical achievement as it was to broadcast from the Moon in 1969, the video was grainy and black and white. As NASA contemplates a return to the Moon as part of the Artemis program, it wants much higher resolution video and communications with its astronauts on the lunar surface.

To that end, NASA announced this week that it had awarded a contract to Houston-based Intuitive Machines for "lunar relay services." Essentially, this means Intuitive Machines will be responsible for building a small constellation of satellites around the Moon that will beam data back to Earth from the lunar surface.

How AI Could Transform Fast Fashion for the Better—and Worse

Since Shein became the world’s most popular online shopping destination—with seemingly unbeatable prices, and influencers posting “haul” videos to show off their purchases on social media—the Chinese fast-fashion giant has raised questions over how it produces its plethora of merchandise at dizzying speeds. The answer: AI-powered algorithms that pick up changes in customer demand and interest, allowing the company to adjust its supply chain in real time. As a result, Shein reportedly lists as many as 600,000 items on its online platform at any given moment, selling to customers in over 220 countries and regions globally.

But the company has also long been under scrutiny for its poor record on environmental sustainability, becoming fashion’s biggest polluter in 2023. Investigations into Shein’s supply chains have found severe labor rights violations, with factory workers in Southern Chinese manufacturing plants reporting grueling 75-hour work weeks to keep up with demand. 

Shein claims AI is the answer to solving these problems, too. During a retail conference in Berlin in January, Peter Pernot-Day, Shein’s head of global strategy and corporate affairs, explained that more than 5,000 Shein suppliers recently gained access to an AI software platform to analyze customer preferences—information that the company then uses to produce small batches of merchandise to match demand in real time. “We are using machine-learning technologies to accurately predict demand in a way we think is cutting-edge,” Pernot-Day said. “The net effect of this is reducing inventory waste.”

Shein isn’t the only company to tout the benefits of AI in transforming the fast-fashion business. Many of its competitors, including H&M and Zara, have also turned to machine-learning technology to analyze sales data and understand customer demand by predicting trends, tracking inventory levels, and cutting down on operational costs. Retail experts are equally optimistic about the power of generative AI: a recent report by McKinsey suggests that AI could add up to $275 billion to the operating profits of the apparel, fashion, and luxury sectors in the next three to five years.

“We are already seeing significant shifts in fast fashion with the use of GenAI,” says Holger Harreis, a senior partner at McKinsey who co-authored the report. Harreis adds that in the long term, this could result in more personalized processes in fashion, “even to levels of quasi-bespoke tailoring with colors, styles, and sizes—all delivered by a heavily genAI-led process with human interventions focused on where humans add the most value.”

As Shein uses AI to optimize its supply chain, however, environmental experts question whether these claimed efficiencies are truly improving outcomes. “Without strong ethical, social, and environmental standards in place, AI could just as easily be driving faster production and overconsumption,” says Lewis Perkins, the president of the Apparel Impact Institute, a global nonprofit that measures the fashion industry’s climate impact. 

Companies promise waste reduction as consumption soars

As the world’s second-largest industrial polluter, fast fashion releases 1.2 billion tonnes of carbon emissions every year, accounting for 10% of global emissions, according to research from the European Environment Agency. But no company has been as prolific in generating emissions in recent years as Shein. The company’s 2023 sustainability report recorded a carbon footprint of 16.7 million tonnes last year—nearly triple what it produced in the previous three years. Shein’s footprint has also soared past that of Zara, previously fashion’s biggest emitter, and is roughly double that of companies like Nike, H&M, and LVMH.

Founded by Chinese billionaire Sky Xu in 2008, Shein became a go-to destination for online shopping during the pandemic after listing nearly 600,000 items on its marketplace. By November 2022, it accounted for 50% of fast-fashion sales in the U.S. One in four Gen Z consumers now shops at Shein, while 44% make at least one Shein purchase monthly, according to research from EMARKETER. Shein reports that 61% of its carbon footprint came from its supply chain, while 38% came from transporting goods from its facilities to customers. In July alone, Shein sent about 900,000 packages to customers by air.

The sustainability report also highlighted how the company plans to reduce emissions. That includes moving production hubs closer to customers, launching a $222 million circularity fund to promote textile-to-textile recycling, and setting a 25% reduction target for emissions by 2030. While Shein did not respond to TIME’s request for comment, a company spokesperson recently told Grist that Shein is increasing inventory in U.S. warehouses and using cargo ships to deliver to customers. The company also reiterated that AI would further help to reduce waste, asserting that “we do not see growth as antithetical to sustainability.”

Read More: Shein Is the World’s Most Popular Fashion Brand—at a Huge Cost to Us All

There is new research that could back these claims. A study by the UNSW Institute for Climate Risk & Response found that companies can harness AI-driven technologies for climate action to analyze their carbon footprint and devise strategies to reduce it.

“In short, AI will improve the firms’ entire value chain in ways that help them avoid, mitigate, or offset the environmental impacts of their products, services, or processes,” says David Grant, who co-authored this study with colleague Shahriar Akter. Grant adds that much of this work can be done far more quickly and accurately with AI than by humans. “The benefits to the environment, specifically in respect of climate change, are thus far greater than would otherwise be achieved,” he says.

Still, the study’s authors warn of the risks AI poses in the fast-fashion supply chain, specifically a “vicious circle of overconsumption, pollution, and exploitation,” says Akter, pointing to Shein’s ability to predict demand and manufacture garments at “lightning-fast speed,” which puts added strain on factory workers to churn them out even faster.

Algorithms feed on copyrighted work

Generative AI’s risks don’t stop at the supply chain. Akter from UNSW adds that the technology is also susceptible to breaching copyrights and compromising the artistic quality of human creativity. 

In April, Connecticut-based artist and designer Alan Giana filed a lawsuit in New York’s Southern District against Shein, alleging that the company’s use of AI, machine learning, and algorithms was systematically infringing on his copyrighted work. Citing “Coastal Escape,” artwork that appeared on Shein’s website without permission or attribution, the complaint alleges that “widespread copyright infringement is baked into the business” by using sophisticated electronic systems that “algorithmically scour the internet for popular works by artists.” It went further, stating that the infringement likely extends to “thousands or tens of thousands of other persons” in the U.S.

Shein has faced dozens of similar lawsuits alleging design theft in the past. In July 2023, three graphic designers in China sued Shein for using “secretive algorithms” to identify trends and copy their designs. The complaint went so far as to say that the company’s copyright infringement was so aggressive that it amounted to “racketeering.” In response, Shein told NBC that it took all claims of infringement “seriously”: “We take swift action when complaints are raised by valid IP rights holders,” it stated.

Akter from UNSW says that generative AI-based designs “might result in breaching copyrights and put a company in a questionable situation,” adding that it could also result in “algorithmic monoculture,” pushing fashion companies to rely on similar algorithms and causing them to lose the necessary creativity in fashion retailing. Moreover, he says that AI-based marketing models could also result in algorithmic bias extending to race, gender, sexual orientation, social class, religion, and ethnicity. 

But despite these risks, more brands are investing significant amounts of their budget in AI. McKinsey’s Harreis is optimistic about its ability to optimize production and reduce waste, but he adds that companies still face a big challenge. “In order for tech to add value, companies need to realize that it is never just about tech, it takes rewiring the entire organization,” he says. 

AI can help bring a systemic shift in design, production, and consumption, says Perkins at the Apparel Impact Institute, but only if it is “paired with responsible business practices, transparent supply chains, and a commitment to reducing overall impact.” It’s not impossible to imagine what this might look like. Perkins points to innovators like Made2Flow, which uses AI-driven data analytics to measure and optimize environmental impact across the fashion supply chain. Similarly, Smartex.Ai leverages AI to detect and reduce fabric defects, leading to lower material waste.

But if AI is used solely to speed up production and push more products to market, it could “fuel overconsumption,” Perkins warns. “Until there’s clear evidence that AI is being used to genuinely reduce the fashion industry’s environmental footprint, I remain cautious about how much positive impact this model is actually having,” he says.

6 Questions About the Deadly Exploding Pager Attacks in Lebanon, Answered

When thousands of pagers and other wireless devices simultaneously exploded across Lebanon and parts of Syria this week, killing at least 15 people and injuring thousands more, it exposed what one Hezbollah official described as the “biggest security breach” the Iran-backed militant group has experienced in nearly a year of war with Israel. In a period replete with violent attacks across the region—from Israel’s bombardment of the Gaza Strip to the targeted assassinations of militant leaders in Iran and Lebanon—this was perhaps the most sophisticated and daring one yet.

Hezbollah confirmed that eight of its fighters were killed in the blasts that took place on Tuesday, according to the BBC. Further such explosions, this time involving two-way radios, were reported on Wednesday. Civilians haven’t been spared from the onslaught. At least two children were killed in Tuesday’s blasts, according to the country’s health minister, and thousands of others were wounded, some critically. Iran’s ambassador to Lebanon lost an eye as a result of one of the blasts, according to the New York Times.

Officials in the U.S. and elsewhere have left little doubt as to who might be responsible. Hezbollah and Lebanese officials quickly pointed to Israel, which in addition to waging its ongoing war with Hamas in Gaza has also been exchanging near-daily blows with Hezbollah across its northern border with Lebanon since Oct. 7. Earlier this week, the Israeli government announced it was expanding its war aims to include the return of its northern residents who were evacuated from towns along the country’s northern frontier in the immediate aftermath of Oct. 7—a goal that the country’s defense minister Yoav Gallant said would be achieved through “military action.” Days earlier, Lebanese residents on the other side of the border received Israeli military leaflets ordering them to leave the area. The Israeli military has since described their distribution as an “unauthorized action” and said that no evacuation is underway.

Still, expert observers warn that this attack, and any retaliation that might follow it, could raise the prospect of a wider war breaking out. Here are six of the biggest questions—and answers—that remain.

How were the explosions triggered?

Hezbollah’s widespread use of pagers—hardly considered a high-tech form of communication by most standards—was primarily a security precaution. The militant group had reportedly ordered its members to forgo using mobile phones earlier this year due to concerns that they could be more easily tracked. In their place they were given AR-924 pagers, thousands of which were sourced from a Taiwan-based brand called Gold Apollo. Although the company confirmed it had licensed the use of its brand for these pagers, it denied playing any role in their manufacturing, which it said was handled by a Budapest-based firm called BAC Consulting.

Footage from one of the blasts—which TIME was unable to independently verify, but which was deemed credible by the BBC—showed the moment one of these pagers exploded, emitting smoke and causing the person who appeared to be carrying it to fall to the floor.

Experts who spoke with TIME say that this wasn’t a cyberattack. Rather, it was likely the result of an infiltration in the supply chain, which makes how the pagers were manufactured and who was involved all the more critical. “The explosions were likely triggered by pre-implanted explosives, possibly activated via a radio signal, as simple as the paging system itself,” says Lukasz Olejnik, an independent researcher and consultant in cybersecurity and privacy. “The supply chain was likely compromised at some point, either in the factory or during delivery.”

While such an operation would have been difficult to execute, it isn’t beyond the capabilities of a country like Israel. “Israel is obviously still the master of intelligence in the region,” Andreas Krieg, an associate professor for security studies at King’s College London, tells TIME, noting that “it has a network of intelligence and information collection that is unparalleled.”

What is Israel saying about it?

Israel has a long history of pulling off complex attacks of the kind seen in Lebanon. But as with the recent assassination of Hamas leader Ismail Haniyeh in Iran, it rarely takes responsibility for them. When TIME inquired about Israel’s involvement in the pager explosions, an Israeli military spokesperson declined to confirm or deny whether the country was behind the attack, offering only a two-word response: “no comment.”

But experts say that all obvious signs point to Israeli involvement. “No one else is benefiting from it, but Israel, in terms of paralyzing Hezbollah,” says Krieg, noting that the militant group has been the most strategic threat to Israel for at least the past three decades. “There are loads of people who don’t like Hezbollah in the region, including Arab countries,” he adds, “but none of them have the capability to actually do something as sophisticated as this.”

Why now?

There could be any number of reasons why Israel would opt to launch this attack now. One theory, attributed to senior intelligence sources and reported by Al-Monitor, was that the compromised status of the pagers was at risk of being imminently discovered. Another is that Israel perhaps hoped the attack would act as a deterrent following recent revelations that the country’s security service foiled an attempt by Hezbollah to assassinate a former senior Israeli security official using a remotely detonated explosive device.

There’s also the possibility that Israel, having made the return of its displaced population to their homes in the north one of its war aims, wanted to pressure Hezbollah into moving its forces away from the Israel-Lebanon border.

While some observers fear that the attack could have been initiated as a prelude to a wider Israeli military incursion in Lebanon, Krieg says such an escalation would be in neither party’s interests, recent comments from the Israeli defense minister notwithstanding. “This paralysis of [Hezbollah] being unable to communicate effectively with one another is certainly something that could be a preparation, a first step, of such an operation,” he says. “But I don’t think that’s likely.”

Will Hezbollah retaliate?

Hezbollah pledged on Wednesday that it will continue its military operations against Israel in order to “support Gaza,” and warned that Israel will face a “difficult reckoning” as a result of the pager attack, which it called a “massacre.” The armed group’s leader, Hassan Nasrallah, is expected to deliver a speech addressing the attack on Thursday.

How are governments around the world reacting?

A State Department spokesperson, who declined to comment on suspicions that the attacks were carried out by Israel, confirmed that the U.S. had no prior knowledge of the attack, telling reporters on Tuesday that Washington was neither aware nor involved with the operation.

“That’s probably true because I think the [Biden] administration would try and talk them out of it, because they would say it’s escalatory,” Michael Allen, the former National Security Council director for President George W. Bush, tells TIME.

Across the Atlantic, the E.U.’s foreign policy chief Josep Borrell condemned the attacks in a statement, warning that they “endanger the security and stability of Lebanon, and increase the risk of escalation in the region.” He notably did not mention Israel in the statement, opting instead to urge all stakeholders to “avert an all-out war.”

The Iranian government, which backs and sponsors Hezbollah, condemned the attack as a “terrorist act.”

Does this attack constitute a war crime?

While the attack may have targeted pagers used by Hezbollah, that doesn’t necessarily mean that those in possession of them were armed militants. “Hezbollah is obviously the fighting wing, but Hezbollah is [also] a political party, it’s a charity organization, it’s a civil societal movement as well,” says Krieg. “And so this pager system would have been distributed among civilians as well—people who are not fighters, who are not contributing to the war effort, and they were targeted as well.”

It’s precisely for this reason that the use of booby traps is prohibited under international law. “The use of an explosive device whose exact location could not be reliably known would be unlawfully indiscriminate, using a means of attack that could not be directed at a specific military target and as a result would strike military targets and civilians without distinction,” Lama Fakih, the Beirut-based Middle East and North Africa director at Human Rights Watch, said in a statement.

“Simultaneous targeting of thousands of individuals, whether civilians or members of armed groups, without knowledge as to who was in possession of the targeted devices, their location and their surroundings at the time of the attack, violates international human rights law and, to the extent applicable, international humanitarian law,” Volker Turk, the U.N.’s High Commissioner for Human Rights, said in a statement on Wednesday, adding that those who ordered and carried out the attacks “must be held to account.”

Why Sam Altman Is Leaving OpenAI’s Safety Committee

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.

As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, like those used to evaluate o1, OpenAI’s recently released series of reasoning models, for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned—he was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team, resigned. The team had focused on ensuring that AI systems remain under human control even if they surpass human-level intelligence. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The team was disbanded following their departure. The same month, OpenAI came under fire for asking departing employees to sign agreements preventing them from criticizing the company or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)

Elon Musk’s New AI Data Center Raises Alarms Over Pollution

In July, Elon Musk made a bold prediction: that his artificial intelligence startup xAI would release “the most powerful AI in the world,” a model called Grok 3, by this December. The bulk of that AI’s training, Musk said, would happen at a “massive new training center” in Memphis, which he bragged had been built in 19 days.

But many residents of Memphis were taken by surprise, including city council members who said they were given no input about the project or its potential impacts on the city. Data centers like this one use a vast amount of electricity and water. And in the months since, an outcry has grown among community members and environmental groups, who warn of the plant’s potential negative impact on air quality, water access, and grid stability, especially for nearby neighborhoods that have suffered from industrial pollution for decades. These activists also contend that the company is illegally operating gas turbines.

“This continues a legacy of billion-dollar conglomerates who think that they can do whatever they want to do, and the community is just not to be considered,” KeShaun Pearson, executive director of the nonprofit Memphis Community Against Pollution, tells TIME. “They treat southwest Memphis as just a corporate watering hole where they can get water at a cheaper price and a place to dump all their residue without any real oversight or governance.”

Some local leaders and utility companies, conversely, contend that xAI will be a boon for local infrastructure, employment, and grid modernization. Given the massive scale of this project, xAI’s foray into Memphis will serve as a litmus test of whether the AI-fueled data center boom might actually improve American infrastructure—or harm the disadvantaged just like so many power-hungry industries of decades past.

“The largest data center on the planet”

In order for AI models to become smarter and more capable, they must be trained on vast amounts of data. Much of this training now happens in massive data centers around the world, which burn through electricity often accessed directly from public power sources. A recent report from Morgan Stanley estimates that data centers will emit three times more carbon dioxide by the end of the decade than if generative AI had not been developed. 

Read More: How AI Is Fueling a Boom in Data Centers and Energy Demand

The first version of Grok launched last year, and Musk has said he hopes it will be an “anti-woke” competitor to ChatGPT. (In practice, for example, this means it is able to generate controversial images that other AI models will not, including Nazi Mickey Mouse.) In recent interviews, Musk has stressed the importance of Grok ingesting as much data as possible to catch up with its competitors. So xAI built its data center, called Colossus, in southwest Memphis, near Boxtown, a historically Black community, to do the bulk of the training. Ebby Amir, a technologist at xAI, boasted that the new site was “the largest AI datacenter on the planet.”

Local leaders said the plant would offer “good-paying jobs” and “significant additional revenues” for the local utility company. Memphis Mayor Paul Young praised the project in a statement, saying that the new xAI training center would reside on an “ideal site, ripe for investment.” 

But other local officials and community members soon became frustrated with the project’s lack of details. The Greater Memphis Chamber and Memphis Light, Gas and Water Division (MLGW) signed a non-disclosure agreement with xAI, citing the confidentiality of economic development. Some Memphis council members heard about the project on the news. “It’s been pretty astounding the lack of transparency and the pace at which this project has proceeded,” Amanda Garcia, a senior attorney at the Southern Environmental Law Center, says. “We learn something new every week.”

For instance, there’s a major divide between how much electricity xAI wants to use, and how much MLGW can provide. In August, the utility company said that xAI would have access to 50 megawatts of power. But xAI wants to use triple that amount—which, for comparison, is enough energy to power 80,000 households. 

MLGW said in a statement to TIME that xAI is paying for the technical upgrades that enable it to double its power usage—and that in order for the company to reach the full 150 megawatts, there will need to be $1.7 million in improvements to a transmission line. “There will be no impact to the reliability or availability of power to other customers from this electric load,” the company wrote. It also added that xAI would be required to reduce its electricity consumption during times of peak demand, and that any infrastructure improvement costs would not be borne by taxpayers.

In response to complaints about the lack of communication with council members, MLGW wrote: “xAI’s request does not require approvals from the MLGW Board of Commissioners or City Council.” 

But community members worry whether Memphis’s utilities can handle such a large consumer of energy. In the past, the city’s power grid has been forced into rolling blackouts by ice storms and other severe weather events.

And Garcia, at the SELC, says that while xAI waits for more power to become available, the company has turned to extralegal measures to meet its demand, installing gas combustion turbines on the site that it is operating without a permit. Garcia says the SELC has observed the installation of 18 such turbines, which have the capacity to emit 130 tons of harmful nitrogen oxides per year. The SELC and community groups sent a letter to the Shelby County Health Department demanding their removal—but the health department responded that the turbines were outside its authority and referred them to the EPA. The EPA told NPR that it was “looking into the matter.” A representative for xAI did not immediately respond to a request for comment.

Much of Memphis is already smothered by harmful pollution. The American Lung Association currently gives Shelby County, which contains Memphis, an “F” grade for its smog levels, writing, “the air you breathe may put your health at risk.” A local TV report this year named Boxtown the most polluted neighborhood in Memphis, especially during the summer.

Boxtown and its surrounding neighborhoods have historically suffered from poverty and pollution. Southwest Memphis’s cancer rate is four times the national average, according to a 2013 study, and life expectancy in at least one South Memphis neighborhood is 10 years lower than other parts of the city, a 2020 study found. The Tennessee Valley Authority has been dumping contaminated coal ash in a nearby landfill. And a Sterilization Services of Tennessee facility was finally closed last year after emitting ethylene oxide into the air for decades, which the EPA linked to increased cancer risk in South Memphis.

A representative for the Greater Memphis Chamber, which worked to bring xAI to Memphis, wrote to TIME in response to a request for comment: “We will not be participating in your narrative.”

Potential impact on water

Environmentalists are also concerned about the facility’s use of water. “Industries are attracted to us because we have some of the purest water in the world, and it is dirt cheap to access,” says Sarah Houston, executive director of the local environmental nonprofit Protect Our Aquifer.

Data centers use water to cool their computers and stop them from overheating. So far xAI has drawn 30,000 gallons from the Memphis Sand Aquifer, the region’s drinking water supply, every day since beginning its initial operations, according to MLGW—who added that the company’s water usage would have “no impact on the availability of water to other customers.”

But Houston and other environmentalists are especially concerned because Memphis’s aging water infrastructure is more than a century old and has failed several winters in a row, leading to boil advisories and pleas to residents to conserve water during times of stress. “xAI is just an additional industrial user pumping this 2,000-year-old pure water for a non-drinking purpose,” Houston says. “When you’re cooling supercomputers, it doesn’t seem to warrant this super pure ancient water that we will never see again.”

Memphis’s drinking water has also been threatened by contamination. In 2022, the Environmental Integrity Project and Earthjustice claimed that a now-defunct coal plant in Memphis was leaking arsenic and other dangerous chemicals into the groundwater supply, and ranked it as one of the 10 worst contaminated coal ash sites in the country. And because xAI sits close to the contaminated well in question, Houston warns that its heavy water usage could exacerbate the problem. “The more you pump, the faster contaminants get pulled down towards the water supply,” she says.

MLGW contends that xAI’s use of Memphis’s drinking water is temporary, because xAI is assisting in “the design and proposed construction” of a graywater facility that will treat wastewater so that it can be used to cool data center machines. MLGW is also trying to get Musk to provide a Tesla Megapack, a utility-scale battery, as part of the development.

Houston says that these solutions will be beneficial to the city—if they come to fruition. “We fully support xAI coming to the table and being a part of this solution,” she says. “But right now, it’s been empty promises.”

“We’re not opposed to ethical economic development and business moving into town,” says Garcia. “But we need some assurance that it’s not going to make what is already an untenable situation worse.” 

Disproportionate harm

For Pearson, of Memphis Community Against Pollution, the arrival of xAI is concerning because, as someone who grew up in Boxtown, he says he has seen how other major corporations have treated the area. Over the years, Memphis has dangled tax breaks and subsidies to persuade industrial companies to set up shop nearby. But many of those projects have not led to lasting economic development, and have seemingly contributed to an array of health problems for nearby residents.

For instance, city, county and state officials lured the Swedish home appliance manufacturer Electrolux to Memphis in 2013 with $188 million in subsidies. The company’s president told NPR that it intended to provide good jobs and stay there long-term. Six years later, the company announced it would shut down its facility to consolidate resources at another location, laying off over 500 employees in a move that blindsided even Mayor Jim Strickland. Now, xAI has taken over that Electrolux plant, which spans 750,000 square feet.

“Companies choose Memphis because they believe it is the path of least resistance: They come here, build factories, pollute the air, and move on,” Pearson says.

Pearson says that community organizations in southwest Memphis have had no contact or dialogue with xAI about its plans in the area whatsoever; there has been no recruiting in the community for jobs, nor any training related to workforce development. When presented with claims that xAI will economically benefit the local community, he harbors many doubts.

“This is the same playbook, and the same talking points passed down and passed around by these corporate colonialists,” Pearson says. “For us, it is empty, it’s callous, and it’s just disingenuous to continue to regurgitate these things without actually having plans of implementation or inclusion.”

Instagram Introduces Teen Accounts, Other Sweeping Changes to Boost Child Safety Online

Instagram is introducing separate teen accounts for those under 18 as it tries to make the platform safer for children amid a growing backlash against how social media affects young people’s lives.

Beginning Tuesday in the U.S., U.K., Canada and Australia, anyone under 18 who signs up for Instagram will be placed into a teen account, and those with existing accounts will be migrated over the next 60 days. Teens in the European Union will see their accounts adjusted later this year.

Meta acknowledges that teenagers may lie about their age and says it will require them to verify their ages in more instances, like if they try to create a new account with an adult birthday. The Menlo Park, California-based company also said it is building technology that proactively finds teen accounts that pretend to be grownups and automatically places them into the restricted teen accounts.

Read More: The U.S. Surgeon General Fears Social Media Is Harming the ‘Well-Being of Our Children’

The teen accounts will be private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to. “Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 p.m. until 7 a.m.

While these settings will be turned on for all teens, 16 and 17-year-olds will be able to turn them off. Kids under 16 will need their parents’ permission to do so.

“The three concerns we’re hearing from parents are that their teens are seeing content that they don’t want to see or that they’re getting contacted by people they don’t want to be contacted by or that they’re spending too much time on the app,” said Naomi Gleit, head of product at Meta. “So teen accounts is really focused on addressing those three concerns.”

The announcement comes as the company faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In the past, Meta’s efforts at addressing teen safety and mental health on its platforms have been met with criticism that the changes don’t go far enough. For instance, while kids will get a notification when they’ve spent 60 minutes on the app, they will be able to bypass it and continue scrolling.

That’s unless the child’s parents turn on “parental supervision” mode, where parents can limit teens’ time on Instagram to a specific amount of time, such as 15 minutes.

With the latest changes, Meta is giving parents more options to oversee their kids’ accounts. Those under 16 will need a parent or guardian’s permission to change their settings to less restrictive ones. They can do this by setting up “parental supervision” on their accounts and connecting them to a parent or guardian.

Nick Clegg, Meta’s president of global affairs, said last week that parents don’t use the parental controls the company has introduced in recent years.

Gleit said she thinks teen accounts will create a “big incentive for parents and teens to set up parental supervision.”

“Parents will be able to see, via the family center, who is messaging their teen and hopefully have a conversation with their teen,” she said. “If there is bullying or harassment happening, parents will have visibility into who their teen’s following, who’s following their teen, who their teen has messaged in the past seven days and hopefully have some of these conversations and help them navigate these really difficult situations online.”

U.S. Surgeon General Vivek Murthy said last year that tech companies put too much on parents when it comes to keeping children safe on social media.

“We’re asking parents to manage a technology that’s rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage,” Murthy said in May 2023.

Meta Is Globally Banning Russian State Media on Its Apps, Citing ‘Foreign Interference’

Social media company Meta—the parent company of Facebook, Instagram, and WhatsApp—announced Monday that it will ban RT and other Russian state media from its apps worldwide, days after the State Department announced sanctions against Kremlin-coordinated news organizations.

“After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a statement provided to TIME.

Before the ban, RT had over 7 million followers on Facebook, while its Instagram account had over a million followers.

The move is an escalation of actions Meta announced in 2022, after Russia’s invasion of Ukraine, to limit the spread of Russian disinformation, which at the time included labeling and demoting posts with links to Russian state-controlled media outlets and demonetizing the accounts of those outlets and prohibiting them from running ads. The company also complied with E.U. and U.K. government requests to restrict access to RT and Sputnik in those territories. In response, in March 2022, Russia blocked access to Facebook and Instagram in the country. 

Meta’s latest actions come after Secretary of State Antony Blinken said in a press conference on Friday that the U.S. government has concluded Rossiya Segodnya and five of its subsidiaries, including RT, “are no longer merely firehoses of Russian Government propaganda and disinformation; they are engaged in covert influence activities aimed at undermining American elections and democracies, functioning like a de facto arm of Russia’s intelligence apparatus.” Sanctions unveiled Friday were imposed on RT’s parent company TV-Novosti as well as on Rossiya Segodnya and its general director Dmitry Kiselyov, and the State Department issued a notice “alerting the world to RT’s covert global activities.” Russian President Vladimir Putin’s spokesperson, Dmitry Peskov, told the Associated Press that the State Department’s allegations were “nonsense.”

Meta’s new global ban follows a similar YouTube global ban on Russian state-funded media channels, while TikTok and X (formerly Twitter) block access to RT and Sputnik in the E.U. and U.K.

At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

Inventor and futurist Ray Kurzweil, researcher and Brookings Institution fellow Chinasa T. Okolo, director of the U.S. AI Safety Institute (AISI) Elizabeth Kelly, and Cognizant CEO Ravi Kumar S discussed the transformative power of AI during a panel at a TIME100 Impact Dinner in San Francisco on Monday. During the discussion, which was moderated by TIME’s editor-in-chief Sam Jacobs, Kurzweil predicted that we will achieve Artificial General Intelligence (AGI), a type of AI that might be smarter than humans, by 2029.

“Nobody really took it seriously until now,” Kurzweil said about AI. “People are convinced it’s going to either endow us with things we’d never had before, or it’s going to kill us.”

Cognizant sponsored Monday’s event, which celebrated the 100 most influential people leading change in AI. The TIME100 AI spotlights computer scientists, business leaders, policymakers, advocates, and others at the forefront of big changes in the industry. Jacobs probed the four panelists—three of whom were named to the 2024 list—about the opportunities and challenges presented by AI’s rapid advancement.

Kumar discussed the potential economic impact of generative AI and cited a new report from Cognizant which says that generative AI could add more than a trillion dollars annually to the US economy by 2032. He identified key constraints holding back widespread adoption, including the need for improved accuracy, cost-performance, responsible AI practices, and explainable outputs. “If you don’t get productivity,” he said, “task automation is not going to lead to a business case stacking up behind it.”

Okolo highlighted the growth of AI initiatives in Africa and the Global South, citing the work of professor Vukosi Marivate from the University of Pretoria in South Africa, who has inspired a new generation of researchers within and outside the continent. However, Okolo acknowledged the mixed progress in improving the diversity of languages informing AI models, with grassroots communities in Africa leading the charge despite limited support and funding.

Kurzweil said that he was excited about the potential of simulated biology to revolutionize drug discovery and development. By simulating billions of interactions in a matter of days, he noted, researchers can accelerate the process of finding treatments for diseases like cancer and Alzheimer’s. He also provided a long-term perspective on the exponential growth of computational power, predicting a sharper so-called S-curve (a slow start, then rapid growth before leveling off) for AI disruption compared to previous technological revolutions.

Read more: The TIME100 Most Influential People in AI 2024

Kelly addressed concerns about AI’s potential for content manipulation in the context of the 2024 elections and beyond. “It’s going to matter this year, but it’s going to matter every year more and more as we move forward,” she noted. She added that AISI is working to advance the science to detect synthetically created content and authenticate genuine information.

Kelly also noted that lawmakers have been focusing on AI’s risks and benefits for some time, with initiatives like the AI Bill of Rights and the AI Risk Management Framework. “The president likes to use the phrase ‘promise and peril,’ which I think pretty well captures it, because we are incredibly excited about simulated biology and drug discovery and development while being aware of the flip side risks,” she said.

As the panel drew to a close, Okolo urged attendees, who included nearly 50 other past and present TIME100 AI honorees, to think critically about how they develop and apply AI and to try to ensure that it reaches people in underrepresented regions in a positive way.

“A lot of times you talk about the benefits that AI has brought, you know, to people. And a lot of these people are honestly concentrated in one region of the world,” she said. “We really have to look back, or maybe, like, step back and think broader,” she implored, asking leaders in the industry to think about people from Africa to South America to South Asia and Southeast Asia. “How can they benefit from these technologies, without necessarily exploiting them in the process?”

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

At TIME100 Impact Dinner, AI Leaders Talk Reshaping the Future of AI

TIME hosted its inaugural TIME100 Impact Dinner: Leaders Shaping the Future of AI, in San Francisco on Monday evening. The event kicked off a weeklong celebration of the TIME100 AI, a list that recognizes the 100 most influential individuals in artificial intelligence across industries and geographies and showcases the technology’s rapid evolution and far-reaching impact. 

TIME CEO Jessica Sibley set the tone for the evening, highlighting the diversity and dynamism of the 2024 TIME100 AI list. With 91 newcomers from last year’s inaugural list and honorees ranging from 15 to 77 years old, the group reflects the field’s explosive growth and its ability to attract talent from all walks of life.

Read More: At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

The heart of the evening centered around three powerful toasts delivered by distinguished AI leaders, each offering a unique perspective on the transformative potential of AI and the responsibilities that come with it.

Reimagining power structures

Amba Kak, co-executive director of the AI Now Institute, delivered a toast that challenged attendees to look beyond the technical aspects of AI and consider its broader societal implications. Kak emphasized the “mirror to the world” quality of AI, reflecting existing power structures and norms through data and design choices.

“The question of ‘what kind of AI we want’ is really an opening to revisit the more fundamental question of ‘what is the kind of world we want, and how can AI get us there?’” Kak said. She highlighted the importance of democratizing AI decision-making, ensuring that those affected by AI systems have a say in their deployment.

Kak said she drew inspiration from frontline workers and advocates pushing back against the misuse of AI, including nurses’ unions staking their claim in clinical AI deployment and artists defending human creativity. Her toast served as a rallying cry for a more inclusive and equitable AI future.

Amplifying creativity and breaking barriers

Comedian, filmmaker, and AI storyteller King Willonius emphasized AI’s role in lowering the bar for who can be creative and giving voice to underrepresented communities. Willonius shared his personal journey of discovery with AI-assisted music composition, illustrating how AI can unlock new realms of creative expression.

“AI doesn’t just automate—it amplifies,” he said. “It breaks down barriers, giving voices to those who were too often left unheard.” He highlighted the work of his company, Blerd Factory, in leveraging AI to empower creators from diverse backgrounds.

Willonius’ toast struck a balance between enthusiasm for AI’s creative potential and a call for responsible development. He emphasized the need to guide AI technology in ways that unite rather than divide, envisioning a future where AI fosters empathy and global connection.

Accelerating scientific progress

AMD CEO Lisa Su delivered a toast that underscored AI’s potential to address major global challenges. Su likened the current AI revolution to the dawn of the industrial era or the birth of the internet, emphasizing the unprecedented pace of innovation in the field.

She painted a picture of AI’s transformative potential across various domains, from materials science to climate change research, and said that she was inspired by AI’s applications in healthcare, envisioning a future where AI accelerates disease identification, drug development, and personalized medicine.

“I can see the day when we accelerate our ability to identify diseases, develop therapeutics, and ultimately find cures for the most important illnesses in the world,” Su said. Her toast was a call to action for leaders to seize the moment and work collaboratively to realize AI’s full potential while adhering to principles of transparency, fairness, and inclusion.

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

Uncertainty Is Uncomfortable, and Technology Makes It Worse. That Doesn’t Have to Be a Bad Thing

On July 19, 2024, a single-digit error in a software update from the cybersecurity company CrowdStrike grounded international airlines, halted emergency medical treatments, and paralyzed global commerce. The expansive network that had enabled CrowdStrike to access information from over a trillion events every day and prevent more than 75,000 security breaches every year had, ironically, introduced a new form of uncertainty of colossal significance. The impact of a seemingly minor error in the code could now be exponentially magnified by the network, unleashing the kind of global havoc the world witnessed that day.

The very mechanism that had reduced the uncertainty of regular cyber threats had concurrently increased the unpredictability of a rare global catastrophe—and with it, the deepening cracks in our relationship with uncertainty and technology.

Our deep-seated discomfort with uncertainty—a discomfort rooted not just in technology but in our very biology—was vividly demonstrated in a 2017 experiment where London-based researchers gave consenting volunteers painful electric shocks to the hand while measuring physiological markers of distress. Knowing there was only a 50-50 chance of receiving the shock agitated the volunteers far more than knowing the painful shock was imminent, highlighting how much more unsettling uncertainty can be compared to the certainty of discomfort.

This drive to eliminate uncertainty has long been a catalyst for technological progress and turned the wheels of innovation. From using fire to dispel the fear of darkness to mechanizing agriculture to guarantee food abundance, humanity’s innovations have consistently aimed to turn uncertainty into something controllable and predictable on a global scale.

Read More: Here’s Why Uncertainty Makes You So Miserable

But much like energy, uncertainty can be transformed but never destroyed. When we think we have removed it, we have merely shifted it to a different plane. This gives rise to the possibility of an intriguing paradox: With each technological advancement designed to reduce uncertainty, do we inadvertently introduce new uncertainties, making the world even more unpredictable?

 Automated algorithms have revolutionized financial trading at an astronomical scale by shattering human limits on speed, precision and accuracy. Yet, in the process of eliminating human error and decoding complex probabilities in foreign exchange trading, these systems have introduced new uncertainties of their own—uncertainties too intricate for human comprehension. What once plagued day-to-day trading with human-scale uncertainty has morphed into technology-scale risks that didn’t exist before. By lowering some forms of uncertainty, these automated algorithms have ultimately increased it. 

A striking example of this is algorithmic trading, where software is used to eradicate uncertainty and upgrade financial systems. It is, however, impossible to test every permutation of every pathway in a software decision tree, meaning that even the most sophisticated upgrades inevitably introduce new uncertainties. Subtle errors, camouflaged in labyrinthine webs of code, become imperceptible at the lightning speed of execution. In August 2012, when the NYSE’s Retail Liquidity Program went live, global financial services firm Knight Capital was equipped with a high-frequency trading algorithm. Unfortunately, an overnight glitch in the code was amplified to a disastrous degree, costing Knight Capital $440 million in just 30 minutes.

As technology becomes more sophisticated, it not only eradicates the uncertainty of time and distance from our everyday lives but also transforms how we experience uncertainty itself. An app informs you exactly when the bus you are waiting for will arrive, a check mark tells you when your friend has not only received but read your message, and a ding lets you know someone is waiting on your doorstep when you are on vacation on a different continent. This information is often incredibly useful. Yet, the same technology floods us with unsolicited, irrelevant details. Worse, it often captures our attention by delivering fragments of incomplete information: a partial news headline pops up on our phone, an alert from our home security system reports unusual activity on our property, a new friend request slides into our social media inbox. Resolving these uncertainties requires us to swipe, click, or watch, only to be bombarded with yet another stream of incomplete information. Instead of resolving uncertainty, the information often leaves us with more of it.

Rarely do we stop to ask ourselves if the kinds of frequent, small-scale uncertainties that modern technology is designed to eliminate are really so terrible in the first place. If we did, we might realize that human-scale uncertainties make us more resilient, revealing weaknesses we did not know we had.

Historical evidence suggests that eliminating uncertainty isn’t always beneficial. Angkor, the medieval capital of the ancient Khmer empire, became the largest pre-industrial city in the world partly because its population was able to tame the uncertainty of nature by creating an elaborate water management network. This system eliminated the unpredictability of monsoon rains, sustaining Angkor’s agrarian population, which grew to nearly a million. Yet this very system may also have contributed to the city’s collapse. When Angkor was struck by severe droughts and violent monsoons in the 14th and 15th centuries, its reliance on guaranteed water supplies left its people vulnerable to disaster.

The uncertainty paradox does not stem from innovation in itself. Innovating solutions for large-scale uncertainties has manifestly saved countless lives. Modern-day examples include sanitation technology that has helped to eradicate cholera in many parts of the world and the tuned mass damper (TMD) technology that protected the Taipei 101 skyscraper during a 7.4-magnitude earthquake in 2024. Instead, the uncertainty paradox seems to emerge when we seek to erase smaller-scale, everyday uncertainties entirely from our lives. This can make us more vulnerable, as we forget how to deal with unexpected uncertainty when it finally strikes. One solution is to deliberately create opportunities to experience and rehearse dealing with uncertainty. Hong Kong’s resilience in the face of intense typhoons stems from regular exposure to monsoon rains—preparing the city to withstand storms that could devastate other parts of the world.

Netflix engineers Yury Izrailevsky and Ariel Tseitlin captured this idea in their creation of “Chaos Monkey,” a tool that deliberately introduces system failures so engineers can identify weaknesses and build better recovery mechanisms. Inspired by this concept, many organizations now conduct “uncertainty drills” to prepare for unexpected challenges. However, while drills prepare us for known scenarios, true resilience requires training our reactions to uncertainty itself—not just our responses to specific situations. Athletes and Navy SEALs incorporate deliberate worst-case scenarios in their training to build mental fortitude and adaptability in the face of the unknown.
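Netflix’s actual Chaos Monkey is an open-source tool that terminates live cloud instances; the snippet below is only a toy sketch of the underlying idea, using hypothetical service names and probabilities, in which failures are injected at random so that recovery paths get exercised before a real outage forces the issue:

```python
# Toy sketch of chaos-style failure injection. This is not Netflix's Chaos
# Monkey; the service names and probability are hypothetical.
import random

SERVICES = ["checkout", "search", "recommendations", "billing"]  # hypothetical

def inject_failures(services: list[str], probability: float = 0.25) -> list[str]:
    """Randomly 'kill' services so operators can check that the rest degrade gracefully."""
    killed = [s for s in services if random.random() < probability]
    for service in killed:
        print(f"[chaos drill] simulating outage of '{service}'")
    return killed

if __name__ == "__main__":
    downed = inject_failures(SERVICES)
    surviving = [s for s in SERVICES if s not in downed]
    # In a real drill, monitoring would confirm that alerts fire for the downed
    # services and that the surviving ones absorb the load.
    print(f"[chaos drill] still running: {surviving}")
```

The specific failures matter less than the habit of rehearsing recovery, which is the essay’s broader point about deliberately practicing uncertainty.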

The relationship between uncertainty and technology is like an Ouroboros: we create technology to eliminate uncertainty, yet that technology generates new uncertainties that we must eliminate all over again. Rather than trying to break this cycle, the solution may be paradoxical: to make the world feel more certain, we might need to embrace a little more uncertainty every day.

Regulating AI Is Easier Than You Think

Female engineer inspecting wafer chip in laboratory

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template on how to do just that.


In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy but slow nuclear weapons proliferation by controlling access to the raw materials—namely weapons-grade uranium and plutonium—that underpin them. The risk has been managed through international institutions such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world’s electricity, and only nine countries possess nuclear weapons.

Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips that are needed to train the world’s most advanced AI models. Business leaders and even the U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.

The most advanced AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers where they churn on data for months to train the most capable AI models. These advanced chips are difficult to produce, the supply chain is tightly controlled, and large numbers of them are needed to train AI models. 

Governments can establish a regulatory regime where only authorized computing providers are able to acquire large numbers of advanced chips in their data centers, and only licensed, trusted AI companies are able to access the computing power needed to train the most capable—and most dangerous—AI models. 

This may seem like a tall order. But only a handful of nations are needed to put this governance regime in place. The specialized computer chips used to train the most advanced AI models are only made in Taiwan. They depend on critical technology from three countries—Japan, the Netherlands, and the U.S. In some cases, a single company holds a monopoly on key elements of the chip production supply chain. The Dutch company ASML is the world’s only producer of extreme ultraviolet lithography machines that are used to make the most cutting-edge chips.

Read More: The 100 Most Influential People in AI 2024

Governments are already taking steps to govern these high-tech chips. The U.S., Japan, and the Netherlands have placed export controls on their chip-making equipment, restricting their sale to China. And the U.S. government has prohibited the sale of the most advanced chips—which are made using U.S. technology—to China. The U.S. government has also proposed requirements for cloud computing providers to know who their foreign customers are and report when a foreign customer is training a large AI model that could be used for cyberattacks. And the U.S. government has begun debating—but not yet put in place—restrictions on the most powerful trained AI models and how widely they can be shared. While some of these restrictions are about geopolitical competition with China, the same tools can be used to govern chips to prevent adversary nations, terrorists, or criminals from using the most powerful AI systems.

The U.S. can work with other nations to build on this foundation to put in place a structure to govern computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, training AI models, and the trained models that are the result of this production cycle. 

Japan, the Netherlands, and the U.S. can help lead the creation of a global governance framework that permits these highly specialized chips to only be sold to countries that have established regulatory regimes for governing computing hardware. This would include tracking chips and keeping account of them, knowing who is using them, and ensuring that AI training and deployment is safe and secure.

But global governance of computing hardware can do more than simply keep AI out of the hands of bad actors—it can empower innovators around the world by bridging the divide between computing haves and have-nots. Because the computing requirements to train the most advanced AI models are so intense, the industry is moving toward an oligopoly. That kind of concentration of power is not good for society or for business.

Some AI companies have in turn begun publicly releasing their models. This is great for scientific innovation, and it helps level the playing field with Big Tech. But once the AI model is open source, it can be modified by anyone. Guardrails can be quickly stripped away.

The U.S. government has fortunately begun piloting national cloud computing resources as a public good for academics, small businesses, and startups. Powerful AI models could be made accessible through the national cloud, allowing trusted researchers and companies to use them without releasing the models on the internet to everyone, where they could be abused.  

Countries could even come together to build an international resource for global scientific cooperation on AI. Today, 23 nations participate in CERN, the international physics laboratory that operates the world’s most advanced particle accelerator. Nations should do the same for AI, creating a global computing resource for scientists to collaborate on AI safety, empowering scientists around the world.

AI’s potential is enormous. But to unlock AI’s benefits, society will also have to manage its risks. By controlling the physical inputs to AI, nations can securely govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.

As the Electric Vehicle Industry Grows Globally, Beijing Wants Chinese EV Tech to Stay at Home

Robotic arms on the assembly line at the Zhejiang Leapmotor Technology Co. production facility in Jinhua, Zhejiang province, China, on June 23, 2024.

China has strongly advised its carmakers to make sure advanced electric vehicle technology stays in the country, people familiar with the matter said, even as they build factories around the world to escape punitive tariffs on Chinese exports.


Beijing is encouraging Chinese automakers to export so-called knock-down kits to their foreign plants, the people said, meaning key parts of a vehicle would be produced domestically and then sent for final assembly in their destination market. 

The instructions come as companies from BYD Co. to Chery Automobile Co. firm up plans to build factories everywhere from Spain to Thailand and Hungary as their innovative and affordable EVs make inroads in foreign markets.

Read More: Why Biden Is Taking a Hard Line on Chinese EVs

China’s Ministry of Commerce held a meeting in July with more than a dozen automakers, who were also told they shouldn’t make any auto-related investments in India, said the people, who asked not to be identified because the matters discussed are private. The guidance is another attempt to safeguard the know-how of China’s EV industry and mitigate regulatory risks.

In addition, carmakers wanting to invest in Turkey should first notify the Ministry of Industry and Information Technology, which oversees China’s EV industry, as well as the Chinese embassy in Turkey.

Representatives from the Ministry of Commerce, or MOFCOM, didn’t respond to a request for comment.

China’s directive comes at a time when most major Chinese carmakers are looking to localize manufacturing so as to avoid tariffs on Chinese-made EVs. MOFCOM guidelines demanding that key production remain within China could hurt automakers’ efforts to globalize as they search for new customers to offset fierce competition and sluggish sales at home that are cutting into their bottom lines.

It could also come as a blow to those European nations wooing Chinese carmakers in the hopes their presence will bring jobs and a local economic boost. BYD is planning on building a factory in Turkey, for example, that’s expected to have an annual capacity of 150,000 cars and employ up to 5,000 people.

During the meeting, MOFCOM noted that the countries inviting Chinese automakers to build factories are usually those enacting or considering trade barriers against Chinese vehicles. Officials told attendees that manufacturers shouldn’t blindly follow trends or believe such calls for investment from foreign governments, according to the people.

Several Chinese companies have already begun opening plants in the European Union to avoid duties. But Valdis Dombrovskis, an executive vice president of the European Commission, warned recently that such moves would only work if the firms meet rules-of-origin requirements that dictate a minimum level of value must be created in the EU.

“How much of the value added is going to be created in the EU, how much of the know-how is going to be in the EU? Is it just an assembly plant or a car manufacturing plant? It’s quite a substantial difference,” Dombrovskis told the Financial Times last month.

Brazil, Spain

In Brazil, BYD and Great Wall Motor Co. have said explicitly they aim to increase the share of locally produced and locally sourced components in coming years. That’s aimed at meeting local component requirements of roughly 50% of a product in order to export to other Latin American countries without tariffs, based on Brazil’s trade agreements with them.

Turkish politicians said in July that BYD has agreed to construct a $1 billion plant in the west of the country. Any new factory is expected to improve BYD’s access to the European Union, because Turkey has a customs-union agreement with the bloc. Turkey in June introduced a 40% tariff on vehicle imports from China.

BYD declined to comment.

In Spain, Chery Automobile has a partnership with a local firm to reopen a former Nissan Motor Co. plant in Barcelona. The Spanish plant will assemble cars from kits that have been partially “knocked down,” according to Chery.

Tensions between China and India meanwhile have remained elevated since a deadly clash broke out over a stretch of border in the Himalayas between the two nuclear-armed neighbors in 2020. 

Chinese state-owned manufacturer SAIC Motor Corp., which controlled MG Motor India, was investigated over financial irregularities in 2022, Bloomberg reported. Last year, SAIC diluted its stake in the Indian MG operation, with its ownership forecast to be trimmed to 38-40% over time, according to one local media report.

Chinese EV stocks pared early gains Thursday with SAIC Motor falling more than 1% in Shanghai and Geely Automobile Holdings Ltd. and BYD slightly down in Hong Kong.

There’s Another Important Message in Taylor Swift’s Harris Endorsement

Opening Night of Taylor Swift | The Eras Tour

Minutes after the presidential debate ended on Tuesday, Taylor Swift mobilized her enormous fanbase in support of Kamala Harris by endorsing her in an Instagram post that quickly garnered 8 million likes. Swift’s decision wasn’t altogether surprising, given that she supported Joe Biden in the 2020 election and recently offered hints, in true Taylor fashion, that she was headed in this direction.


But what was especially notable in her Instagram post was that it spent as much time praising Kamala Harris as it did warning the public about the dangers of AI.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation,” Swift wrote. “It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” 

Swift was referring to a post from Trump in August on Truth Social, his social media site, which appeared to show the superstar and her fans endorsing him. He captioned the photo with: “I accept.” But the images looked glossy and had strange visual details, because they were created with AI.

Many viewers of the images were able to immediately identify them as fabricated. And following Swift’s post, it appears that her response refuting the images had a greater impact than the AI images themselves. But the incident could be a harbinger of plenty of AI-driven conflict in elections for years to come. 

“We are already in a bit of a crisis where a lot of American voters don’t trust elections,” says Craig Holman, a government affairs lobbyist at the nonprofit Public Citizen. “If we’re going to have this type of campaign going on all around us, feeding us information that doesn’t exist, trying to influence our votes based on that—the entire integrity of elections is very much at risk.”

Deepfakes proliferate around celebrities, elections

During the 2020 presidential election, AI tools were still largely rudimentary. In the time since, the capabilities of these tools have improved at an astounding clip. Users around the world can now use AI to create realistic images, video, and audio. Fake social media profiles that spread propaganda can be created cheaply; political parties can use AI to quickly send personalized messages to thousands of potential voters; and fake event photography and even voicemails that sound like celebrities can be put together easily.


Some of these tools have been used in political influence campaigns. Last year, the RNC released an AI-generated video depicting a future dystopia if Joe Biden were to be re-elected. Elon Musk shared an AI-generated image of Kamala Harris in Soviet-style garb, writing on X that she wants to be a “communist dictator from day one.” A fake video of a Chicago mayoral candidate making inflammatory comments about police shootings was released on the eve of that election in February and watched thousands of times on X before it was taken down. And during the Indian election this year, deepfakes were deployed en masse to create misleading videos of Bollywood celebrities and ads with Hindu supremacist language.

Read More: As India Votes, Modi’s Party Misleads Online

Taylor Swift has been a frequent subject of AI efforts, given her massive celebrity. Early this year, AI-generated pornographic and sometimes violent images of her were widely circulated on social media. The images helped spur U.S. legislation aimed at protecting deepfake victims, including the DEFIANCE Act, which allows victims to sue people who create, share, or receive such images and which passed the Senate in July. AI companies also scrambled to respond: Microsoft said that it was “continuing to investigate these images” and added that it had “strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.”

And Swift’s involvement is part of a growing backlash against AI from some of the world’s most prominent cultural figures. Beyonce recently spoke out against AI misinformation in a GQ interview, saying: “We have access to so much information – some facts, and some complete bullshit disguised as truth…Just recently, I heard an AI song that sounded so much like me it scared me. It’s impossible to truly know what’s real and what’s not.” Meanwhile, earlier this year, Scarlett Johansson blasted OpenAI for releasing a chatbot voice seemingly modeled upon hers.

How Trump’s deepfake move ultimately backfired

Trump has had a long-standing fascination with Swift, including calling her “fantastic” in 2012 and “unusually beautiful” in 2023. In February, Trump took credit for some of Swift’s success, posting on Truth Social that if she were to endorse Joe Biden, it would be “disloyal to the man who made her so much money.” 

But when Trump decided to post the deepfakes on Truth Social in August, his attempt at collecting Swifties appeared to have backfired. The post allowed Swift to frame her endorsement of Harris as a moral obligation, as if she had no other choice but to respond to misinformation. It also sucked up all the oxygen that Trump hoped to gain on debate night: by Wednesday morning, “Taylor Swift endorsement” was the second trending topic on Google, trailing only “who won the debate.”

In her early years of fame, Swift refrained from speaking about politics, telling TIME in 2012 that she didn’t believe she knew “enough yet in life to be telling people who to vote for.” Over the last six years, she’s waded into politics sparingly, but with purpose, always giving strong justifications for her statements. In 2020, for example, she accused Trump of “stoking the fires of white supremacy and racism your entire presidency.” This year, Swift remained silent on politics until last night’s endorsement, garnering criticism from many people who urged her to use her unrivaled platform to make a difference. 

Read More: Watch Tim Walz React to Endorsement From ‘Fellow Cat Owner’ Taylor Swift

It’s unclear what impact these efforts have had on voters: many researchers argue that voters are more discerning than people fear, and that the potential influence of AI misinformation on elections is overblown. 


However, Holman, at Public Citizen, says that those studies relied upon outdated AI tools. He points to a deepfakes database created by researchers at Northwestern earlier this year, which has documented hundreds of political deepfakes, many of which have resulted in real-world harms, the researchers found. 

“We’re in a whole new era right now,” Holman says. “Technology has become so convincing, so persuasive, and so indistinguishable from reality, that I am quite convinced it’s going to have much more serious ramifications on future election cycles.”

What Should Be the AI Industry’s Top Focus? 5 Leaders Weigh in on the Next Year

Ahead of Dreamforce 2024, taking place Sept. 17-19, five event speakers and leaders of the artificial intelligence industry share their thoughts on the most important priorities for the near future.

Edward Norton, Co-Founder and Chief Strategy Officer of Zeck

From a high level, we need something akin to the medical Hippocratic oath, which binds doctors to do no harm. It’s for others to decide whether that’s regulation or something else, but we need a framing commitment.


I often come at things from a narrative place, and I’ve always been struck by writer Isaac Asimov’s Robot series, in which he weaves meditations around how societal principles and protections are included in the laws of robotics on an almost engineered basis. Similarly, we need someone to assert a foundational principle for all of us that AI shouldn’t do harm.

On balance, at the phase we’re in right now, I see far more benefits than any actual realized negatives. I think what’s going on in medicine alone should give people a lot of enthusiasm for the positive potential in AI. That’s the field in which I’ve seen things I think are truly astonishing, and are going to lead to real revolutions in human health and quality of life for a lot of people. 

Even just AI in radiology: the capacity of AI and machine learning to just do a much, much better job than human interpretation of cancer screening. And instead of turning to treatments that have low efficacy because we’re throwing a dart at the wall, we’re starting to see the capacity of AI to create bespoke, curated, data-driven conclusions about what will benefit an individual person vs. a population. 

The diagnostic potential in AI, or the interface between diagnosis and treatments that will have efficacy, combined with genetics—it just really starts to get into a world that, to me, is really positive.

But we need an ethical baseplate to do no harm. How that gets actually structured and expressed, both on an engineered, technological level and a societal, governing level, is going to be one of the really big questions and challenges of the next few decades.


Jack Hidary, CEO of Sandbox AQ

For the past 20 months, generative AI and large language models (LLMs) have dominated the mindshare of leaders and driven countless innovations. However, C-suite execs and AI experts need to start looking beyond the capabilities—and limitations—of LLMs and explore the larger, more profound impact that large quantitative models (LQMs) will have on their organization and industry.

While LLMs are centered on our digital world—creating content or deriving insights from textual or visual data—LQMs drive impact on the physical world and the financial-services sector. LQMs leverage physics-based first principles to generate new products in sectors such as biopharma, chemicals, energy, automotive, and aerospace. They can also analyze large volumes of complex numerical data to optimize investment portfolios and manage risk exposure for financial companies.

With LQMs, breakthroughs that were seemingly impossible 24 months ago are now bearing fruit, transforming industries and pushing the boundaries of what is possible with AI. 

Enterprises are realizing they need to implement LQMs and LLMs in order to extract maximum benefits. If CEOs focus solely on LLM-powered AI solutions for customer service, marketing, document creation, digital assistants, etc., they will likely fall behind competitors who are leveraging LQMs to transform processes, create innovative new products, or solve computationally complex problems.


Cristóbal Valenzuela, Co-Founder and CEO of Runway

Over the course of the next year, our industry needs to reset the way we talk about AI to both manage expectations of what progress looks like and bring bright, creative minds with us along the way. 

This will require a collective effort to communicate our vision clearly and maintain transparency around our advancements, and it will be important to do this in a way that does not create fears or make these products out to be more than just that—products.

At Runway, we’re building significantly more advanced, accessible, and intuitive technologies and tools for our millions of creative users around the world. Our successes and future growth are driven by the strong community we’ve built through our work with artists and creatives—understanding their needs and how they approach their crafts will always be the priority.

You can see this manifested through initiatives like our annual AI Film Festival, our Gen:48 short-film competition, and our new Research and Art (RNA) community Q&A sessions.

These have all provided a platform for artists, which in turn has driven our growth and mission of empowering these artists.


Sasha Luccioni, AI and climate lead of Hugging Face

I think that we should be focusing on transparency and accountability, and communicating AI’s impacts on the planet, so that both customers and members of the community can make more informed choices. 

We don’t really have good ways of measuring the sustainability or the labor impact of AI. And what would be useful is to develop new ways of reflecting on how switching from one type of AI tool or approach to another changes the environmental impact.

For example, Google switched from good old-fashioned AI to generative AI summaries for web search. I think that’s where customers really want more information. They want to know: What do these AI summaries represent in terms of societal and planetary impacts? In my research, we found that switching from extractive AI to generative AI actually comes with 10 to 20 times more energy usage for the same request. 

We can’t opt out of new technology—and yet we don’t know how many more computers are needed; how much more energy or water is needed; how many more data centers they have to build in order for people to be able to get these AI summaries that they didn’t really ask for in the first place.

That’s where the transparency is missing, because a lot of people are mindful of the climate. And so I think that companies have a responsibility to their customers to say, “This is how much more energy you’re using.”


Robert Wolfe, Co-founder of Zeck

AI has the potential to transform efficiency: it gives us the opportunity to both save people time and help create audience-specific content.

I am seeing it firsthand across several companies that I’ve been lucky to work with. For example, think about a GoFundMe campaign. If AI can help you generate your narrative in a way that makes your audience more passionate about your cause, that could be monumental for someone raising money for their neighbor. 

The No. 1 angst amongst our customers at Zeck is creating infographics, charts, and graphs. Such a pain. There is not a single person in the world who likes creating charts and graphs. But Zeck AI looks at your table or data and suggests, “This may look good as a pie chart,” and creates that pie chart for you. You can choose to accept it, iterate on it, or decline it. And Zeck AI will come up with red flags as you build your narrative that you wouldn’t have thought of. Just imagine the time savings for someone who typically spends hours upon hours building everything from scratch. Now it takes minutes. Mindblowing.

I am certainly not saying that AI should replace people, but AI will definitely make everyone more efficient.

The Long Road to Genuine AI Mastery

In the early 1970s, programming computers involved punching holes in cards and feeding them to room-size machines that would produce results through a line printer, often hours or even days later. 

This is what computing had looked like for a long time, and it was against this backdrop that a team of 29 scientists and researchers at the famed Xerox PARC created the more intimate form of computing we know today: one with a display, a keyboard, and a mouse. This computer, called Alto, was so bewilderingly different that it necessitated a new term: interactive computing. 


Alto was viewed by some as absurdly extravagant because of its expensive components. But fast-forward 50 years, and multitrillion-dollar supply chains have sprung up to transform silica-rich sands into sophisticated, wondrous computers that live in our pockets. Interactive computing is now inextricably woven into the fabric of our lives.

Silicon Valley is again in the grip of a fervor reminiscent of the heady days of early computing. Artificial general intelligence (AGI), an umbrella term for the ability of a software system to solve any problem without specific instructions, now feels like a tangible revolution almost at our doorstep.

The rapid advancements in generative AI inspire awe, and for good reason. Just as Moore’s Law charted the trajectory of personal computing and Metcalfe’s Law predicted the growth of the internet, an exponential principle underlies the development of generative AI. The scaling laws of deep learning postulate a direct correlation between the capabilities of an AI model and the scale of both the model itself and the data used to train it.

Over the past two years, the leading AI models have undergone a staggering 100-fold increase in both dimensions, with model sizes expanding from 10 billion parameters trained on 100 billion words to 1 trillion parameters trained on over 10 trillion words.
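For readers who want the general shape of such a scaling law, results in this area are often summarized with a power-law fit along the following lines; the symbols and exponents are generic placeholders rather than figures reported in this essay:

```latex
% Generic form of a neural scaling law: expected loss L as a function of
% parameter count N and training tokens D. E, A, B, \alpha, and \beta are
% constants fitted per study; none of these values come from the essay above.
L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Under a fit of this kind, growing the parameter count N and the training data D together pushes the loss toward its irreducible floor E, which is the informal sense in which capability tracks the scale of both the model and its data.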

The results are evocative and useful. But the evolution of personal computing offers a salutary lesson. The trajectory from the Alto to the iPhone was a long and winding path. The development of robust operating systems, vibrant application ecosystems, and the internet itself were all crucial milestones, each of which relied on other subinventions and infrastructure: programming languages, cellular networks, data centers, and the creation of security, software, and services industries, among others. 

AI benefits from much of this infrastructure, but it’s also an important departure. For instance, large language models (LLMs) excel in language comprehension and generation, but struggle with reasoning abilities, which are crucial for tackling complex, multistep tasks. Yet solving this challenge may necessitate the creation of new neural network architectures or new approaches for training and using them, and the rate at which academia and research are generating new insights suggests we are in the early innings.

The training and serving of these models, something that we at Together AI focus on, is both a computational wonder and a quagmire. The bespoke AI supercomputers, or training clusters, created mostly by Nvidia, represent the bleeding edge of silicon design. Comprising tens of thousands of high-performance processors interconnected via advanced optical networking, these systems function as a unified supercomputer. However, their operation comes at a significant cost: they consume an order of magnitude more power and generate an equivalent amount of heat compared with traditional CPUs. The consequences are far from trivial. A recent paper published by Meta, detailing the training process of the Llama 3.1 model family on a 16,000-processor cluster, revealed a striking statistic: the system was inoperable for a staggering 69% of its operational time.

As silicon technology continues to advance in accordance with Moore’s Law, innovations will be needed to optimize chip performance while minimizing energy consumption and mitigating the attendant heat generation. By 2030, data centers may undergo a radical transformation, necessitating fundamental breakthroughs in the underlying physical infrastructure of computing.

Already, AI has emerged as a geopolitically charged domain, and its strategic significance is likely to intensify, potentially becoming a key determinant of technological preeminence in the years to come. As it improves, the transformative effects of AI on the nature of work and the labor market are also poised to become an increasingly contentious societal issue.

But a lot remains to be done, and we get to shape our future with AI. We should expect a proliferation of innovative digital products and services that will captivate and empower users in the coming years. In the long run, artificial intelligence will bloom into superintelligent systems, and these will be as inextricably woven into our lives as computing has managed to become. Human societies have absorbed new disruptive technologies over millennia and remade themselves to thrive with their aid—and artificial intelligence will be no exception.

E.U. Court Rules Against Apple in Case Over $14.4 Billion in Taxes Ireland Never Collected

The logo of Apple Inc is being shown in Hangzhou, China, on August 6, 2024.

Apple Inc. lost its court fight over a €13 billion ($14.4 billion) Irish tax bill, in a boost to the European Union’s crackdown on special deals doled out by nations to big companies.

The E.U.’s Court of Justice in Luxembourg backed a landmark 2016 decision that Ireland broke state-aid law by giving the iPhone maker an unfair advantage.


The court ruled on Tuesday that a lower court win for Apple should be overturned, because judges incorrectly decided that the commission’s regulators had made mistakes in their assessment.

The ruling is a boost for E.U. antitrust chief Margrethe Vestager, whose mandate in Brussels is about to end after two terms.

In 2016, Vestager sparked outrage across the Atlantic when she homed in on Apple’s tax arrangements. She claimed that Ireland granted illegal benefits to the Cupertino, California-based company that enabled it to pay substantially less tax than other businesses in the country, over many years. 

She ordered Ireland to claw back the €13 billion sum, which amounts to about two quarters of Mac sales globally. The money has been sitting in an escrow account pending a final ruling.

“We are disappointed with today’s decision as previously the general court reviewed the facts and categorically annulled this case,” an Apple spokesperson said.

At 4:16 a.m. New York time, Apple shares were down 1.3% at $218 in premarket trading on Tuesday.

While the ruling is a negative outcome for Ireland, which had maintained that it gave no special tax advantages to Apple or other tech companies to get them to set up there, the case took so long to reach completion that it is unlikely to have much impact on the country, which is now a well-established hub for the European headquarters of a large number of major tech companies.

Chief Executive Officer Tim Cook previously blasted the E.U. move as “total political crap.” The U.S. Treasury has also weighed in, saying that the E.U. was making itself a “supra-national tax authority” that could threaten global tax reform efforts. Then-President Donald Trump said Vestager “hates the United States” because “she’s suing all our companies.”

The Apple decision was by far the biggest in Vestager’s decade-long campaign for tax fairness, which has also targeted the likes of Amazon.com Inc. and carmaker Stellantis NV’s Fiat. Vestager has argued that selective tax benefits to big firms amount to illegal state aid, which is banned in the E.U.

At issue in Tuesday’s case were two tax deals with the Irish government in 1991 and 2007. Those agreements allowed Apple to mis-attribute Irish profits to a “head office” that “only existed on paper,” according to the E.U.’s assessment. In turn, this resulted in a massive reduction in tax bills. The E.U.’s antitrust arm argued the break Apple received was anti-competitive, amounting to illegal state aid.

The case landed at the E.U.’s top court after Vestager contested Apple’s win at a lower tribunal in 2020. Judges at the bloc’s General Court found E.U. state aid watchdogs made several errors.

Since then, the Dane has suffered several more tax defeats but she took comfort from the fact that judges backed her approach to using state-aid rules to attack unfair arrangements.

Apple was one of the first U.S. tech giants to set up in Ireland, drawn by the country’s deliberately low corporate tax rate in the 1980s and early 1990s, which was designed to attract foreign investment. The company set up its European headquarters outside the southern city of Cork in 1980 and now employs around 6,000 people in the country.

In the years since, many of the tax loopholes once available have been closed, and in 2021 Ireland signed up to OECD measures that include a global minimum rate of 15% for multinational corporations.

Google Loses Appeal in E.U. Antitrust Case Over Shopping Recommendations in Search Results

The Google logo is displayed in front of company headquarters in Mountain View, California, on August 13, 2024.

LONDON — Google lost its final legal challenge on Tuesday against a European Union penalty for giving its own shopping recommendations an illegal advantage over rivals in search results, ending a long-running antitrust case that came with a whopping fine.

The European Union’s Court of Justice upheld a lower court’s decision, rejecting the company’s appeal against the 2.4 billion euro ($2.7 billion) penalty from the European Commission, the 27-nation bloc’s top antitrust enforcer.


“By today’s judgment, the Court of Justice dismisses the appeal and thus upholds the judgment of the General Court,” the court said in a press release summarizing its decision.

Google didn’t respond immediately to a request for comment.

Read More: What Google’s Antitrust Defeat Means for AI

The commission’s original decision in 2017 accused the Silicon Valley giant of unfairly directing visitors to its own Google Shopping service to the detriment of competitors. It was one of three multibillion-euro fines that the commission imposed on Google in the previous decade as Brussels started ramping up its crackdown on the tech industry.

Google made changes to comply with the commission’s decision requiring it to treat competitors equally. The company started holding auctions for shopping search listings that it would bid for alongside other comparison shopping services.

At the same time, the company appealed the decision to the courts. But the E.U. General Court, the tribunal’s lower section, rejected its challenge in 2021 and the Court of Justice’s adviser later recommended rejecting the appeal.

European consumer group BEUC hailed the court’s decision, saying it shows how the bloc’s competition law “remains highly relevant” in digital markets.

“Google harmed millions of European consumers by ensuring that rival comparison shopping services were virtually invisible,” director general Agustín Reyna said. “Google’s illegal practices prevented consumers from accessing potentially cheaper prices and useful product information from rival comparison shopping services on all sorts of products, from clothes to washing machines.”

Google is still appealing the other two E.U. antitrust penalties, which involved its Android mobile operating system and AdSense advertising platform. The company was dealt a setback in the Android case when the E.U. General Court upheld the commission’s 4.125 billion euro fine in a 2022 decision. Its initial appeal against a 1.49 billion euro fine in the AdSense case has yet to be decided.

Those three cases foreshadowed expanded efforts by regulators worldwide to crack down on the tech industry. The E.U. has since opened more investigations into Big Tech companies and drafted new laws to clean up social media platforms and regulate artificial intelligence.

Google is now facing particular pressure over its lucrative digital advertising business. In a federal antitrust trial that began Monday, the U.S. Department of Justice is alleging the company holds a monopoly in the “ad tech” industry.

British competition regulators accused Google last week of abusing its dominance in ad tech while the E.U. is carrying out its own investigation.

Australia to Set New Age Limits for Social Media to Protect Children’s Mental Health

Close up of hand using smartphone in the dark

Australia’s government will work with states and territories to legislate new age limits for social media websites, as part of a push to protect children’s mental health and shield them from inappropriate content online.

Prime Minister Anthony Albanese will introduce the new laws before an election due within the next nine months, saying parents were working “without a map” to try to tackle the mental health consequences triggered by social media.


“No generation has faced this challenge before,” Albanese will say in a speech on Tuesday, according to excerpts provided by his office in advance. “Too often, there’s nothing social about social media—taking kids away from real friends and real experiences.”

No specific age limit has been set yet for social media, with the government already trialling age assurance technology to restrict children’s access to inappropriate content online, including pornography. The government is also still consulting on how the ban would work in practice.

Albanese told Australian Broadcasting Corp. on Tuesday that the government was considering an upper age limit of 14 to 16 years for the ban. 

“We’re looking at how you deliver it. This is a global issue that governments around the globe are trying to deal with,” he said. 

A June survey by Essential Media found that 68% of Australians supported an age limit for social media, with only 15% opposed.

Since coming to power in May 2022, the center-left Labor government has taken several steps to try to crack down on problems associated with harmful content online. In the first half of 2024, it took social media platform X to court in an attempt to force it to remove footage of a violent terrorist attack in Sydney.
