
6 Questions About the Deadly Exploding Pager Attacks in Lebanon, Answered

Lebanon's Health Ministry calls for blood donations after exploding pagers

When thousands of pagers and other wireless devices simultaneously exploded across Lebanon and parts of Syria this week, killing at least 15 people and injuring thousands more, it exposed what one Hezbollah official described as the “biggest security breach” the Iran-backed militant group has experienced in nearly a year of war with Israel. In a period replete with violent attacks across the region—from Israel’s bombardment of the Gaza Strip to the targeted assassinations of militant leaders in Iran and Lebanon—this was perhaps the most sophisticated and daring one yet.

Hezbollah confirmed that eight of its fighters were killed in the blasts that took place on Tuesday, according to the BBC. Further such explosions, this time involving two-way radios, were reported on Wednesday. Civilians haven’t been spared from the onslaught. At least two children were killed in Tuesday’s blasts, according to the country’s health minister, and thousands of others were wounded, some critically. Iran’s ambassador to Lebanon lost an eye as a result of one of the blasts, according to the New York Times.

Officials in the U.S. and elsewhere have left little doubt as to who might be responsible. Hezbollah and Lebanese officials quickly pointed to Israel, which in addition to waging its ongoing war with Hamas in Gaza has been exchanging near-daily blows with Hezbollah across its northern border with Lebanon since Oct. 7. Earlier this week, the Israeli government announced it was expanding its war aims to include the return of residents who were evacuated from towns along the country’s northern frontier in the immediate aftermath of Oct. 7—a goal that the country’s defense minister Yoav Gallant said would be achieved through “military action.” Days earlier, Lebanese residents on the other side of the border received Israeli military leaflets ordering them to leave the area. The Israeli military has since described their distribution as an “unauthorized action,” and said that no evacuation is underway.

Still, expert observers warn that this attack, and any retaliation that might follow it, could raise the prospect of a wider war breaking out. Here are six of the biggest questions—and answers—that remain.

How were the explosions triggered?

Hezbollah’s widespread use of pagers—hardly considered a high-tech form of communication by most standards—was primarily a security precaution. The militant group had reportedly ordered its members to forgo using mobile phones earlier this year due to concerns that they could be more easily tracked. In their place they were given AR-924 pagers, thousands of which were sourced from a Taiwan-based brand called Gold Apollo. Although the company confirmed it had licensed the use of its brand for these pagers, it denied playing any role in their manufacturing, which it said was done by a Budapest-based firm called BAC Consulting.

Footage from one of the blasts—which TIME was unable to independently verify, but which was deemed credible by the BBC—showed the moment one of these pagers exploded, emitting smoke and causing the person who appeared to be carrying it to fall to the floor.

Experts who spoke with TIME say that this wasn’t a cyberattack. Rather, it was likely the result of an infiltration in the supply chain, which makes how the pagers were manufactured and who was involved all the more critical. “The explosions were likely triggered by pre-implanted explosives, possibly activated via a radio signal, as simple as the paging system itself,” says Lukasz Olejnik, an independent researcher and consultant in cybersecurity and privacy. “The supply chain was likely compromised at some point, either in the factory or during delivery.”

While such an operation would have been difficult to execute, it isn’t beyond the capabilities of a country like Israel. “Israel is obviously still the master of intelligence in the region,” Andreas Krieg, an associate professor for security studies at King’s College London, tells TIME, noting that “it has a network of intelligence and information collection that is unparalleled.”

What is Israel saying about it?

Israel has a long history of pulling off complex attacks of the kind seen in Lebanon. But as with the recent assassination of Hamas leader Ismail Haniyeh in Iran, it rarely takes responsibility for them. When TIME inquired about Israel’s involvement in the pager explosions, an Israeli military spokesperson declined to confirm or deny whether the country was behind the attack, offering only a two-word response: “no comment.”

But experts say that all obvious signs point to Israeli involvement. “No one else is benefiting from it, but Israel, in terms of paralyzing Hezbollah,” says Krieg, noting that the militant group has been the most strategic threat to Israel for at least the past three decades. “There are loads of people who don’t like Hezbollah in the region, including Arab countries,” he adds, “but none of them have the capability to actually do something as sophisticated as this.”

Why now?

There could be any number of reasons why Israel would opt to launch this attack now. One theory, attributed to senior intelligence sources and reported by Al Monitor, is that the compromised status of the pagers was at risk of being imminently discovered. Another is that Israel perhaps hoped the attack would act as a deterrent following recent revelations that the country’s security service foiled an attempt by Hezbollah to assassinate a former senior Israeli security official using a remotely detonated explosive device.

There’s also the possibility that Israel, having made the return of its displaced population to their homes in the north one of its war aims, wanted to pressure Hezbollah into moving its forces away from the nearby Israel-Lebanon border.

While some observers fear that the attack could have been initiated as a prelude to a wider Israeli military incursion in Lebanon, Krieg says such an escalation would be in neither party’s interests, recent comments from the Israeli defense minister notwithstanding. “This paralysis of [Hezbollah] being unable to communicate effectively with one another is certainly something that could be a preparation, a first step, of such an operation,” he says. “But I don’t think that’s likely.”

Will Hezbollah retaliate?

Hezbollah pledged on Wednesday that it will continue its military operations against Israel in order to “support Gaza,” and warned that Israel will face a “difficult reckoning” as a result of the pager attack, which it called a “massacre.” The armed group’s leader, Hassan Nasrallah, is expected to deliver a speech addressing the attack on Thursday.

How are governments around the world reacting?

A State Department spokesperson, who declined to comment on suspicions that the attacks were carried out by Israel, confirmed that the U.S. had no prior knowledge of the attack, telling reporters on Tuesday that Washington was neither aware of nor involved in the operation.

“That’s probably true because I think the [Biden] administration would try and talk them out of it, because they would say it’s escalatory,” Michael Allen, the former National Security Council director for President George W. Bush, tells TIME.

Across the Atlantic, the E.U.’s foreign policy chief Josep Borrell condemned the attacks in a statement, warning that they “endanger the security and stability of Lebanon, and increase the risk of escalation in the region.” He notably did not mention Israel in the statement, opting instead to urge all stakeholders to “avert an all-out war.”

The Iranian government, which backs and sponsors Hezbollah, condemned the attack as a “terrorist act.”

Does this attack constitute a war crime?

While the attack may have targeted pagers used by Hezbollah, that doesn’t necessarily mean that those in possession of them were armed militants. “Hezbollah is obviously the fighting wing, but Hezbollah is [also] a political party, it’s a charity organization, it’s a civil societal movement as well,” says Krieg. “And so this pager system would have been distributed among civilians as well—people who are not fighters, who are not contributing to the war effort, and they were targeted as well.”

It’s precisely for this reason that the use of booby traps is prohibited under international law. “The use of an explosive device whose exact location could not be reliably known would be unlawfully indiscriminate, using a means of attack that could not be directed at a specific military target and as a result would strike military targets and civilians without distinction,” Lama Fakih, the Beirut-based Middle East and North Africa director at Human Rights Watch, said in a statement.

“Simultaneous targeting of thousands of individuals, whether civilians or members of armed groups, without knowledge as to who was in possession of the targeted devices, their location and their surroundings at the time of the attack, violates international human rights law and, to the extent applicable, international humanitarian law,” Volker Türk, the U.N.’s High Commissioner for Human Rights, said in a statement on Wednesday, adding that those who ordered and carried out the attacks “must be held to account.”

Why Sam Altman Is Leaving OpenAI’s Safety Committee

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.

As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, such as those used to evaluate OpenAI’s recently released o1 series of reasoning models for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned—Altman was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team—which focused on ensuring that AI systems remain under human control if they surpass human-level intelligence—resigned. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The team was disbanded following their departure. The same month, OpenAI came under fire for asking departing employees to sign agreements that prevented them from criticizing the company or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)

Elon Musk’s New AI Data Center Raises Alarms Over Pollution

Elon Musk's xAI to Develop New Supercomputer in Memphis

In July, Elon Musk made a bold prediction: that his artificial intelligence startup xAI would release “the most powerful AI in the world,” a model called Grok 3, by this December. The bulk of that AI’s training, Musk said, would happen at a “massive new training center” in Memphis, which he bragged had been built in 19 days.

But many residents of Memphis were taken by surprise, including city council members who said they were given no input about the project or its potential impacts on the city. Data centers like this one use a vast amount of electricity and water. And in the months since, an outcry has grown among community members and environmental groups, who warn of the plant’s potential negative impact on air quality, water access, and grid stability, especially for nearby neighborhoods that have suffered from industrial pollution for decades. These activists also contend that the company is illegally operating gas turbines.

“This continues a legacy of billion-dollar conglomerates who think that they can do whatever they want to do, and the community is just not to be considered,” KeShaun Pearson, executive director of the nonprofit Memphis Community Against Pollution, tells TIME. “They treat southwest Memphis as just a corporate watering hole where they can get water at a cheaper price and a place to dump all their residue without any real oversight or governance.”

Some local leaders and utility companies, conversely, contend that xAI will be a boon for local infrastructure, employment, and grid modernization. Given the massive scale of this project, xAI’s foray into Memphis will serve as a litmus test of whether the AI-fueled data center boom might actually improve American infrastructure—or harm the disadvantaged just like so many power-hungry industries of decades past.

Artificial intelligence company xAI

“The largest data center on the planet”

In order for AI models to become smarter and more capable, they must be trained on vast amounts of data. Much of this training now happens in massive data centers around the world, which burn through electricity often accessed directly from public power sources. A recent report from Morgan Stanley estimates that data centers will emit three times more carbon dioxide by the end of the decade than if generative AI had not been developed. 

Read More: How AI Is Fueling a Boom in Data Centers and Energy Demand

The first version of Grok launched last year, and Musk has said he hopes it will be an “anti-woke” competitor to ChatGPT. (In practice, for example, this means it is able to generate controversial images that other AI models will not, including Nazi Mickey Mouse.) In recent interviews, Musk has stressed the importance of Grok ingesting as much data as possible to catch up with his competitors. So xAI built its data center, called Colossus, in Southwest Memphis, near Boxtown, a historically Black community, to do the bulk of the training. Ebby Amir, a technologist at xAI, boasted that the new site was “the largest AI datacenter on the planet.”

Local leaders said the plant would offer “good-paying jobs” and “significant additional revenues” for the local utility company. Memphis Mayor Paul Young praised the project in a statement, saying that the new xAI training center would reside on an “ideal site, ripe for investment.” 

But other local officials and community members soon became frustrated with the project’s lack of details. The Greater Memphis Chamber and Memphis Light, Gas and Water Division (MLGW) signed a non-disclosure agreement with xAI, citing the confidentiality of economic development negotiations. Some Memphis council members heard about the project on the news. “It’s been pretty astounding the lack of transparency and the pace at which this project has proceeded,” Amanda Garcia, a senior attorney at the Southern Environmental Law Center, says. “We learn something new every week.”

For instance, there’s a major divide between how much electricity xAI wants to use and how much MLGW can provide. In August, the utility company said that xAI would have access to 50 megawatts of power. But xAI wants to use triple that amount—which, for comparison, is enough electricity to power roughly 80,000 households.
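
To make that comparison concrete, here is a rough back-of-the-envelope sketch of the arithmetic; the average per-household draw used below is an assumption chosen for illustration, not a figure provided by MLGW or xAI.

```python
# Back-of-the-envelope check on the "150 megawatts ~ 80,000 households" comparison.
# Assumption (not from MLGW or xAI): an average household draws roughly 1.9 kW,
# i.e. on the order of 1,300-1,400 kWh per month.
requested_mw = 150                       # xAI's reported full power request
avg_household_kw = 1.875                 # assumed average household draw
households = requested_mw * 1_000 / avg_household_kw
print(f"~{households:,.0f} households")  # -> ~80,000
```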

MLGW said in a statement to TIME that xAI is paying for the technical upgrades that enable it to double its power usage—and that in order for the company to reach the full 150 megawatts, there will need to be $1.7 million in improvements to a transmission line. “There will be no impact to the reliability or availability of power to other customers from this electric load,” the utility wrote. It also added that xAI would be required to reduce its electricity consumption during times of peak demand, and that any infrastructure improvement costs would not be borne by taxpayers.

In response to complaints about the lack of communication with council members, MLGW wrote: “xAI’s request does not require approvals from the MLGW Board of Commissioners or City Council.” 

But community members worry whether Memphis’s utilities can handle such a large consumer of energy. In the past, the city’s power grid has been forced into rolling blackouts by ice storms and other severe weather events.

And Garcia, at the SELC, says that while xAI waits for more power to become available, the company has turned to measures outside the permitting process to sate its demand, installing gas combustion turbines on the site that it is operating without a permit. Garcia says the SELC has observed the installation of 18 such turbines, which have the capacity to emit 130 tons of harmful nitrogen oxides per year. The SELC and community groups sent a letter to the Shelby County Health Department demanding their removal—but the health department responded by claiming the turbines were outside its authority, and referred them to the EPA. The EPA told NPR that it was “looking into the matter.” A representative for xAI did not immediately respond to a request for comment.

Much of Memphis is already smothered by harmful pollution. The American Lung Association currently gives Shelby County, which contains Memphis, an “F” grade for its smog levels, writing, “the air you breathe may put your health at risk.” A local TV report this year named Boxtown the most polluted neighborhood in Memphis, especially during the summer.

Boxtown and its surrounding neighborhoods have historically suffered from poverty and pollution. Southwest Memphis’s cancer rate is four times the national average, according to a 2013 study, and life expectancy in at least one South Memphis neighborhood is 10 years lower than other parts of the city, a 2020 study found. The Tennessee Valley Authority has been dumping contaminated coal ash in a nearby landfill. And a Sterilization Services of Tennessee facility was finally closed last year after emitting ethylene oxide into the air for decades, which the EPA linked to increased cancer risk in South Memphis.

A representative for the Greater Memphis Chamber, which worked to bring xAI to Memphis, wrote to TIME in response to a request for comment: “We will not be participating in your narrative.”

City of Memphis struggles with lead pipes and water company doing partial replacements.

Potential impact on water

Environmentalists are also concerned about the facility’s use of water. “Industries are attracted to us because we have some of the purest water in the world, and it is dirt cheap to access,” says Sarah Houston, the executive director of the local environmental nonprofit Protect Our Aquifer.

Data centers use water to cool their computers and stop them from overheating. So far xAI has drawn 30,000 gallons from the Memphis Sand Aquifer, the region’s drinking water supply, every day since beginning its initial operations, according to MLGW—which added that the company’s water usage would have “no impact on the availability of water to other customers.”

But Houston and other environmentalists are especially concerned because Memphis’s aging water infrastructure is more than a century old and has failed several winters in a row, leading to boil advisories and pleas to residents to conserve water during times of stress. “xAI is just an additional industrial user pumping this 2,000-year-old pure water for a non-drinking purpose,” Houston says. “When you’re cooling supercomputers, it doesn’t seem to warrant this super pure ancient water that we will never see again.”

Memphis’s drinking water has also been threatened by contamination. In 2022, the Environmental Integrity Project and Earthjustice claimed that a now-defunct coal plant in Memphis was leaking arsenic and other dangerous chemicals into the groundwater supply, and ranked it as one of the 10 worst contaminated coal ash sites in the country. And because xAI sits close to the contaminated well in question, Houston warns that its heavy water usage could exacerbate the problem. “The more you pump, the faster contaminants get pulled down towards the water supply,” she says.

MLGW contends that xAI’s use of Memphis’s drinking water is temporary, because xAI is assisting in “the design and proposed construction” of a graywater facility that will treat wastewater so that it can be used to cool data center machines. MLGW is also trying to get Musk to provide a Tesla Megapack, a utility-scale battery, as part of the development.

Houston says that these solutions will be beneficial to the city—if they come to fruition. “We fully support xAI coming to the table and being a part of this solution,” she says. “But right now, it’s been empty promises.”

“We’re not opposed to ethical economic development and business moving into town,” says Garcia. “But we need some assurance that it’s not going to make what is already an untenable situation worse.” 

Disproportionate harm

For Pearson, of Memphis Community Against Pollution, the arrival of xAI is concerning because, as someone who grew up in Boxtown, he has seen how other major corporations have treated the area. Over the years, Memphis has dangled tax breaks and subsidies to persuade industrial companies to set up shop nearby. But many of those projects have not led to lasting economic development, and have seemingly contributed to an array of health problems among nearby residents.

For instance, city, county and state officials lured the Swedish home appliance manufacturer Electrolux to Memphis in 2013 with $188 million in subsidies. The company’s president told NPR that it intended to provide good jobs and stay there long-term. Six years later, the company announced it would shut down its facility to consolidate resources at another location, laying off over 500 employees in a move that blindsided even then-Mayor Jim Strickland. Now, xAI has taken over that Electrolux plant, which spans 750,000 square feet.

“Companies choose Memphis because they believe it is the path of least resistance: They come here, build factories, pollute the air, and move on,” Pearson says.

Pearson says that community organizations in southwest Memphis have had no contact or dialogue with xAI about its plans for the area whatsoever, and that there has been no recruiting in the community for jobs, nor any workforce-development training. When presented with claims that xAI will economically benefit the local community, he harbors many doubts.

“This is the same playbook, and the same talking points passed down and passed around by these corporate colonialists,” Pearson says. “For us, it is empty, it’s callous, and it’s just disingenuous to continue to regurgitate these things without actually having plans of implementation or inclusion.”

Instagram Introduces Teen Accounts, Other Sweeping Changes to Boost Child Safety Online

Instagram Teen Accounts

Instagram is introducing separate teen accounts for those under 18 as it tries to make the platform safer for children amid a growing backlash against how social media affects young people’s lives.

Beginning Tuesday in the U.S., U.K., Canada and Australia, anyone under 18 who signs up for Instagram will be placed into a teen account, and those with existing accounts will be migrated over the next 60 days. Teens in the European Union will see their accounts adjusted later this year.

Meta acknowledges that teenagers may lie about their age and says it will require them to verify their ages in more instances, such as if they try to create a new account with an adult birthdate. The Menlo Park, California-based company also said it is building technology that proactively finds teen accounts that pretend to be grownups and automatically places them into the restricted teen accounts.

Read More: The U.S. Surgeon General Fears Social Media Is Harming the ‘Well-Being of Our Children’

The teen accounts will be private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to. “Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 p.m. until 7 a.m.

While these settings will be turned on for all teens, 16- and 17-year-olds will be able to turn them off. Kids under 16 will need their parents’ permission to do so.

“The three concerns we’re hearing from parents are that their teens are seeing content that they don’t want to see or that they’re getting contacted by people they don’t want to be contacted by or that they’re spending too much time on the app,” said Naomi Gleit, head of product at Meta. “So teen accounts is really focused on addressing those three concerns.”

The announcement comes as the company faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In the past, Meta’s efforts at addressing teen safety and mental health on its platforms have been met with criticism that the changes don’t go far enough. For instance, while kids will get a notification when they’ve spent 60 minutes on the app, they will be able to bypass it and continue scrolling.

That’s unless the child’s parents turn on “parental supervision” mode, where parents can limit teens’ time on Instagram to a specific amount of time, such as 15 minutes.

With the latest changes, Meta is giving parents more options to oversee their kids’ accounts. Those under 16 will need a parent or guardian’s permission to change their settings to less restrictive ones. They can do this by setting up “parental supervision” on their accounts and connecting them to a parent or guardian.

Nick Clegg, Meta’s president of global affairs, said last week that parents don’t use the parental controls the company has introduced in recent years.

Gleit said she thinks teen accounts will create a “big incentive for parents and teens to set up parental supervision.”

“Parents will be able to see, via the family center, who is messaging their teen and hopefully have a conversation with their teen,” she said. “If there is bullying or harassment happening, parents will have visibility into who their teen’s following, who’s following their teen, who their teen has messaged in the past seven days and hopefully have some of these conversations and help them navigate these really difficult situations online.”

U.S. Surgeon General Vivek Murthy said last year that tech companies put too much on parents when it comes to keeping children safe on social media.

“We’re asking parents to manage a technology that’s rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage,” Murthy said in May 2023.

Meta Is Globally Banning Russian State Media on Its Apps, Citing ‘Foreign Interference’

Social media company Meta—the parent company of Facebook, Instagram, and WhatsApp—announced Monday that it will ban RT and other Russian state media from its apps worldwide, days after the State Department announced sanctions against Kremlin-coordinated news organizations.

“After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a statement provided to TIME.

Before the ban, RT had over 7 million followers on Facebook, while its Instagram account had over a million followers.

The move is an escalation of actions Meta announced in 2022, after Russia’s invasion of Ukraine, to limit the spread of Russian disinformation, which at the time included labeling and demoting posts with links to Russian state-controlled media outlets, demonetizing the accounts of those outlets, and prohibiting them from running ads. The company also complied with E.U. and U.K. government requests to restrict access to RT and Sputnik in those territories. In response, in March 2022, Russia blocked access to Facebook and Instagram in the country.

Meta’s latest actions come after Secretary of State Antony Blinken said in a press conference on Friday that the U.S. government has concluded Rossiya Segodnya and five of its subsidiaries, including RT, “are no longer merely firehoses of Russian Government propaganda and disinformation; they are engaged in covert influence activities aimed at undermining American elections and democracies, functioning like a de facto arm of Russia’s intelligence apparatus.” Sanctions unveiled Friday were imposed on RT’s parent company TV-Novosti as well as on Rossiya Segodnya and its general director Dmitry Kiselyov, and the State Department issued a notice “alerting the world to RT’s covert global activities.” Russian President Vladimir Putin’s spokesperson, Dmitry Peskov, told the Associated Press that the State Department’s allegations were “nonsense.”

Meta’s new global ban follows a similar YouTube global ban on Russian state-funded media channels, while TikTok and X (formerly Twitter) block access to RT and Sputnik in the E.U. and U.K.

At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

Inventor and futurist Ray Kurzweil, researcher and Brookings Institution fellow Chinasa T. Okolo, director of the U.S. Artificial Intelligence Safety Institute (AISI) Elizabeth Kelly, and Cognizant CEO Ravi Kumar S discussed the transformative power of AI during a panel at a TIME100 Impact Dinner in San Francisco on Monday. During the discussion, which was moderated by TIME’s editor-in-chief Sam Jacobs, Kurzweil predicted that we will achieve Artificial General Intelligence (AGI), a type of AI that might be smarter than humans, by 2029.

“Nobody really took it seriously until now,” Kurzweil said about AI. “People are convinced it’s going to either endow us with things we’d never had before, or it’s going to kill us.”

Cognizant sponsored Monday’s event, which celebrated the 100 most influential people leading change in AI. The TIME100 AI spotlights computer scientists, business leaders, policymakers, advocates, and others at the forefront of big changes in the industry. Jacobs probed the four panelists—three of whom were named to the 2024 list—about the opportunities and challenges presented by AI’s rapid advancement.

Kumar discussed the potential economic impact of generative AI and cited a new report from Cognizant which says that generative AI could add more than a trillion dollars annually to the US economy by 2032. He identified key constraints holding back widespread adoption, including the need for improved accuracy, cost-performance, responsible AI practices, and explainable outputs. “If you don’t get productivity,” he said, “task automation is not going to lead to a business case stacking up behind it.”

Okolo highlighted the growth of AI initiatives in Africa and the Global South, citing the work of professor Vukosi Marivate from the University of Pretoria in South Africa, who has inspired a new generation of researchers within and outside the continent. However, Okolo acknowledged the mixed progress in improving the diversity of languages informing AI models, with grassroots communities in Africa leading the charge despite limited support and funding.

Kurzweil said that he was excited about the potential of simulated biology to revolutionize drug discovery and development. By simulating billions of interactions in a matter of days, he noted, researchers can accelerate the process of finding treatments for diseases like cancer and Alzheimer’s. He also provided a long-term perspective on the exponential growth of computational power, predicting a sharper so-called S-curve (a slow start, then rapid growth before leveling off) for AI disruption compared to previous technological revolutions.

Read more: The TIME100 Most Influential People in AI 2024

Kelly addressed concerns about AI’s potential for content manipulation in the context of the 2024 elections and beyond. “It’s going to matter this year, but it’s going to matter every year more and more as we move forward,” she noted. She added that AISI is working to advance the science to detect synthetically created content and authenticate genuine information.

Kelly also noted that lawmakers have been focusing on AI’s risks and benefits for some time, with initiatives like the AI Bill of Rights and the AI Risk Management Framework. “The president likes to use the phrase ‘promise and peril,’ which I think pretty well captures it, because we are incredibly excited about simulated biology and drug discovery and development while being aware of the flip side risks,” she said.

As the panel drew to a close, Okolo urged attendees, who included nearly 50 other past and present TIME100 AI honorees, to think critically about how they develop and apply AI and to try to ensure that it reaches people in underrepresented regions in a positive way.

“A lot of times you talk about the benefits that AI has brought, you know, to people. And a lot of these people are honestly concentrated in one region of the world,” she said. “We really have to look back, or maybe, like, step back and think broader,” she implored, asking leaders in the industry to think about people from Africa to South America to South Asia and Southeast Asia. “How can they benefit from these technologies, without necessarily exploiting them in the process?”

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

At TIME100 Impact Dinner, AI Leaders Talk Reshaping the Future of AI

TIME hosted its inaugural TIME100 Impact Dinner: Leaders Shaping the Future of AI, in San Francisco on Monday evening. The event kicked off a weeklong celebration of the TIME100 AI, a list that recognizes the 100 most influential individuals in artificial intelligence across industries and geographies and showcases the technology’s rapid evolution and far-reaching impact. 

TIME CEO Jessica Sibley set the tone for the evening, highlighting the diversity and dynamism of the 2024 TIME100 AI list. With 91 newcomers from last year’s inaugural list and honorees ranging from 15 to 77 years old, the group reflects the field’s explosive growth and its ability to attract talent from all walks of life.

Read More: At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

The heart of the evening centered around three powerful toasts delivered by distinguished AI leaders, each offering a unique perspective on the transformative potential of AI and the responsibilities that come with it.

Reimagining power structures

Amba Kak, co-executive director of the AI Now Institute, delivered a toast that challenged attendees to look beyond the technical aspects of AI and consider its broader societal implications. Kak emphasized the “mirror to the world” quality of AI, reflecting existing power structures and norms through data and design choices.

“The question of ‘what kind of AI we want’ is really an opening to revisit the more fundamental question of ‘what is the kind of world we want, and how can AI get us there?’” Kak said. She highlighted the importance of democratizing AI decision-making, ensuring that those affected by AI systems have a say in their deployment.

Kak said she drew inspiration from frontline workers and advocates pushing back against the misuse of AI, including nurses’ unions staking their claim in clinical AI deployment and artists defending human creativity. Her toast served as a rallying cry for a more inclusive and equitable AI future.

Amplifying creativity and breaking barriers

Comedian, filmmaker, and AI storyteller King Willonius emphasized AI’s role in expanding who can be creative and giving voice to underrepresented communities. Willonius shared his personal journey of discovery with AI-assisted music composition, illustrating how AI can unlock new realms of creative expression.

“AI doesn’t just automate—it amplifies,” he said. “It breaks down barriers, giving voices to those who were too often left unheard.” He highlighted the work of his company, Blerd Factory, in leveraging AI to empower creators from diverse backgrounds.

Willonius’ toast struck a balance between enthusiasm for AI’s creative potential and a call for responsible development. He emphasized the need to guide AI technology in ways that unite rather than divide, envisioning a future where AI fosters empathy and global connection.

Accelerating scientific progress

AMD CEO Lisa Su delivered a toast that underscored AI’s potential to address major global challenges. Su likened the current AI revolution to the dawn of the industrial era or the birth of the internet, emphasizing the unprecedented pace of innovation in the field.

She painted a picture of AI’s transformative potential across various domains, from materials science to climate change research, and said that she was inspired by AI’s applications in healthcare, envisioning a future where AI accelerates disease identification, drug development, and personalized medicine.

“I can see the day when we accelerate our ability to identify diseases, develop therapeutics, and ultimately find cures for the most important illnesses in the world,” Su said. Her toast was a call to action for leaders to seize the moment and work collaboratively to realize AI’s full potential while adhering to principles of transparency, fairness, and inclusion.

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

Uncertainty Is Uncomfortable, and Technology Makes It Worse. That Doesn’t Have to Be a Bad Thing

On July 19, 2024, a single-digit error in the software update of cybersecurity company CrowdStrike grounded international airlines, halted emergency medical treatments, and paralyzed global commerce. The expansive network that had enabled CrowdStrike to access information from over a trillion events every day and prevent more than 75,000 security breaches every year had ironically introduced a new form of uncertainty of colossal significance. The impact of a seemingly minor error in the code was now at risk of being exponentially magnified by the network, unleashing the kind of global havoc we witnessed last July.

The very mechanism that had reduced the uncertainty of regular cyber threats had concurrently increased the unpredictability of a rare global catastrophe—and with it, the deepening cracks in our relationship with uncertainty and technology.

Our deep-seated discomfort with uncertainty—a discomfort rooted not just in technology but in our very biology—was vividly demonstrated in a 2017 experiment where London-based researchers gave consenting volunteers painful electric shocks to the hand while measuring physiological markers of distress. Knowing there was only a 50-50 chance of receiving the shock agitated the volunteers far more than knowing the painful shock was imminent, highlighting how much more unsettling uncertainty can be compared to the certainty of discomfort.

This drive to eliminate uncertainty has long been a catalyst for technological progress and turned the wheels of innovation. From using fire to dispel the fear of darkness to mechanizing agriculture to guarantee food abundance, humanity’s innovations have consistently aimed to turn uncertainty into something controllable and predictable on a global scale.

Read More: Here’s Why Uncertainty Makes You So Miserable

But much like energy, uncertainty can be transformed but never destroyed. When we think we have removed it, we have merely shifted it to a different plane. This gives rise to the possibility of an intriguing paradox: With each technological advancement designed to reduce uncertainty, do we inadvertently introduce new uncertainties, making the world even more unpredictable?

 Automated algorithms have revolutionized financial trading at an astronomical scale by shattering human limits on speed, precision and accuracy. Yet, in the process of eliminating human error and decoding complex probabilities in foreign exchange trading, these systems have introduced new uncertainties of their own—uncertainties too intricate for human comprehension. What once plagued day-to-day trading with human-scale uncertainty has morphed into technology-scale risks that didn’t exist before. By lowering some forms of uncertainty, these automated algorithms have ultimately increased it. 

A striking example of this is algorithmic trading, where software is used to eradicate uncertainty and upgrade financial systems. It is, however, impossible to test every permutation of every pathway in a software decision tree, meaning that even the most sophisticated upgrades inevitably introduce new uncertainties. Subtle errors, camouflaged in labyrinthine webs of code, become imperceptible at the lightning speed of execution. In August 2012, when the NYSE’s Retail Liquidity Program went live, global financial services firm Knight Capital was equipped with a high-frequency trading algorithm. Unfortunately, a glitch introduced in an overnight code update was amplified to a disastrous degree at trading speed, costing Knight Capital $440 million in just 30 minutes.

As technology becomes more sophisticated, it not only eradicates the uncertainty of time and distance from our everyday lives but also transforms how we experience uncertainty itself. An app informs you exactly when the bus you are waiting for will arrive, a check mark tells you when your friend has not only received but read your message, and a ding lets you know someone is waiting on your doorstep when you are on vacation on a different continent. This information is often incredibly useful. Yet, the same technology floods us with unsolicited, irrelevant details. Worse, it often captures our attention by delivering fragments of incomplete information: a partial news headline pops up on our phone, an alert from our home security system reports unusual activity on our property, a new friend request slides into our social media inbox. Resolving these uncertainties requires us to swipe, click, or watch, only to be bombarded with yet another stream of incomplete information. Instead of resolving uncertainty, the information often leaves us with more of it.

Rarely do we stop to ask ourselves if the kinds of frequent, small-scale uncertainties that modern technology is designed to eliminate are really so terrible in the first place. If we did, we might realize that human-scale uncertainties make us more resilient, revealing weaknesses we did not know we had.

Historical evidence suggests that eliminating uncertainty isn’t always beneficial. Angkor, the medieval capital of the ancient Khmer empire, became the largest pre-industrial city in the world partly because its population was able to tame the uncertainty of nature by creating an elaborate water management network. This system eliminated the unpredictability of monsoon rains, sustaining Angkor’s agrarian population, which grew to nearly a million. Yet this very system may also have contributed to the city’s collapse. When Angkor was struck by severe droughts and violent monsoons in the 14th and 15th centuries, its reliance on guaranteed water supplies left its people vulnerable to disaster.

The uncertainty paradox does not stem from innovation in itself. Innovating solutions for large-scale uncertainties has manifestly saved countless lives. Modern-day examples include sanitation technology that has helped to eradicate cholera in many parts of the world and Tuned Mass Damper (TMD) technology that protected the Taipei 101 skyscraper during a 7.4 magnitude earthquake in 2024. Instead, the uncertainty paradox seems to emerge when we seek to erase smaller-scale, everyday uncertainties entirely from our lives. This can make us more vulnerable, as we forget how to deal with unexpected uncertainty when it finally strikes. One solution is to deliberately create opportunities to experience and rehearse dealing with uncertainty. Hong Kong’s resilience in the face of intense typhoons stems from regular exposure to monsoon rains—preparing the city to withstand storms that could devastate other parts of the world.

Netflix engineers Yury Izrailevsky and Ariel Tseitlin captured this idea in their creation of “Chaos Monkey,” a tool that deliberately introduces system failures so engineers can identify weaknesses and build better recovery mechanisms. Inspired by this concept, many organizations now conduct “uncertainty drills” to prepare for unexpected challenges. However, while drills prepare us for known scenarios, true resilience requires training our reactions to uncertainty itself—not just our responses to specific situations. Athletes and Navy SEALs incorporate deliberate worst-case scenarios in their training to build mental fortitude and adaptability in the face of the unknown.
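
As a concrete illustration of the chaos-engineering idea described above, here is a minimal, hypothetical sketch of fault injection in Python; the function names and failure rates are invented for illustration, and this is not Netflix’s actual tool.

```python
import random

def chaotic(failure_rate=0.2):
    """Wrap a function so it occasionally raises, simulating a random outage."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise RuntimeError(f"chaos: injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@chaotic(failure_rate=0.3)
def fetch_recommendations(user_id):
    # Stand-in for a call to a real recommendation service.
    return ["title-a", "title-b"]

def homepage(user_id):
    # The "recovery mechanism" the injected failures are meant to exercise:
    # degrade gracefully to a fallback list instead of failing the whole page.
    try:
        return fetch_recommendations(user_id)
    except RuntimeError:
        return ["fallback-title"]

if __name__ == "__main__":
    print([homepage(42) for _ in range(5)])
```

Running a sketch like this repeatedly shows which code paths fall back cleanly and which do not—the same feedback loop the drills described above are meant to create.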

The relationship between uncertainty and technology is like an Ouroboros: we create technology to eliminate uncertainty, yet that technology generates new uncertainties that we must eliminate all over again. Rather than trying to break this cycle, the solution may be paradoxical: to make the world feel more certain, we might need to embrace a little more uncertainty every day.

Regulating AI Is Easier Than You Think

Female engineer inspecting wafer chip in laboratory

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template on how to do just that.

In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy but slow nuclear weapons proliferation by controlling access to the raw materials—namely weapons-grade uranium and plutonium—that underpin them. The risk has been managed through international institutions, such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world’s electricity, and only nine countries possess nuclear weapons.

Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips that are needed to train the world’s most advanced AI models. Business leaders and even the U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.

The most advanced AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers where they churn on data for months to train the most capable AI models. These advanced chips are difficult to produce, the supply chain is tightly controlled, and large numbers of them are needed to train AI models. 

Governments can establish a regulatory regime where only authorized computing providers are able to acquire large numbers of advanced chips in their data centers, and only licensed, trusted AI companies are able to access the computing power needed to train the most capable—and most dangerous—AI models. 

This may seem like a tall order. But only a handful of nations are needed to put this governance regime in place. The specialized computer chips used to train the most advanced AI models are only made in Taiwan. They depend on critical technology from three countries—Japan, the Netherlands, and the U.S. In some cases, a single company holds a monopoly on key elements of the chip production supply chain. The Dutch company ASML is the world’s only producer of extreme ultraviolet lithography machines that are used to make the most cutting-edge chips.

Read More: The 100 Most Influential People in AI 2024

Governments are already taking steps to govern these high-tech chips. The U.S., Japan, and the Netherlands have placed export controls on their chip-making equipment, restricting their sale to China. And the U.S. government has prohibited the sale of the most advanced chips—which are made using U.S. technology—to China. The U.S. government has also proposed requirements for cloud computing providers to know who their foreign customers are and report when a foreign customer is training a large AI model that could be used for cyberattacks. And the U.S. government has begun debating—but not yet put in place—restrictions on the most powerful trained AI models and how widely they can be shared. While some of these restrictions are about geopolitical competition with China, the same tools can be used to govern chips to prevent adversary nations, terrorists, or criminals from using the most powerful AI systems.

The U.S. can work with other nations to build on this foundation to put in place a structure to govern computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, training AI models, and the trained models that are the result of this production cycle. 

Japan, the Netherlands, and the U.S. can help lead the creation of a global governance framework that permits these highly specialized chips to only be sold to countries that have established regulatory regimes for governing computing hardware. This would include tracking chips and keeping account of them, knowing who is using them, and ensuring that AI training and deployment is safe and secure.

But global governance of computing hardware can do more than simply keep AI out of the hands of bad actors—it can empower innovators around the world by bridging the divide between computing haves and have nots. Because the computing requirements to train the most advanced AI models are so intense, the industry is moving toward an oligopoly. That kind of concentration of power is not good for society or for business.

Some AI companies have in turn begun publicly releasing their models. This is great for scientific innovation, and it helps level the playing field with Big Tech. But once the AI model is open source, it can be modified by anyone. Guardrails can be quickly stripped away.

The U.S. government has fortunately begun piloting national cloud computing resources as a public good for academics, small businesses, and startups. Powerful AI models could be made accessible through the national cloud, allowing trusted researchers and companies to use them without releasing the models on the internet to everyone, where they could be abused.  

Countries could even come together to build an international resource for global scientific cooperation on AI. Today, 23 nations participate in CERN, the international physics laboratory that operates the world’s most advanced particle accelerator. Nations should do the same for AI, creating a global computing resource for scientists to collaborate on AI safety, empowering scientists around the world.

AI’s potential is enormous. But to unlock AI’s benefits, society will also have to manage its risks. By controlling the physical inputs to AI, nations can securely govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.
