
How AI Could Transform Fast Fashion for the Better—and Worse

20 September 2024 at 12:00

Since Shein became the world’s most popular online shopping destination—with seemingly unbeatable prices, and influencers posting “haul” videos to show off their purchases on social media—the Chinese fast-fashion giant has raised questions over how it produces its plethora of merchandise at dizzying speeds. The answer: AI-powered algorithms that detect shifts in customer demand and interest, allowing the company to adjust its supply chain in real time. As a result, Shein reportedly lists as many as 600,000 items on its online platform at any given moment, selling to customers in over 220 countries and regions globally.


But the company has also long been under scrutiny for its poor record on environmental sustainability, becoming fashion’s biggest polluter in 2023. Investigations into Shein’s supply chains have found severe labor rights violations, with factory workers in Southern Chinese manufacturing plants reporting grueling 75-hour work weeks to keep up with demand. 

Shein claims AI is the answer to solving these problems, too. During a retail conference in Berlin in January, Peter Pernot-Day, Shein’s head of global strategy and corporate affairs, explained that more than 5,000 Shein suppliers recently gained access to an AI software platform to analyze customer preferences—information that the company then uses to produce small batches of merchandise to match demand in real time. “We are using machine-learning technologies to accurately predict demand in a way we think is cutting-edge,” Pernot-Day said. “The net effect of this is reducing inventory waste.”

Shein isn’t the only company to tout AI’s potential to transform the fast-fashion business. Many of its competitors, including H&M and Zara, have also turned to machine-learning technology to analyze sales data and understand customer demand by predicting trends, tracking inventory levels, and cutting down on operational costs. Retail experts are equally optimistic about the power of generative AI: a recent report by McKinsey suggests that AI could add up to $275 billion to the operating profits of the apparel, fashion, and luxury sectors in the next three to five years.

“We are already seeing significant shifts in fast fashion with the use of GenAI,” says Holger Harreis, a senior partner at McKinsey who co-authored the report. Harreis adds that in the long term, this could result in more personalized processes in fashion, “even to levels of quasi-bespoke tailoring with colors, styles, and sizes—all delivered by a heavily genAI-led process with human interventions focused on where humans add the most value.”

As Shein uses AI to optimize its supply chain, however, environmental experts question whether these claimed efficiencies are truly improving outcomes. “Without strong ethical, social, and environmental standards in place, AI could just as easily be driving faster production and overconsumption,” says Lewis Perkins, the president of the Apparel Impact Institute, a global nonprofit that measures the fashion industry’s climate impact. 

Companies promise waste reduction as consumption soars

As the world’s second-largest industrial polluter, fast fashion releases 1.2 billion tonnes of carbon emissions every year, accounting for 10% of global emissions, according to research from the European Environment Agency. But no company has been as prolific in generating emissions in recent years as Shein. The company’s 2023 sustainability report recorded a carbon footprint of 16.7 million tonnes last year—nearly triple the emissions it produced in the previous three years. Shein’s footprint has also soared past that of Zara, previously fashion’s biggest emitter, and is roughly double that of companies like Nike, H&M, and LVMH. 

Founded by Chinese billionaire Sky Xu in 2008, Shein became a go-to destination for online shopping during the pandemic after listing nearly 600,000 items on its marketplace. By November 2022, it was accounting for 50% of fast-fashion sales in the U.S. One in four Gen Z consumers now shop at Shein, while 44% make at least one Shein purchase monthly, according to research from EMARKETER. Shein reports that 61% of its carbon footprint came from its supply chain, while 38% came from transporting goods from its facilities to customers. In July alone, Shein sent about 900,000 packages to customers by air. 

The sustainability report also highlighted how the company plans to reduce emissions. That includes moving production hubs closer to the customers, launching a $222 million circularity fund to promote textile-to-textile recycling, and setting a 25% reduction target for emissions by 2030. While Shein did not respond to TIME’s request for comment, a spokesperson for the company recently told Grist that the company is increasing inventory in U.S. warehouses and using cargo ships to deliver to customers. The company also reiterated that AI would further help to reduce waste, asserting that “we do not see growth as antithetical to sustainability.”

Read More: Shein Is the World’s Most Popular Fashion Brand—at a Huge Cost to Us All

There is new research that could back these claims. A study by the UNSW Institute for Climate Risk & Response found that companies can harness AI-driven technologies for climate action to analyze their carbon footprint, as well as devise strategies to reduce it. 

“In short, AI will improve the firms’ entire value chain in ways that help them avoid, mitigate, or offset the environmental impacts of their products, services, or processes,” says David Grant, who co-authored this study with colleague Shahriar Akter. Grant adds that much of this work can be done far more quickly and more accurately with AI, as opposed to humans. “The benefits to the environment, specifically in respect of climate change, are thus far greater than would otherwise be achieved,” he says. 

Still, the authors of the study warn against the risks posed by AI in the fast-fashion supply chain, specifically through a “vicious circle of overconsumption, pollution, and exploitation,” says Akter, pointing to Shein’s ability to predict demand and manufacture garments at “lightning-fast speed,” which puts added strain on factory workers to churn out garments even faster.

Algorithms feed on copyrighted work

Generative AI’s risks don’t stop at the supply chain. Akter from UNSW adds that the technology is also susceptible to breaching copyrights and compromising the artistic quality of human creativity. 

In April, Connecticut-based artist and designer Alan Giana filed a lawsuit in New York’s Southern District against Shein, alleging that the company’s use of AI, machine learning, and algorithms was systematically infringing on his copyrighted work. Citing “Coastal Escape,” artwork that appeared on Shein’s website without permission or attribution, the complaint alleges that “widespread copyright infringement is baked into the business” by using sophisticated electronic systems that “algorithmically scour the internet for popular works by artists.” It went further by stating that the infringement likely extends to “thousands or tens of thousands of other persons” in the U.S. 

Shein has faced dozens of similar lawsuits alleging design theft in the past. In July 2023, three graphic designers in China sued Shein for using “secretive algorithms” to identify trends and copy their designs. The complaint went so far as to say that the company’s copyright infringement was so aggressive that it amounted to “racketeering.” In response, Shein told NBC that it took all claims of infringement “seriously”: “We take swift action when complaints are raised by valid IP rights holders,” it stated. 

Akter from UNSW says that generative AI-based designs “might result in breaching copyrights and put a company in a questionable situation,” adding that it could also result in “algorithmic monoculture,” pushing fashion companies to rely on similar algorithms and causing them to lose the necessary creativity in fashion retailing. Moreover, he says that AI-based marketing models could also result in algorithmic bias extending to race, gender, sexual orientation, social class, religion, and ethnicity. 

But despite these risks, more brands are investing significant amounts of their budget in AI. McKinsey’s Harreis is optimistic about its ability to optimize production and reduce waste, but he adds that companies still face a big challenge. “In order for tech to add value, companies need to realize that it is never just about tech, it takes rewiring the entire organization,” he says. 

AI can help bring a systemic shift in design, production, and consumption, says Perkins at the Apparel Impact Institute, but only if it is “paired with responsible business practices, transparent supply chains, and a commitment to reducing overall impact.” It’s not impossible to imagine what this might look like. Perkins points to innovators like Made2Flow, which uses AI-driven data analytics to measure and optimize environmental impact across the fashion supply chain. Similarly, Smartex.Ai leverages AI to detect and reduce fabric defects, leading to lower material waste.

But if AI is used solely to speed up production and push more products to market, it could “fuel overconsumption,” Perkins warns. “Until there’s clear evidence that AI is being used to genuinely reduce the fashion industry’s environmental footprint, I remain cautious about how much positive impact this model is actually having,” he says.


6 Questions About the Deadly Exploding Pager Attacks in Lebanon, Answered

18 September 2024 at 18:00

When thousands of pagers and other wireless devices simultaneously exploded across Lebanon and parts of Syria this week, killing at least 15 people and injuring thousands more, it exposed what one Hezbollah official described as the “biggest security breach” the Iran-backed militant group has experienced in nearly a year of war with Israel. In a period replete with violent attacks across the region—from Israel’s bombardment of the Gaza Strip to the targeted assassinations of militant leaders in Iran and Lebanon—this was perhaps the most sophisticated and daring one yet.


Hezbollah confirmed that eight of its fighters were killed in the blasts that took place on Tuesday, according to the BBC. Further such explosions, this time involving two-way radios, were reported on Wednesday. Civilians haven’t been spared from the onslaught. At least two children were killed in Tuesday’s blasts, according to the country’s health minister, and thousands of others were wounded, some critically. Iran’s ambassador to Lebanon lost an eye as a result of one of the blasts, according to the New York Times.

Officials in the U.S. and elsewhere have left little doubt as to who might be responsible. Hezbollah and Lebanese officials quickly pointed to Israel, which in addition to waging its ongoing war with Hamas in Gaza has also been exchanging near daily blows with Hezbollah across its northern border with Lebanon since Oct. 7. Earlier this week, the Israeli government announced it was expanding its war aims to include the return of its northern residents who were evacuated from towns along the country’s northern frontier in the immediate aftermath of Oct. 7—a goal that the country’s defense minister Yoav Gallant said would be achieved through “military action.” Days earlier, Lebanese residents on the other side of the border received Israeli military leaflets ordering them to leave the area. The Israeli military has since described their distribution as an “unauthorized action,” and said that no evacuation is underway.

Still, expert observers warn that this attack, and any retaliation that might follow it, could raise the prospect of a wider war breaking out. Here are six of the biggest questions—and answers—that remain.


How were the explosions triggered?

Hezbollah’s widespread use of pagers—hardly considered a high-tech form of communication by most standards—was primarily a security precaution. The militant group had reportedly ordered its members to forego using mobile phones earlier this year due to concerns that they could be more easily tracked. In their place they were given AR-924 pagers, thousands of which were sourced from a Taiwan-based brand called Gold Apollo. Although the company confirmed it had licensed the use of its brand for these pagers, it denied playing any role in their manufacturing, which it said was done by a Budapest-based firm called BAC Consulting.

Footage from one of the blasts—which TIME was unable to independently verify, but which was deemed credible by the BBC—showed the moment one of these pagers exploded, emitting smoke and causing the person who appeared to be carrying it to fall to the floor.

Experts who spoke with TIME say that this wasn’t a cyberattack. Rather, it was likely the result of an infiltration in the supply chain, which makes how the pagers were manufactured and who was involved all the more critical. “The explosions were likely triggered by pre-implanted explosives, possibly activated via a radio signal, as simple as the paging system itself,” says Lukasz Olejnik, an independent researcher and consultant in cybersecurity and privacy. “The supply chain was likely compromised at some point, either in the factory or during delivery.”

While such an operation would have been difficult to execute, it isn’t beyond the capabilities of a country like Israel. “Israel is obviously still the master of intelligence in the region,” Andreas Krieg, an associate professor for security studies at King’s College London, tells TIME, noting that “it has a network of intelligence and information collection that is unparalleled.”


What is Israel saying about it?

Israel has a long history of pulling off complex attacks of the kind seen in Lebanon. But as with the recent assassination of Hamas leader Ismail Haniyeh in Iran, it rarely takes responsibility for them. When TIME inquired about Israel’s involvement in the pager explosions, an Israeli military spokesperson declined to confirm or deny whether the country was behind the attack, offering only a two-word response: “no comment.”

But experts say that all obvious signs point to Israeli involvement. “No one else is benefiting from it, but Israel, in terms of paralyzing Hezbollah,” says Krieg, noting that the militant group has been the most strategic threat to Israel for at least the past three decades. “There are loads of people who don’t like Hezbollah in the region, including Arab countries,” he adds, “but none of them have the capability to actually do something as sophisticated as this.”

Why now?

There could be any number of reasons for why Israel would opt to launch this attack now. One theory, attributed to senior intelligence sources and reported by Al Monitor, was that the compromised status of the pagers was at risk of being imminently discovered. Another is that Israel perhaps hoped the attack would act as a deterrent following recent revelations that the country’s security service foiled an attempt by Hezbollah to assassinate a former senior Israeli security official using a remotely detonated explosive device.

There’s also the possibility that Israel, having made the return of its displaced population to their homes in northern Israel one of its war aims, wanted to pressure Hezbollah into moving its forces away from the nearby Israel-Lebanon border.

While some observers fear that the attack could have been initiated as a prelude to a wider Israeli military incursion in Lebanon, Krieg says such an escalation would be in neither party’s interests, recent comments from the Israeli defense minister notwithstanding. “This paralysis of [Hezbollah] being unable to communicate effectively with one another is certainly something that could be a preparation, a first step, of such an operation,” he says. “But I don’t think that’s likely.”

Will Hezbollah retaliate?

Hezbollah pledged on Wednesday that it will continue its military operations against Israel in order to “support Gaza,” and warned that Israel will face a “difficult reckoning” as a result of the pager attack, which it called a “massacre.” The armed group’s leader, Hassan Nasrallah, is expected to deliver a speech addressing the attack on Thursday.

How are governments around the world reacting?

A State Department spokesperson, who declined to comment on suspicions that the attacks were carried out by Israel, confirmed that the U.S. had no prior knowledge of the attack, telling reporters on Tuesday that Washington was neither aware nor involved with the operation.

“That’s probably true because I think the [Biden] administration would try and talk them out of it, because they would say it’s escalatory,” Michael Allen, the former National Security Council director for President George W. Bush, tells TIME.

Across the Atlantic, the E.U.’s foreign policy chief Josep Borrell condemned the attacks in a statement, warning that they “endanger the security and stability of Lebanon, and increase the risk of escalation in the region.” He notably did not mention Israel in the statement, opting instead to urge all stakeholders to “avert an all-out war.”

The Iranian government, which backs and sponsors Hezbollah, condemned the attack as a “terrorist act.”

Does this attack constitute a war crime?

While the attack may have targeted pagers used by Hezbollah, that doesn’t necessarily mean that those in possession of them were armed militants. “Hezbollah is obviously the fighting wing, but Hezbollah is [also] a political party, it’s a charity organization, it’s a civil societal movement as well,” says Krieg. “And so this pager system would have been distributed among civilians as well—people who are not fighters, who are not contributing to the war effort, and they were targeted as well.”

It’s precisely for this reason that the use of booby traps is prohibited under international law. “The use of an explosive device whose exact location could not be reliably known would be unlawfully indiscriminate, using a means of attack that could not be directed at a specific military target and as a result would strike military targets and civilians without distinction,” Lama Fakih, the Beirut-based Middle East and North Africa director at Human Rights Watch, said in a statement.

“Simultaneous targeting of thousands of individuals, whether civilians or members of armed groups, without knowledge as to who was in possession of the targeted devices, their location and their surroundings at the time of the attack, violates international human rights law and, to the extent applicable, international humanitarian law,” Volker Turk, the U.N.’s High Commissioner for Human Rights, said in a statement on Wednesday, adding that those who ordered and carried out the attacks “must be held to account.”

Why Sam Altman Is Leaving OpenAI’s Safety Committee

17 September 2024 at 17:26

OpenAI’s CEO Sam Altman is stepping down from the internal committee that the company created to advise its board on “critical safety and security” decisions amid the race to develop ever more powerful artificial intelligence technology.

The committee, formed in May, had been evaluating OpenAI’s processes and safeguards over a 90-day period. OpenAI published the committee’s recommendations following the assessment on Sept. 16. First on the list: establishing independent governance for safety and security.


As such, Altman, who, in addition to serving on OpenAI’s board, oversees the company’s business operations in his role as CEO, will no longer serve on the safety committee. In line with the committee’s recommendations, OpenAI says the newly independent committee will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University, who joined OpenAI’s board in August. Other members of the committee will include OpenAI board members Quora co-founder and CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Entertainment president Nicole Seligman. Along with Altman, OpenAI’s board chair Bret Taylor and several of the company’s technical and policy experts will also step down from the committee.

Read more: The TIME100 Most Influential People in AI 2024

The committee’s other recommendations include enhancing security measures, being transparent about OpenAI’s work, and unifying the company’s safety frameworks. It also said it would explore more opportunities to collaborate with external organizations, like those used to evaluate OpenAI’s recently released o1 series of reasoning models for dangerous capabilities.

The Safety and Security Committee is not OpenAI’s first stab at creating independent oversight. OpenAI’s for-profit arm, created in 2019, is controlled by a non-profit entity with a “majority independent” board, tasked with ensuring it acts in accordance with its mission of developing safe, broadly beneficial artificial general intelligence (AGI)—a system that surpasses humans in most regards.

In November, OpenAI’s board fired Altman, saying that he had not been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” After employees and investors revolted—and board member and company president Greg Brockman resigned—he was swiftly reinstated as CEO, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever resigned. Brockman later returned as president of the company.

Read more: A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

The incident highlighted a key challenge for the rapidly growing company. Critics including Toner and McCauley argue that having a formally independent board isn’t enough of a counterbalance to the strong profit incentives the company faces. Earlier this month, Reuters reported that OpenAI’s ongoing fundraising efforts, which could catapult its valuation to $150 billion, might hinge on changing its corporate structure.

Toner and McCauley say board independence doesn’t go far enough and that governments must play an active role in regulating AI. “Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable,” the former board members wrote in the Economist in May, reflecting on OpenAI’s November boardroom debacle. 

In the past, Altman has urged regulation of AI systems, but OpenAI also lobbied against California’s AI bill, which would mandate safety protocols for developers. Going against the company’s position, more than 30 current and former OpenAI employees have publicly supported the bill.

The Safety and Security Committee’s establishment in late May followed a particularly tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company’s “superalignment” team, resigned. The team, which focused on ensuring that AI systems remain under human control even if they surpass human-level intelligence, was disbanded following their departure. Leike accused OpenAI of prioritizing “shiny products” over safety in a post on X. The same month, OpenAI came under fire for asking departing employees to sign agreements preventing them from criticizing the company or else forfeit their vested equity. (OpenAI later said that these provisions had not and would not be enforced and that they would be removed from all exit paperwork going forward.)

Elon Musk’s New AI Data Center Raises Alarms Over Pollution

17 September 2024 at 15:48

In July, Elon Musk made a bold prediction: that his artificial intelligence startup xAI would release “the most powerful AI in the world,” a model called Grok 3, by this December. The bulk of that AI’s training, Musk said, would happen at a “massive new training center” in Memphis, which he bragged had been built in 19 days.


But many residents of Memphis were taken by surprise, including city council members who said they were given no input about the project or its potential impacts on the city. Data centers like this one use a vast amount of electricity and water. And in the months since, an outcry has grown among community members and environmental groups, who warn of the plant’s potential negative impact on air quality, water access, and grid stability, especially for nearby neighborhoods that have suffered from industrial pollution for decades. These activists also contend that the company is illegally operating gas turbines.

“This continues a legacy of billion-dollar conglomerates who think that they can do whatever they want to do, and the community is just not to be considered,” KeShaun Pearson, executive director of the nonprofit Memphis Community Against Pollution, tells TIME. “They treat southwest Memphis as just a corporate watering hole where they can get water at a cheaper price and a place to dump all their residue without any real oversight or governance.” 

Some local leaders and utility companies, conversely, contend that xAI will be a boon for local infrastructure, employment, and grid modernization. Given the massive scale of this project, xAI’s foray into Memphis will serve as a litmus test of whether the AI-fueled data center boom might actually improve American infrastructure—or harm the disadvantaged just like so many power-hungry industries of decades past.


“The largest data center on the planet”

In order for AI models to become smarter and more capable, they must be trained on vast amounts of data. Much of this training now happens in massive data centers around the world, which burn through electricity often accessed directly from public power sources. A recent report from Morgan Stanley estimates that data centers will emit three times more carbon dioxide by the end of the decade than if generative AI had not been developed. 

Read More: How AI Is Fueling a Boom in Data Centers and Energy Demand

The first version of Grok launched last year, and Musk has said he hopes it will be an “anti-woke” competitor to ChatGPT. (In practice, for example, this means it is able to generate controversial images that other AI models will not, including Nazi Mickey Mouse.) In recent interviews, Musk has stressed the importance of Grok ingesting as much data as possible to catch up with his competitors. So xAI built its data center, called Colossus, in Southwest Memphis, near Boxtown, a historically Black community, to do the bulk of the training. Ebby Amir, a technologist at xAI, boasted that the new site was “the largest AI datacenter on the planet.”

Local leaders said the plant would offer “good-paying jobs” and “significant additional revenues” for the local utility company. Memphis Mayor Paul Young praised the project in a statement, saying that the new xAI training center would reside on an “ideal site, ripe for investment.” 

But other local officials and community members soon became frustrated with the project’s lack of details. The Greater Memphis Chamber and Memphis Light, Gas and Water Division (MLGW) signed a non-disclosure agreement with xAI, citing privacy of economic development. Some Memphis council members heard about the project on the news. “It’s been pretty astounding the lack of transparency and the pace at which this project has proceeded,” Amanda Garcia, a senior attorney at the Southern Environmental Law Center, says. “We learn something new every week.”

For instance, there’s a major divide between how much electricity xAI wants to use, and how much MLGW can provide. In August, the utility company said that xAI would have access to 50 megawatts of power. But xAI wants to use triple that amount—which, for comparison, is enough energy to power 80,000 households. 

MLGW said in a statement to TIME that xAI is paying for the technical upgrades that enable it to double its power usage—and that in order for the company to reach the full 150 megawatts, there will need to be $1.7 million in improvements to a transmission line. “There will be no impact to the reliability or availability of power to other customers from this electric load,” the company wrote. It also added that xAI would be required to reduce its electricity consumption during times of peak demand, and that any infrastructure improvement costs would not be borne by taxpayers.

In response to complaints about the lack of communication with council members, MLGW wrote: “xAI’s request does not require approvals from the MLGW Board of Commissioners or City Council.” 

But community members worry whether Memphis’s utilities can handle such a large consumer of energy. In the past, the city’s power grid has been forced into rolling blackouts by ice storms and other severe weather events.

And Garcia, at the SELC, says that while xAI waits for more power to become available, the company has turned to unpermitted measures to sate its demand, installing gas combustion turbines on the site that it is operating without a permit. Garcia says the SELC has observed the installation of 18 such turbines, which have the capacity to emit 130 tons of harmful nitrogen oxides per year. The SELC and community groups sent a letter to the Shelby County Health Department demanding their removal—but the health department responded by claiming the turbines were outside its authority, and referred them to the EPA. The EPA told NPR that it was “looking into the matter.” A representative for xAI did not immediately respond to a request for comment.

Much of Memphis is already smothered by harmful pollution. The American Lung Association currently gives Shelby County, which contains Memphis, an “F” grade for its smog levels, writing, “the air you breathe may put your health at risk.” A local TV report this year named Boxtown the most polluted neighborhood in Memphis, especially during the summer.

Boxtown and its surrounding neighborhoods have historically suffered from poverty and pollution. Southwest Memphis’s cancer rate is four times the national average, according to a 2013 study, and life expectancy in at least one South Memphis neighborhood is 10 years lower than other parts of the city, a 2020 study found. The Tennessee Valley Authority has been dumping contaminated coal ash in a nearby landfill. And a Sterilization Services of Tennessee facility was finally closed last year after emitting ethylene oxide into the air for decades, which the EPA linked to increased cancer risk in South Memphis.

A representative for the Greater Memphis Chamber, which worked to bring xAI to Memphis, wrote to TIME in response to a request for comment: “We will not be participating in your narrative.”


Potential impact on water

Environmentalists are also concerned about the facility’s use of water. “Industries are attracted to us because we have some of the purest water in the world, and it is dirt cheap to access,” says Sarah Houston, the executive director of the local environmental nonprofit Protect Our Aquifer.

Data centers use water to cool their computers and stop them from overheating. So far xAI has drawn 30,000 gallons from the Memphis Sand Aquifer, the region’s drinking water supply, every day since beginning its initial operations, according to MLGW—which added that the company’s water usage would have “no impact on the availability of water to other customers.”

But Houston and other environmentalists are especially concerned because Memphis’s water infrastructure is more than a century old and has failed several winters in a row, leading to boil advisories and pleas for residents to conserve water during times of stress. “xAI is just an additional industrial user pumping this 2,000-year-old pure water for a non-drinking purpose,” Houston says. “When you’re cooling supercomputers, it doesn’t seem to warrant this super pure ancient water that we will never see again.”

Memphis’s drinking water has also been threatened by contamination. In 2022, the Environmental Integrity Project and Earthjustice claimed that a now-defunct coal plant in Memphis was leaking arsenic and other dangerous chemicals into the groundwater supply, and ranked it as one of the 10 worst contaminated coal ash sites in the country. And because xAI sits close to the contaminated well in question, Houston warns that its heavy water usage could exacerbate the problem. “The more you pump, the faster contaminants get pulled down towards the water supply,” she says.

MLGW contends that xAI’s use of Memphis’s drinking water is temporary, because xAI is assisting in “the design and proposed construction” of a graywater facility that will treat wastewater so that it can be used to cool data center machines. MLGW is also trying to get Musk to provide a Tesla Megapack, a utility-scale battery, as part of the development.

Houston says that these solutions will be beneficial to the city—if they come to fruition. “We fully support xAI coming to the table and being a part of this solution,” she says. “But right now, it’s been empty promises.”

“We’re not opposed to ethical economic development and business moving into town,” says Garcia. “But we need some assurance that it’s not going to make what is already an untenable situation worse.” 

Disproportionate harm

For Pearson, of Memphis Community Against Pollution, who grew up in Boxtown, the arrival of xAI is concerning because he says he’s seen how other major corporations have treated the area. Over the years, Memphis has dangled tax breaks and subsidies to persuade industrial companies to set up shop nearby. But many of those projects have not led to lasting economic development, and have seemingly contributed to an array of health problems among nearby residents.

For instance, city, county and state officials lured the Swedish home appliance manufacturer Electrolux to Memphis in 2013 with $188 million in subsidies. The company’s president told NPR that it intended to provide good jobs and stay there long-term. Six years later, the company announced it would shut down its facility to consolidate resources at another location, laying off more than 500 employees, in a move that blindsided even Mayor Jim Strickland. Now, xAI has taken over that Electrolux plant, which spans 750,000 square feet.

“Companies choose Memphis because they believe it is the path of least resistance: They come here, build factories, pollute the air, and move on,” Pearson says.

Pearson says that community organizations in southwest Memphis have had no contact or dialogue with xAI about its plans for the area whatsoever: there has been no recruiting in the community for jobs, nor any workforce-development training. When presented with claims that xAI will economically benefit the local community, he harbors many doubts.

“This is the same playbook, and the same talking points passed down and passed around by these corporate colonialists,” Pearson says. “For us, it is empty, it’s callous, and it’s just disingenuous to continue to regurgitate these things without actually having plans of implementation or inclusion.”

Instagram Introduces Teen Accounts, Other Sweeping Changes to Boost Child Safety Online

17 September 2024 at 12:40

Instagram is introducing separate teen accounts for those under 18 as it tries to make the platform safer for children amid a growing backlash against how social media affects young people’s lives.

Beginning Tuesday in the U.S., U.K., Canada and Australia, anyone under 18 who signs up for Instagram will be placed into a teen account, and those with existing accounts will be migrated over the next 60 days. Teens in the European Union will see their accounts adjusted later this year.


Meta acknowledges that teenagers may lie about their age and says it will require them to verify their ages in more instances, like if they try to create a new account with an adult birthday. The Menlo Park, California company also said it is building technology that proactively finds teen accounts that pretend to be grownups and automatically places them into the restricted teen accounts.

Read More: The U.S. Surgeon General Fears Social Media Is Harming the ‘Well-Being of Our Children’

The teen accounts will be private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to. “Sensitive content,” such as videos of people fighting or those promoting cosmetic procedures, will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a “sleep mode” will be enabled that turns off notifications and sends auto-replies to direct messages from 10 p.m. until 7 a.m.

While these settings will be turned on for all teens, 16 and 17-year-olds will be able to turn them off. Kids under 16 will need their parents’ permission to do so.

“The three concerns we’re hearing from parents are that their teens are seeing content that they don’t want to see or that they’re getting contacted by people they don’t want to be contacted by or that they’re spending too much time on the app,” said Naomi Gleit, head of product at Meta. “So teen accounts is really focused on addressing those three concerns.”

The announcement comes as the company faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In the past, Meta’s efforts at addressing teen safety and mental health on its platforms have been met with criticism that the changes don’t go far enough. For instance, while kids will get a notification when they’ve spent 60 minutes on the app, they will be able to bypass it and continue scrolling.

That’s unless the child’s parents turn on “parental supervision” mode, where parents can limit teens’ time on Instagram to a specific amount of time, such as 15 minutes.

With the latest changes, Meta is giving parents more options to oversee their kids’ accounts. Those under 16 will need a parent or guardian’s permission to change their settings to less restrictive ones. They can do this by setting up “parental supervision” on their accounts and connecting them to a parent or guardian.

Nick Clegg, Meta’s president of global affairs, said last week that parents don’t use the parental controls the company has introduced in recent years.

Gleit said she thinks teen accounts will create a “big incentive for parents and teens to set up parental supervision.”

“Parents will be able to see, via the family center, who is messaging their teen and hopefully have a conversation with their teen,” she said. “If there is bullying or harassment happening, parents will have visibility into who their teen’s following, who’s following their teen, who their teen has messaged in the past seven days and hopefully have some of these conversations and help them navigate these really difficult situations online.”

U.S. Surgeon General Vivek Murthy said last year that tech companies put too much on parents when it comes to keeping children safe on social media.

“We’re asking parents to manage a technology that’s rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage,” Murthy said in May 2023.

Meta Is Globally Banning Russian State Media on Its Apps, Citing ‘Foreign Interference’

17 September 2024 at 08:00

Social media company Meta—the parent company of Facebook, Instagram, and WhatsApp—announced Monday that it will ban RT and other Russian state media from its apps worldwide, days after the State Department announced sanctions against Kremlin-coordinated news organizations.

“After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a statement provided to TIME.


Before the ban, RT had over 7 million followers on Facebook, while its Instagram account had over a million followers.

The move is an escalation of actions Meta announced in 2022, after Russia’s invasion of Ukraine, to limit the spread of Russian disinformation, which at the time included labeling and demoting posts with links to Russian state-controlled media outlets and demonetizing the accounts of those outlets and prohibiting them from running ads. The company also complied with E.U. and U.K. government requests to restrict access to RT and Sputnik in those territories. In response, in March 2022, Russia blocked access to Facebook and Instagram in the country. 

Meta’s latest actions come after Secretary of State Antony Blinken said in a press conference on Friday that the U.S. government has concluded Rossiya Segodnya and five of its subsidiaries, including RT, “are no longer merely firehoses of Russian Government propaganda and disinformation; they are engaged in covert influence activities aimed at undermining American elections and democracies, functioning like a de facto arm of Russia’s intelligence apparatus.” Sanctions unveiled Friday were imposed on RT’s parent company TV-Novosti as well as on Rossiya Segodnya and its general director Dmitry Kiselyov, and the State Department issued a notice “alerting the world to RT’s covert global activities.” Russian President Vladimir Putin’s spokesperson, Dmitry Peskov, told the Associated Press that the State Department’s allegations were “nonsense.”

Meta’s new global ban follows a similar YouTube global ban on Russian state-funded media channels, while TikTok and X (formerly Twitter) block access to RT and Sputnik in the E.U. and U.K.

At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

17 September 2024 at 04:55

Inventor and futurist Ray Kurzweil, researcher and Brookings Institution fellow Chinasa T. Okolo, director of the U.S. Artificial Intelligence Safety Institute (AISI) Elizabeth Kelly, and Cognizant CEO Ravi Kumar S discussed the transformative power of AI during a panel at a TIME100 Impact Dinner in San Francisco on Monday. During the discussion, which was moderated by TIME’s editor-in-chief Sam Jacobs, Kurzweil predicted that we will achieve Artificial General Intelligence (AGI), a type of AI that might be smarter than humans, by 2029.


“Nobody really took it seriously until now,” Kurzweil said about AI. “People are convinced it’s going to either endow us with things we’d never had before, or it’s going to kill us.”

Cognizant sponsored Monday’s event, which celebrated the 100 most influential people leading change in AI. The TIME100 AI spotlights computer scientists, business leaders, policymakers, advocates, and others at the forefront of big changes in the industry. Jacobs probed the four panelists—three of whom were named to the 2024 list—about the opportunities and challenges presented by AI’s rapid advancement.

Kumar discussed the potential economic impact of generative AI and cited a new report from Cognizant which says that generative AI could add more than a trillion dollars annually to the US economy by 2032. He identified key constraints holding back widespread adoption, including the need for improved accuracy, cost-performance, responsible AI practices, and explainable outputs. “If you don’t get productivity,” he said, “task automation is not going to lead to a business case stacking up behind it.”

Okolo highlighted the growth of AI initiatives in Africa and the Global South, citing the work of professor Vukosi Marivate from the University of Pretoria in South Africa, who has inspired a new generation of researchers within and outside the continent. However, Okolo acknowledged the mixed progress in improving the diversity of languages informing AI models, with grassroots communities in Africa leading the charge despite limited support and funding.

Kurzweil said that he was excited about the potential of simulated biology to revolutionize drug discovery and development. By simulating billions of interactions in a matter of days, he noted, researchers can accelerate the process of finding treatments for diseases like cancer and Alzheimer’s. He also provided a long-term perspective on the exponential growth of computational power, predicting a sharper so-called S-curve (a slow start, then rapid growth before leveling off) for AI disruption compared to previous technological revolutions.

Read more: The TIME100 Most Influential People in AI 2024

Kelly addressed concerns about AI’s potential for content manipulation in the context of the 2024 elections and beyond. “It’s going to matter this year, but it’s going to matter every year more and more as we move forward,” she noted. She added that AISI is working to advance the science to detect synthetically created content and authenticate genuine information.

Kelly also noted that lawmakers have been focusing on AI’s risks and benefits for some time, with initiatives like the AI Bill of Rights and the AI Risk Management Framework. “The president likes to use the phrase ‘promise and peril,’ which I think pretty well captures it, because we are incredibly excited about simulated biology and drug discovery and development while being aware of the flip side risks,” she said.

As the panel drew to a close, Okolo urged attendees, who included nearly 50 other past and present TIME100 AI honorees, to think critically about how they develop and apply AI and to try to ensure that it reaches people in underrepresented regions in a positive way.

“A lot of times you talk about the benefits that AI has brought, you know, to people. And a lot of these people are honestly concentrated in one region of the world,” she said. “We really have to look back, or maybe, like, step back and think broader,” she implored, asking leaders in the industry to think about people from Africa to South America to South Asia and Southeast Asia. “How can they benefit from these technologies, without necessarily exploiting them in the process?”

The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.

At TIME100 Impact Dinner, AI Leaders Talk Reshaping the Future of AI

17 September 2024 at 04:55

TIME hosted its inaugural TIME100 Impact Dinner: Leaders Shaping the Future of AI, in San Francisco on Monday evening. The event kicked off a weeklong celebration of the TIME100 AI, a list that recognizes the 100 most influential individuals in artificial intelligence across industries and geographies and showcases the technology’s rapid evolution and far-reaching impact. 

TIME CEO Jessica Sibley set the tone for the evening, highlighting the diversity and dynamism of the 2024 TIME100 AI list. With 91 newcomers from last year’s inaugural list and honorees ranging from 15 to 77 years old, the group reflects the field’s explosive growth and its ability to attract talent from all walks of life.


Read More: At TIME100 Impact Dinner, AI Leaders Discuss the Technology’s Transformative Potential

The heart of the evening centered around three powerful toasts delivered by distinguished AI leaders, each offering a unique perspective on the transformative potential of AI and the responsibilities that come with it.

Reimagining power structures

Amba Kak, co-executive director of the AI Now Institute, delivered a toast that challenged attendees to look beyond the technical aspects of AI and consider its broader societal implications. Kak emphasized the “mirror to the world” quality of AI, reflecting existing power structures and norms through data and design choices.

“The question of ‘what kind of AI we want’ is really an opening to revisit the more fundamental question of ‘what is the kind of world we want, and how can AI get us there?’” Kak said. She highlighted the importance of democratizing AI decision-making, ensuring that those affected by AI systems have a say in their deployment.

Kak said she drew inspiration from frontline workers and advocates pushing back against the misuse of AI, including nurses’ unions staking their claim in clinical AI deployment and artists defending human creativity. Her toast served as a rallying cry for a more inclusive and equitable AI future.


Amplifying creativity and breaking barriers

Comedian, filmmaker, and AI storyteller King Willonius emphasized AI’s role in lowering the barriers to creativity and giving voice to underrepresented communities. Willonius shared his personal journey of discovery with AI-assisted music composition, illustrating how AI can unlock new realms of creative expression.

“AI doesn’t just automate—it amplifies,” he said. “It breaks down barriers, giving voices to those who were too often left unheard.” He highlighted the work of his company, Blerd Factory, in leveraging AI to empower creators from diverse backgrounds.

Willonius’ toast struck a balance between enthusiasm for AI’s creative potential and a call for responsible development. He emphasized the need to guide AI technology in ways that unite rather than divide, envisioning a future where AI fosters empathy and global connection.


Accelerating scientific progress

AMD CEO Lisa Su delivered a toast that underscored AI’s potential to address major global challenges. Su likened the current AI revolution to the dawn of the industrial era or the birth of the internet, emphasizing the unprecedented pace of innovation in the field.

She painted a picture of AI’s transformative potential across various domains, from materials science to climate change research, and said that she was inspired by AI’s applications in healthcare, envisioning a future where AI accelerates disease identification, drug development, and personalized medicine.

“I can see the day when we accelerate our ability to identify diseases, develop therapeutics, and ultimately find cures for the most important illnesses in the world,” Su said. Her toast was a call to action for leaders to seize the moment and work collaboratively to realize AI’s full potential while adhering to principles of transparency, fairness, and inclusion.


The TIME100 Impact Dinner: Leaders Shaping the Future of AI was presented by Cognizant and Northern Data Group.
