Tesla is Recalling 125,000 Vehicles For a Seat Belt Signal Issue. Here’s What We Know

Tesla announced a recall of more than 125,000 vehicles over a seat belt warning system defect that could put drivers at increased risk of injury on the road.

Under National Highway Traffic Safety Administration (NHTSA) federal guidelines, vehicles are required to have audible and visual seat belt reminder signals to notify drivers that their seat belt isn’t properly fastened. In the recalled Tesla vehicles, those signals did not activate when they should have, the NHTSA said in a report released Thursday.

This recall adds to the roughly 2.5 million vehicles Tesla has recalled so far this year.

In January, Tesla recalled certain 2023 Model Y, S, and X vehicles due to a software issue that prevented the rearview camera image from displaying while the vehicles were in reverse. The following month, Tesla recalled almost 2.2 million vehicles because of an “incorrect font size” on the instrument panel’s brake, park, and antilock brake system warning lights. Then, in April, Tesla recalled all model year 2024 Cybertrucks made between November 13, 2023, and April 4, 2024, due to faulty accelerator pedals.

Tesla also had the highest accident rate of any car brand in 2023, according to a LendingTree analysis last year, with 23.54 accidents per 1,000 drivers.

Here’s what we know about the most recent Tesla recall.

What exactly is defective in the models?

On certain vehicles running specific seat belt software, the required audible and visual seat belt reminder does not activate even when the driver’s seat belt is not fastened, because of faulty tracking of driver’s-seat occupancy.

When did Tesla first find out about the discrepancy?

On April 18, Tesla identified the discrepancy with seat belt reminders as part of an internal compliance audit of the 2024 Tesla Model X, and then investigated the condition through the rest of April and May.

After Tesla completed its investigation in late May, the company voluntarily recalled the affected vehicles.

Which models are impacted by the recall?

The Tesla models impacted by the recall include 2012-2024 Model S, 2015-2024 Model X, 2017-2023 Model 3, and 2020-2023 Model Y vehicles equipped with the defective seat belt logic.

Have there been any collisions, injuries or fatalities as a result of the issue?

In the NHTSA report, Tesla stated that, as of May 28, it had identified 104 warranty claims that may be related to the condition, but the company says it is not aware of any collisions, injuries, or fatalities related to the issue.

How will Tesla remedy the problem for customers?

Tesla plans to send a free over-the-air (OTA) software update to customers with affected vehicles in June of this year. The update will fix the issue by relying on the driver’s seat belt buckle and the ignition status to trigger seat belt reminders.
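Tesla has not published the internals of the update, but as a rough, purely illustrative sketch of the reminder logic described above (every signal name here is a hypothetical placeholder, not Tesla code):

```python
# Purely illustrative sketch -- not Tesla's code. The signal names
# (ignition_on, driver_buckle_latched) are hypothetical placeholders.

def seat_belt_reminder_active(ignition_on: bool, driver_buckle_latched: bool) -> bool:
    """Return True when the audible/visual seat belt reminder should fire.

    Per the recall description, the updated logic keys off the driver's
    buckle latch and the ignition status rather than the previously
    faulty seat-occupancy signal.
    """
    return ignition_on and not driver_buckle_latched


# Example: ignition on and belt unbuckled -> the reminder fires.
print(seat_belt_reminder_active(True, False))  # True
print(seat_belt_reminder_active(True, True))   # False
```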

OpenAI Says Russia, China, and Israel Are Using Its Tools for Foreign Influence Campaigns

OpenAI identified and removed five covert influence operations based in Russia, China, Iran and Israel that were using its artificial intelligence tools to manipulate public opinion, the company said on Thursday.

In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used the company’s tools for a variety of “deceptive activities.” These included generating social media comments, articles, and images in multiple languages, creating names and biographies for fake accounts, debugging code, and translating and proofreading texts. These networks focused on a range of issues, including defending the war in Gaza and Russia’s invasion of Ukraine, criticizing Chinese dissidents, and commenting on politics in India, Europe, and the U.S. in their attempts to sway public opinion. While these influence operations targeted a wide range of online platforms, including X (formerly known as Twitter), Telegram, Facebook, Medium, Blogspot, and other sites, none managed to “engage a substantial audience,” according to OpenAI analysts.

The report, the first of its kind released by the company, comes amid global concerns about the potential impact AI tools could have on the more than 64 elections happening around the world this year, including the U.S. presidential election in November. In one example cited in the report, a post by a Russian group on Telegram read, “I’m sick of and tired of these brain damaged fools playing games while Americans suffer. Washington needs to get its priorities straight or they’ll feel the full force of Texas!”

The examples listed by OpenAI analysts reveal how foreign actors largely appear to be using AI tools for the same types of online influence operations they have been carrying out for a decade. They focus on using fake accounts, comments, and articles to shape public opinion and manipulate political outcomes. “These trends reveal a threat landscape marked by evolution, not revolution,” Ben Nimmo, the principal investigator on OpenAI’s Intelligence and Investigations team, wrote in the report. “Threat actors are using our platform to improve their content and work more efficiently.”

Read More: Hackers Could Use ChatGPT to Target 2024 Elections

OpenAI, which makes ChatGPT, says it now has more than 100 million weekly active users. Its tools make it easier and faster to produce a large volume of content, and can be used to mask language errors and generate fake engagement. 

One of the Russian influence campaigns shut down by OpenAI, dubbed “Bad Grammar” by the company, used its AI models to debug code to run a Telegram bot that created short political comments in English and Russian. The operation targeted Ukraine, Moldova, the U.S. and Baltic States, the company says. Another Russian operation known as “Doppelganger,” which the U.S. Treasury Department has linked to the Kremlin, used OpenAI’s models to generate headlines and convert news articles to Facebook posts, and create comments in English, French, German, Italian, and Polish. A known Chinese network, Spamouflage, also used OpenAI’s tools to research social media activity and generate text in Chinese, English, Japanese, and Korean that was posted across multiple platforms including X, Medium, and Blogspot. 

OpenAI also detailed how a Tel Aviv-based Israeli political marketing firm called Stoic used its tools to generate pro-Israel content about the war in Gaza. The campaign, nicknamed “Zero Zeno,” targeted audiences in the U.S., Canada, and Israel. On Wednesday, Meta, Facebook and Instagram’s parent company, said it had removed 510 Facebook accounts and 32 Instagram accounts tied to the same firm. The cluster of fake accounts, which included accounts posing as African Americans and students in the U.S. and Canada, often replied to prominent figures or media organizations in posts praising Israel, criticizing anti-semitism on campuses, and denouncing “radical Islam.” It seems to have failed to reach any significant engagement, according to OpenAI. “Look, it’s not cool how these extremist ideas are, like, messing with our country’s vibe,” reads one post in the report.

OpenAI says it is using its own AI-powered tools to more efficiently investigate and disrupt these foreign influence operations. “The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling,” the company said on Thursday. They also noted that despite the rapid evolution of AI tools, human error remains a factor. “AI can change the toolkit that human operators use, but it does not change the operators themselves,” OpenAI said. “While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.”

How Anthropic Designed Itself to Avoid OpenAI’s Mistakes

Anthropic CEO Dario Amodei testifies during a hearing before the Privacy, Technology, and the Law Subcommittee of Senate Judiciary Committee at Dirksen Senate Office Building on Capitol Hill, in Washington, D.C., on July 25, 2023.

Last Thanksgiving, Brian Israel found himself being asked the same question again and again.

The general counsel at the AI lab Anthropic had been watching dumbfounded along with the rest of the tech world as, just two miles south of Anthropic’s headquarters in San Francisco, its main competitor OpenAI seemed to be imploding.

OpenAI’s board had fired CEO Sam Altman, saying he had lost their confidence, in a move that seemed likely to tank the startup’s $80 billion-plus valuation. The firing was only possible thanks to OpenAI’s strange corporate structure, in which its directors have no fiduciary duty to increase profits for shareholders—a structure Altman himself had helped design so that OpenAI could build powerful AI insulated from perverse market incentives. To many, it appeared that plan had badly backfired. Five days later, after a pressure campaign from OpenAI’s main investor Microsoft, venture capitalists, and OpenAI’s own staff—who held valuable equity in the company—Altman was reinstated as CEO, and two of the three directors who fired him resigned. “AI belongs to the capitalists now,” the New York Times concluded, as OpenAI began to build a new board that seemed more befitting of a high-growth company than a research lab concerned about the dangers of powerful AI.

And so Israel found himself being frantically asked by Anthropic’s investors and clients that weekend: Could the same thing happen at Anthropic?

Anthropic, which like OpenAI is a top AI lab, has an unorthodox corporate structure too. The company similarly structured itself in order to ensure it could develop AI without needing to cut corners in pursuit of profits. But that’s pretty much where the likeness ends. To everybody with questions on Thanksgiving, Israel’s answer was the same: what happened at OpenAI can’t happen to us.

Read More: Inside Anthropic, the AI Company Betting That Safety Can Be a Winning Strategy

Prior to the OpenAI disaster, questions about the corporate governance of AI seemed obscure. But it’s now clear that the structure of AI companies has vital implications for who controls what could be the 21st century’s most powerful technology. As AI grows more powerful, the stakes are only getting higher. Earlier in May, two OpenAI leaders on the safety side of the company quit. In a leaving statement one of them, Jan Leike, said that safety had “taken a backseat to shiny products,” and said that OpenAI needed a “cultural change” if it were going to develop advanced AI safely. On Tuesday, Leike announced he had moved to Anthropic. (Altman acknowledged Leike’s criticisms, saying “we have a lot more to do; we are committed to doing it.”)

Anthropic prides itself on being structured differently from OpenAI, but a question mark hangs over its future. Anthropic has raised $7 billion in the last year, mostly from Amazon and Google—big tech companies that, like Microsoft and Meta, are racing to secure dominance over the world of AI. At some point it will need to raise even more. If Anthropic’s structure isn’t strong enough to withstand pressure from those corporate juggernauts, it may struggle to prevent its AI from becoming dangerous, or might allow its technology to fall into Big Tech’s hands. On the other hand, if Anthropic’s governance structure turns out to be more robust than OpenAI’s, the company may be able to chart a new course—one where AI can be developed safely, protected from the worst pressures of the free market, and for the benefit of society at large.

Anthropic’s seven co-founders all previously worked at OpenAI. In his former role as OpenAI’s vice president for research, Anthropic CEO Dario Amodei even wrote the majority of OpenAI’s charter, the document that commits the lab and its workers to pursue the safe development of powerful AI. To be sure, Anthropic’s co-founders left OpenAI in 2021, well before the problems with its structure burst into the open with Altman’s firing. But their experience made them want to do things differently. Watching the meltdown that happened last Thanksgiving made Amodei feel that Anthropic’s governance structure “was the right approach,” he tells TIME. “The way we’ve done things, with all these checks and balances, puts us in a position where it’s much harder for something like that to happen.”

From left: Paul Christiano, Dario Amodei, and Geoffrey Irving write equations on a whiteboard at OpenAI, the artificial intelligence lab founded by Elon Musk, in San Francisco, July 10, 2017.

Still, the high stakes have led many to question why novel and largely untested corporate governance structures are the primary constraint on the behavior of companies attempting to develop advanced AI. “Society must not let the roll-out of AI be controlled solely by private tech companies,” wrote Helen Toner and Tasha McCauley, two former OpenAI board members who voted to fire Altman last year, in a recent article in The Economist. “There are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.”


A ‘public benefit corporation’

Unlike OpenAI, which essentially operates as a capped-profit company governed by a nonprofit board that is not accountable to the company’s shareholders, Anthropic is structured more like a traditional company. It has a board that is accountable to shareholders, including Google and Amazon, which between them have invested some $6 billion into Anthropic. (Salesforce, where TIME co-chair and owner Marc Benioff is CEO, has made a smaller investment.) But Anthropic makes use of a special element of Delaware corporate law. It is not a limited company, but a public benefit corporation (PBC), which means that as well as having a fiduciary obligation to increase profits for shareholders, its board also has legal room to follow a separate mission: to ensure that “transformative AI helps people and society flourish.” What that essentially means is that shareholders would find it more difficult to sue Anthropic’s board if the board chose to prioritize safety over increasing profits, Israel says. 

There is no obvious mechanism, however, for the public to sue Anthropic’s board members for not pursuing its public benefit mission strongly enough. “To my knowledge, there’s no way for the public interest to sue you to enforce that,” Israel says. The PBC structure gives the board “a flexibility, not a mandate,” he says.

The conventional wisdom that venture capitalists pass on to company founders is: innovate on your product, but don’t innovate on the structure of your business. But Anthropic’s co-founders decided at the company’s founding in 2021 to disregard that advice, reasoning that if AI was as powerful as they believed it could be, the technology would require new governance structures to ensure it benefited the public. “Many things are handled very well by the market,” Amodei says. “But there are also externalities, the most obvious ones being the risks of AI models [developing] autonomy, but also national security questions, and other things like whether they break or bend the economy in ways we haven’t seen before. So I wanted to make sure that the company was equipped to handle that whole range of issues.”

Being at the “frontier” of AI development—building bigger models than have ever been built before, and which could carry unknown capabilities and risks—required extra care. “There’s a very clear economic advantage to time in the market with the best [AI] model,” Israel says. On the other hand, he says, the more time Anthropic’s safety researchers can spend testing a model after it has been trained, the more confident they can be that launching it would be safe. “The two are at least theoretically in tension,” Israel says. “It was very important to us that we not be railroaded into [launching] a model that we’re not sure is safe.”

The Long Term Benefit Trust

To Anthropic’s founders, structuring the company as a public benefit corporation was a good first step, but didn’t address the question of who should be on the company’s board. To answer this question, they decided in 2023 to set up a separate body, called the Long Term Benefit Trust (LTBT), which would ultimately gain the power to elect and fire a majority of the board.

 The LTBT, whose members have no equity in the company, currently elects one out of the board’s five members. But that number will rise to two out of five this July, and then to three out of five this November—in line with fundraising milestones that the company has now surpassed, according to Israel and a copy of Anthropic’s incorporation documents reviewed by TIME. (Shareholders with voting stock elect the remaining board members.)

The LTBT’s first five members were picked by Anthropic’s executives for their expertise in three fields that the company’s co-founders felt were important to its mission: AI safety, national security, and social enterprise. Among those selected were Jason Matheny, CEO of the RAND corporation, Kanika Bahl, CEO of development nonprofit Evidence Action, and AI safety researcher Paul Christiano. (Christiano resigned from the LTBT prior to taking a new role in April leading the U.S. government’s new AI Safety Institute, he said in an email. His seat has yet to be filled.) On Wednesday, Anthropic announced that the LTBT had elected its first member of the company’s board: Jay Kreps, the co-founder and CEO of data company Confluent.

The LTBT receives advance notice of “actions that could significantly alter the corporation or its business,” Anthropic says, and “must use its powers to ensure that Anthropic responsibly balances the financial interests of stockholders with the interests of those affected by Anthropic’s conduct and our public benefit purpose.” 

“Anthropic will continue to be overseen by its board, which we expect will make the decisions of consequence on the path to transformative AI,” the company says in a blog post on its website. But “in navigating these decisions, a majority of the board will ultimately have accountability to the Trust as well as to stockholders, and will thus have incentives to appropriately balance the public benefit with stockholder interests.”

However, even the board members who are selected by the LTBT owe fiduciary obligations to Anthropic’s stockholders, Israel says. This nuance means that the board members appointed by the LTBT could probably not pull off an action as drastic as the one taken by OpenAI’s board members last November. It’s one of the reasons Israel was so confidently able to say, when asked last Thanksgiving, that what happened at OpenAI could never happen at Anthropic. But it also means that the LTBT ultimately has a limited influence on the company: while it will eventually have the power to select and remove a majority of board members, those members will in practice face similar incentives to the rest of the board. 

Company leaders, and a former advisor, emphasize that Anthropic’s structure is experimental in nature. “Nothing exactly like this has been tried, to my knowledge,” says Noah Feldman, a Harvard Law professor who served as an outside consultant to Anthropic when the company was setting up the earliest stages of its governance structure. “Even the best designs in the world sometimes don’t work,” he adds. “But this model has been designed with a tremendous amount of thought … and I have great hopes that it will succeed.”

The Amazon and Google question

According to Anthropic’s incorporation documents, there is a caveat to the agreement governing the Long Term Benefit Trust. If a supermajority of shareholders votes to do so, they can rewrite the rules that govern the LTBT without the consent of its five members. This mechanism was designed as a “failsafe” to account for the possibility of the structure being flawed in unexpected ways, Anthropic says. But it also raises the specter that Google and Amazon could force a change to Anthropic’s corporate governance.

But according to Israel, this would be impossible. Amazon and Google, he says, do not own voting shares in Anthropic, meaning they cannot elect board members and their votes would not be counted in any supermajority required to rewrite the rules governing the LTBT. (Holders of Anthropic’s Series B stock, much of which was initially bought by the defunct cryptocurrency exchange FTX, also do not have voting rights, Israel says.) 

Google and Amazon each own less than 15% of Anthropic, according to a person familiar with the matter. Amodei emphasizes that Amazon and Google’s investments in Anthropic are not in the same ballpark as Microsoft’s deal with OpenAI, where the tech giant has an agreement to receive 49% of OpenAI’s profits until its $13 billion investment is paid back. “It’s just worlds apart,” Amodei says. He acknowledges that Anthropic will likely have to raise more money in the future, but says that the company’s ability to punch above its weight will allow it to remain competitive with better-resourced rivals. “As long as we can do more with less, then in the end, the resources are going to find their way to the innovative companies,” he tells TIME.

Still, uncomfortable tradeoffs may loom in Anthropic’s future—ones that even the most well-considered governance structure cannot solve for. “The overwhelming priority at Anthropic is to keep up at the frontier,” says Daniel Colson, the executive director of the AI Policy Institute, a non-profit research group, referring to the lab’s belief that it must train its own world-leading AI models to do good safety research on them. But what happens when Anthropic’s money runs out, and it needs more investment to keep up with the big tech companies? “I think the manifestation of the board’s fiduciary responsibility will be, ‘OK, do we have to partner with a big tech company to get capital, or swallow any other kind of potential poison pill?’” Colson says. In dealing with such an existential question for the company, Anthropic’s board might be forced to weigh total collapse against some form of compromise in order to achieve what it sees as its long-term mission.

Ultimately, Colson says, the governance of AI “is not something that any corporate governance structure is adequate for.” While he believes Anthropic’s structure is better than OpenAI’s, he says the real task of ensuring that AI is developed safely lies with governments, who must issue binding regulations. “It seems like Anthropic did a good job” on its structure, Colson says. “But are these governance structures sufficient for the development of AGI? My strong sense is definitely no—they are extremely illegitimate.”

Correction, May 30

The original version of this story mischaracterized Brian Israel’s view of the aftermath of Sam Altman’s firing. Many observers concluded that OpenAI’s corporate structure had backfired, but Israel did not say so.

How You Can Avoid Using Meta AI

SAN FRANCISCO — If you use Facebook, WhatsApp or Instagram, you’ve probably noticed a new character pop up answering search queries or eagerly offering tidbits of information in your feeds, with varying degrees of accuracy.

It’s Meta AI, and it’s here to help, at least according to Meta Platforms’ CEO Mark Zuckerberg, who calls it “the most intelligent AI assistant that you can freely use.”

The chatbot can recommend local restaurants, offer more information on something you see in a Facebook post, search for airline flights or generate images in the blink of an eye. If you’re chatting with friends to plan a night out, you can invite it into your group conversation by typing @MetaAI, then ask it to recommend, say, cocktail bars.

Meta’s AI tool has been integrated into chat boxes and search bars throughout the tech giant’s platforms. The assistant appears, for example, at the top of your chat list on Messenger. Ask it questions about anything or to “imagine” something and it will generate a picture or animation.

As with any new technology, there are, of course, hiccups, including bizarre exchanges when the chatbots first started engaging with real people. One joined a Facebook moms’ group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.

Meta AI hasn’t been universally welcomed. Here are some tips if you want to avoid using it.

Can I turn Meta AI off?

Some Facebook users don’t like the chatbot, complaining in online forums that they’re tired of having AI foisted on them all the time or that they just want to stick with what they know. So what if you don’t want Meta AI butting in every time you search for something or scroll through your social feeds? Well, you might need a time machine. Meta and other tech companies are in an AI arms race, churning out new language models and persuading — some might say pressuring — the public to use them.

The bad news is there’s no one button to turn off Meta AI on Facebook, Instagram, Messenger or WhatsApp. However, if you want to limit it, there are some (imperfect) workarounds.

How to mute Meta AI

On the Facebook mobile app, tap the “search” button. You may get a prompt to “Ask Meta AI anything.” Tap the blue triangle on the right, then the blue circle with an “i” inside it. Here, you’ll see a “mute” button, with options to silence the chatbot for 15 minutes or longer, or “Until I change it.” You can do the same on Instagram.

Nonetheless, muting doesn’t get rid of Meta AI completely. Meta AI’s circle logo might still appear where the search magnifying glass used to be — and tapping on it will take you to the Meta AI field. This is now the new way to search in Meta, and just as with Google’s AI summaries, the responses will be generated by AI.

I asked the chatbot about searching Facebook without Meta AI results.

“Meta AI aims to be a helpful assistant and is in the search bar to assist with your questions,” it responded. Then it added, “You can’t disable it from this experience, but you can tap the search button after writing your query and search how you normally would.”

Then I asked a (human) Meta spokesperson. “You can search how you normally would and choose to engage with a variety of results — ones from Meta AI or others that appear as you type,” the spokesperson said in a statement. “And when interacting with Meta AI, you have access to real-time information without having to leave the app you’re using thanks to our search partnerships.”

Like an over-eager personal assistant, Meta AI also pops up under posts on your Facebook news feed, offering more information about what’s discussed in the post — such as the subject of a news article. It’s not possible to disable this feature, so you’ll just have to ignore it.

Use an “old school” version of Facebook

Tech websites have noted that one surefire way to avoid Facebook’s AI assistant is to use the social network’s stripped-down mobile site, mbasic.facebook.com. It’s aimed at people in developing countries using older phones on slower internet connections. The basic site has a retro feel that looks crude compared to the current version, and it looks even worse on desktop browsers, but it still works on a rudimentary level and without AI.

Meta AI in other countries

Meta AI is so far only available in the United States and 13 other countries including Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe. So if you don’t live in any of those places, you don’t have to worry about the chatbot because you don’t get to use it. At least not yet.

International Authorities Arrest Man Allegedly Behind ‘Likely the World’s Largest Botnet Ever’

WASHINGTON — An international law enforcement team has arrested a Chinese national and disrupted a major botnet that officials said he ran for nearly a decade, amassing at least $99 million in profits by reselling access to criminals who used it for identity theft, child exploitation, and financial fraud, including pandemic relief scams.

The U.S. Department of Justice quoted FBI Director Christopher Wray as saying Wednesday that the “911 S5” botnet—a network of malware-infected computers in nearly 200 countries—was “likely the world’s largest.”

The Justice Department said in a news release that Yunhe Wang, 35, was arrested May 24. Wang was arrested in Singapore, and search warrants were executed there and in Thailand, the FBI’s deputy assistant director for cyber operations, Brett Leatherman, said in a LinkedIn post. Authorities also seized $29 million in cryptocurrency, Leatherman said.

Read More: Influencers Are Scamming Their Fans Through Crypto. Here’s How Their Tactics Have Evolved.

Cybercriminals used Wang’s network of zombie residential computers to steal “billions of dollars from financial institutions, credit card issuers and accountholders, and federal lending programs since 2014,” according to an indictment filed in Texas’ eastern district.

The administrator, Wang, sold access to the 19 million Windows computers he hijacked—more than 613,000 in the United States—to criminals who “used that access to commit a staggering array of crimes that victimized children, threatened people’s safety and defrauded financial institutions and federal lending programs,” U.S. Attorney General Merrick Garland said in announcing the takedown.

Read More: Why Gen Z Is Surprisingly Susceptible to Financial Scams

He said criminals who purchased access to the zombie network from Wang were responsible for more than $5.9 billion in estimated losses due to fraud against relief programs. Officials estimated 560,000 fraudulent unemployment insurance claims originated from compromised IP addresses.

Wang allegedly managed the botnet through 150 dedicated servers, half of them leased from U.S.-based online service providers.

The indictment says Wang used his illicit gains to purchase 21 properties in the United States, China, Singapore, Thailand, the United Arab Emirates and St. Kitts and Nevis, where it said he obtained citizenship through investment.

In its news release, the Justice Department thanked police and other authorities in Singapore and Thailand for their assistance.

Why the ‘All Eyes on Rafah’ AI Post Is Going Viral on Social Media

Nearly 45 million Instagram users—including celebrities like Bella Hadid and Nicola Coughlan—have shared an AI-generated image depicting tent camps for displaced Palestinians and a slogan that reads “all eyes on Rafah,” according to a Wednesday afternoon count by Instagram. 

The sharing of the post comes amid criticism from the international community over the situation in Rafah, which lies in the southern Gaza Strip near the Egyptian border and has been the subject of intense bombing by Israeli troops. Military strikes set shelters on fire, causing Palestinians to dig through charred remains hoping to rescue survivors. At least 45 Palestinians have been killed thus far. Rafah was previously deemed a humanitarian zone for civilians.

Sarah Jackson, an associate professor at the Annenberg School for Communication at the University of Pennsylvania, tells TIME that the origins of internet activism date back to the ‘90s, when leaders behind the Zapatista uprising circulated information about what was happening on the ground. But currently, Instagram appeals to activists as a platform for social change because of the visual aspect of the app, allowing users to share both videos and photos.

“One of the really important things that we have to acknowledge is that a lot of Palestinian journalists have been using Instagram to share from the ground what has been happening. We know that a lot of those journalists have been directly targeted and censored because of that, but this has been a platform that has been popular with them,” Jackson says.

Jackson points out that many social media activists may have been struggling to share images from Gaza due to algorithmic guidelines that hide graphic content. Instagram says that while it understands why people share this sort of content in certain instances, it encourages people to caption the photo with warnings about graphic violence, per its community guidelines.

Read More: Israel Continues Rafah Strikes Days After 45 Civilians Killed in Bombing

Users may have found a workaround by sharing an AI image. “Many of the images that are coming from the ground are really graphic and gruesome,” she says. “It has been harder and harder for people to actually document what’s happening…and when compelling images are documented, they are often censored at the platform level…it makes sense that folks would turn to AI.”

Instagram user @shahv4012 first shared the “all eyes on Rafah” post on their story. Some have criticized the use of AI for the photo. “There are people who are not satisfied with the picture and template, I apologize if I have made a mistake on all of you,” the user said in an Instagram story. “Whatever [you do], don’t look down on the Rafah issue now, spread it so that they are shaken and afraid of the spread of all of us.”

The slogan on the image likely was inspired by Richard Peeperkorn, the WHO representative for Gaza, who previously said that “all eyes” were on what is happening in Rafah.

While some have pointed out that sharing the AI image does not necessarily mean a user is fully educated on what is happening in Rafah, Jackson says that if the point is to spread awareness, and share that someone is “part of a collective that cares about this issue,” then posting the photo on their story is worthwhile. 

Israel’s decision to launch its military offensive into Rafah came two days after the International Court of Justice (ICJ) ordered Israel to stop its planned assault on Rafah, and has been largely criticized by world leaders.

French President Emmanuel Macron said that he was “outraged” by the Israeli strikes in Rafah. “These operations must stop. There are no safe areas in Rafah for Palestinian civilians. I call for full respect for international law and an immediate ceasefire,” Macron shared on X on Monday. U.N. Secretary General António Guterres reiterated his call for an immediate ceasefire, and for the ICJ order to be complied with.

Israeli Prime Minister Benjamin Netanyahu called the deaths “tragic.” More than 36,000 Palestinians and some 1,500 Israelis have been killed since Hamas attacked Israel on October 7, 2023.

OpenAI Forms Safety Committee as It Starts Training Latest AI Model

OpenAI says it’s setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that they jointly led.

Leike said Tuesday he’s joining rival AI company Anthropic, founded by ex-OpenAI leaders, to “continue the superalignment mission” there.

OpenAI said it has “recently begun training its next frontier model” and its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee’s first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it’s adopting “in a manner that is consistent with safety and security.”

xAI Raises $6 Billion as Elon Musk Aims to Challenge OpenAI

Elon Musk speaks at the Milken Institute's Global Conference in Beverly Hills, California, on May 6, 2024.

Elon Musk’s artificial intelligence startup xAI has raised $6 billion to accelerate its challenge to his former allies at OpenAI.

The Series B round, announced in a blog post on May 26, comes less than a year after xAI’s debut and marks one of the bigger investments in the nascent field of developing AI tools. Musk had been an early supporter of artificial intelligence, backing OpenAI before it introduced ChatGPT in late 2022.

He later withdrew his support from the venture and has advocated caution because of the technology’s potential dangers. He was among a large group of industry leaders urging a pause to AI development last year.

Read More: Inside Elon Musk’s Struggle for the Future of AI

Musk launched a rival to OpenAI’s ChatGPT in November, called Grok, which was trained on and integrated into X.com, the social network formerly known as Twitter. That has so far been the most visible product of xAI’s work, which is led by executives with prior experience at Alphabet Inc.’s DeepMind, Microsoft Corp. and Tesla Inc.

The company intends to use the funds to bring its first products to market, build advanced infrastructure and accelerate the development of future technologies, it said in the blog.

Its pre-money valuation was $18 billion, Musk said in a post on X. Marquee venture capital names including Sequoia Capital and Andreessen Horowitz backed the fundraising, which is one of the largest so far in the industry.

Microsoft Corp. has invested about $13 billion in OpenAI, while Amazon.com Inc. put about $4 billion into Anthropic.
