
Why Colin Kaepernick Is Starting an AI Company

24 July 2024 at 16:00
Colin Kaepernick

When NFL quarterback Colin Kaepernick began kneeling during the national anthem to protest police brutality and racial injustice in 2016, he soon found himself out of a job, eventually moving on to other ventures in media and entertainment. Today, he’s entering the AI industry by launching a project he says he hopes will allow others to bypass “gatekeeping”: an artificial intelligence platform called Lumi.


The new subscription-based platform aims to provide tools for storytellers to create, illustrate, publish and monetize their ideas. The company has raised $4 million in funding led by Alexis Ohanian’s Seven Seven Six, and its product went live today, July 24.

In an interview with TIME, Kaepernick says this project can be viewed as an extension of his activism. “The majority of the world’s stories never come to life. Most people don’t have access or inroads to publishers or platforms—or they may have a gap in their skillset that’s a barrier for them to be able to create,” he says. “We’re going to see a whole new world of stories and perspectives.”

Kaepernick says that the idea for Lumi came out of challenges he faced while building his media company, Ra Vision Media, and his publishing company, Kaepernick Publishing, which included “long production timelines, high costs, and creators not having ownership over the work they create,” he says. When ChatGPT, DALL-E, and other AI models broke through to the mainstream a couple of years ago, Kaepernick started playing with the tools, even trying to use them to create a children’s book. (Last year, Kaepernick published a graphic novel, Change the Game, based on his high school experiences.)

Lumi aims to help independent creators forge hybrid written-illustrated stories, like comics, graphic novels, and manga. The platform is built “on top of foundational models,” Kaepernick says—although he declined to say which ones. (Foundational models are large, multi-purpose machine learning models like the ones that power ChatGPT.) Users interact with a chatbot to create a character, flesh out their backstory and traits, and build a narrative. Then they use an image-generation tool to illustrate the character and their journey. “You can go back and forth with your AI companion and test ideas: ‘I want to change the ending,’ or ‘I want it to be more comedic or dramatic,’” he says.

The users can then publish and distribute their stories right on the Lumi platform, order physical copies, and use AI tools to create and sell merchandise based on their IP. Kaepernick hopes that the platform will appeal to aspiring creators with gaps in their skill sets—whether that means athletes who have a story and an audience but lack illustrating chops, or content creators who are having trouble monetizing their work.

“We talked to hundreds of creators and asked what their pain points were,” he says. “Some were trying to fundraise money to get projects off the ground. Others don’t know how to actually enter the space, or don’t have a pathway or have been rejected. And other creators didn’t want to handle the logistics of fundraising and manufacturing and project management and distribution. We hope that this creates a path for people to actually thrive off of the creativity that they’re bringing to the world.” 

Read More: Colin Kaepernick, TIME Person of the Year 2017, The Short List

Lumi will give creators full ownership of the works they create on the platform, Kaepernick says. When asked about how the company might deal with works that are created on Lumi but are alleged to have infringed on pre-existing copyrights, Kaepernick responded: “We’re going to build on the foundational models, and we’re going to let the legislators and everybody figure out what the laws and parameters are going to be.”

Kaepernick is well aware that there is significant mistrust and criticism within creative industries about the rise of AI and its potential to take away jobs. Spike Lee, for instance, who signed on to direct an upcoming documentary about Kaepernick, said in a February interview that “the danger that AI could do to cinemas is nothing compared to what it could do to the world.” Concerns about AI were also at the center of the Hollywood strikes last year.

“I understand the concerns,” Kaepernick says. “The creators have to be in the driver’s seat. This is another tool for them to be able to hopefully create in a better, more effective way, and that gives them freedom to create stories that they wanted to but couldn’t before.” Kaepernick compares these new AI tools to the iPhone’s impact on allowing a much larger swath of people to experiment with photography. “We saw a whole new world of photography and photos,” he adds. “But that didn’t eliminate traditional photographers or their craft and expertise. We look at this in a similar way.”

Kaepernick’s team includes engineers formerly at Apple (Stefan Dasbach) and Reflex AI (Sam Fazel). A representative for Lumi declined to disclose the monthly price of the platform. Creators can begin signing up for the beta version on July 24.

Mark Zuckerberg Just Intensified the Battle for AI’s Future

24 July 2024 at 15:45
Meta CEO Mark Zuckerberg

The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?

On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.


At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.

Meta has taken a very different strategy with AI compared to its competitors OpenAI, Google DeepMind, and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s strategy has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.

In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 

But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”

The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.

Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.

But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”

Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”

Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”

Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”

AI Testing Mostly Uses English Right Now. That’s Risky

24 July 2024 at 11:00
In this photo illustration, the ChatGPT home page

Over the last year, governments, academia, and industry have invested considerable resources into investigating the harms of advanced AI. But one massive factor seems to be continuously overlooked: right now, AI’s primary tests and models are confined to English.

Advanced AI could be used in many languages to cause harm, but focusing primarily on English may leave us with only part of the answer. It also ignores those most vulnerable to its harms.


After the release of ChatGPT in November 2022, AI developers expressed surprise at a capability displayed by the model: it could “speak” at least 80 languages, not just English. Over the last year, commentators have pointed out that GPT-4 outperforms Google Translate in dozens of languages. But this focus on English in testing leaves open the possibility that evaluations are neglecting capabilities of AI models that are more relevant in other languages.

As half the world heads to the ballot box this year, experts have echoed concerns about the capacity of AI systems not only to act as “misinformation superspreaders” but also to threaten the integrity of elections. The threats here range from “deepfakes and voice cloning” to “identity manipulation and AI-produced fake news.” The recent release of multimodal models—AI systems that can also speak, see, and hear—such as GPT-4o and Gemini Live by tech giants OpenAI and Google seems poised to make this threat even worse. And yet, virtually all discussions on policy, including May’s historic AI Safety Summit in Seoul and the release of the long-anticipated AI Roadmap in the U.S. Senate, neglect non-English languages.

This is not just an issue of leaving some languages out over others. In the U.S., research has consistently demonstrated that English-as-a-Second-Language (ESL) communities, in this context predominantly Spanish-speaking, are more vulnerable to misinformation than English-as-a-Primary-Language (EPL) communities. Such results have been replicated for cases involving migrants generally, both in the United States and in Europe, where refugees have been effective targets—and subjects—of these campaigns. To make matters worse, content moderation guardrails on social media sites—a likely forum where such AI-generated falsehoods would proliferate—are heavily biased towards English. While 90% of Facebook’s users are outside the U.S. and Canada, the company’s content moderators spent just 13% of their working hours focusing on misinformation outside the U.S. The failure of social-media platforms to moderate hate speech in Myanmar, Ethiopia, and other countries embroiled in conflict and instability further betrays the language gap in these efforts.

Even as policymakers, corporate executives and AI experts prepare to combat AI-generated misinformation, their efforts cast a shadow over those most likely to be targeted and vulnerable to such false campaigns, including immigrants and those living in the Global South.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

This discrepancy is even more concerning when it comes to the potential of AI systems to cause mass human casualties, for instance, by being employed to develop and launch a bio-weapon. In 2023, experts expressed fear that large language models (LLMs) could be used to synthesize and deploy pathogens with pandemic potential. Since then, a multitude of research papers investigating this problem have been published both from within and outside industry. A common finding of these reports is that the current generation of AI systems is as good as, but not better than, search engines like Google at providing malevolent actors with hazardous information that could be used to build bio-weapons. Research by leading AI company OpenAI yielded this finding in January 2024, followed by a report by the RAND Corporation which showed a similar result.

What is astonishing about these studies is the near-complete absence of testing in non-English languages. This is especially perplexing as most Western efforts to combat non-state actors are concentrated in regions of the world where English is rarely spoken as a first language. The claim here is not that Pashto, Arabic, Russian, or other languages may yield more dangerous results than English. The claim, instead, is simply that using these languages represents a capability jump for non-state actors who are better versed in non-English languages.

Read More: How English’s Global Dominance Fails Us

LLMs are often better translators than traditional services. It is much easier for a terrorist to simply input their query into an LLM in a language of their choice and directly receive an answer in that language. The counterfactual, by contrast, is relying on clunky search engines in their own language, using Google for queries (which often only yields results published on the internet in that language), or going through an arduous process of translation and re-translation, with the possibility of meaning being lost along the way. Hence, AI systems are making non-state actors just as capable as if they spoke fluent English. How much more capable that makes them is something we will find out in the months to come.

This notion—that advanced AI systems may provide results in any language as good as if asked in English—has a wide range of applications. Perhaps the most intuitive example here is “spearphishing,” targeting specific individuals using manipulative techniques to secure information or money from them. Since the popularization of the “Nigerian Prince” scam, experts have posited a basic rule of thumb to protect yourself: if the message seems to be written in broken English with improper grammar, chances are it’s a scam. Now such messages can be crafted by those who have no experience of English, simply by typing their prompt in their native language and receiving a fluent response in English. To boot, this says nothing about how much AI systems may boost scams where the same non-English language is used for both input and output.

It is clear that the “language question” in AI is of paramount importance, and there is much that can be done. This includes new guidelines and requirements for testing AI models from government and academic institutions, and pushing companies to develop new testing benchmarks that work in non-English languages. Most importantly, it is vital that immigrants and those in the Global South be better integrated into these efforts. The coalitions working to keep the world safe from AI must start looking more like the world they aim to protect.


Silicon Valley Leaders Have Taken to Donald Trump. Could Kamala Harris Win Them Over?

22 July 2024 at 17:05
Kamala Harris

The deep-pocketed tech industry of Silicon Valley has historically voted for Democrats. But in the last month, a cadre of tech executives has risen up for Donald Trump, both on the grounds that he will be friendlier to the industry and that President Joe Biden was unfit to serve a second term. 

But now that Biden has dropped out of the race and the Democratic Party seems to be coalescing around Kamala Harris, a battle for Silicon Valley’s affection—and donations—could ensue. Harris is from Oakland, and many people perceived her tenure as California’s attorney general as favorable toward the tech industry. Now Silicon Valley appears to be split—and debates will play out both on social media and in tech offices for the months to come. 


Trump is backed by Elon, other major tech leaders

It would take a seismic shift for Silicon Valley to actually turn red. In 2020, Santa Clara County, which contains most of Silicon Valley, voted 73 percent for Biden and 25 percent for Trump. (The 2016 numbers were very similar.) And a recent WIRED analysis of campaign contributions found that the venture industry actually seems to be donating to Democrats at a higher rate this cycle than in years past.

But some of the most influential voices in tech have loudly thrown their lot in with Trump, especially since his assassination attempt. Elon Musk and his associate David Sacks have been active on social media in rallying support among tech executives and have been pumping millions into a Super PAC for Trump’s campaign. 

The crypto industry, in particular, has embraced Trump, who is scheduled to speak at a Bitcoin conference this weekend. Marc Andreessen, the co-founder of the prominent VC firm a16z, has denounced the Biden administration’s more aggressive approach to tech and crypto regulation, and said that he is backing Trump after supporting Democrats through most election cycles, including in 2016.

And many tech moguls have been further energized by Trump’s vice presidential pick of J.D. Vance, who has deep Silicon Valley ties, including working for Peter Thiel. Sacks and the tech investor Chamath Palihapitiya even personally lobbied Trump to pick Vance at a $300,000-a-person dinner, the New York Times reported.

Read More: ​​How the Crypto World Learned to Love Donald Trump, J.D. Vance, and Project 2025

But Harris has a long history with Silicon Valley

But Harris’s history with Silicon Valley could stem the tide. In recent months, many Silicon Valley Democrats sat on the sidelines as Biden’s campaign lost steam: the entrepreneur and venture capitalist Reid Hoffman told WIRED that tech mega-donors had been withholding their donations due to the “turmoil.” But Hoffman sprang back into action following Biden’s exit, calling Harris “the right person at the right time.” Many others immediately joined him: Harris raised over $50 million in less than 24 hours after Biden’s announcement. 

Hoffman is one of many Silicon Valley powerhouses who supported Harris during her 2020 presidential campaign, due to her connections with the industry stemming from her time as California’s attorney general. Her 2020 donors included Salesforce co-founder and CEO Marc Benioff (who, with Lynne Benioff, is the owner and co-chair of TIME), Amazon general counsel David Zapolsky, and Microsoft president Brad Smith.

Some observers, in turn, argued that Harris was too favorable to the industry while attorney general. Her time as AG was marked by mass consolidation in tech toward a few hyper-powerful companies, which critics argue she did little to stop. In 2012, she forged an agreement with Big Tech titans over privacy protections for smartphone owners, which was largely cheered by the industry. The following year, she participated in the marketing campaign for Sheryl Sandberg’s Lean In while being the law enforcement official responsible for overseeing Facebook.

In contrast, she did wield her position to take an active role in pressuring platforms to ban revenge pornography. And the Biden administration has actually been marked by a hostile relationship with Big Tech, with Biden appointee Lina Khan attempting to use her position at the FTC to break up monopolies. (In a strange twist, J.D. Vance has expressed approval of Khan’s efforts to rein in Big Tech.) Given this trajectory, it’s unclear how friendly Harris will be to the tech industry if she were to assume power. 

“Kamala Harris built very close ties to the California-centric Big Tech industry, but much has changed in the last four years,” says Jeff Hauser, the executive director of the Revolving Door Project. “So it’ll be a question of: was she deeply committed to Big Tech, or was that just kind of like, a home state Senator with a home state industry taking the easy way out?” 

Some tech execs want an open convention

Then there are those in tech leadership who want to support a Democratic candidate, but are calling for the Democrats to select someone who might have a wider appeal to their industry. Aaron Levie, the CEO of Box, wrote on X that following Biden stepping down, the Democrats could gain votes by becoming the party that is “wildly pro tech, trade, entrepreneurship, immigration, AI.”

Reed Hastings, the executive chairman of Netflix, wrote on X that Democratic delegates “need to pick a swing state winner.” The venture capitalist Vinod Khosla agreed—and said that although he believed Harris could beat Trump, he called for an open convention. “I want an open process at the convention and not a coronation,” he wrote. “The key still is who can best beat Trump above all other priorities.”

How to Protect Yourself From Scams Following the CrowdStrike Microsoft IT Outage

21 July 2024 at 15:38

The Microsoft IT Outage that impacted services worldwide on Friday was caused by a software update by third-party cybersecurity technology company CrowdStrike.

According to Microsoft, the outage—which continues to cause disruption—affected 8.5 million Windows devices. Though Microsoft notes that this is less than one percent of all Windows machines, the outage crashed systems worldwide, with online banking portals and air travel among the services impacted.


The outage was not caused by a cyberattack, but CrowdStrike and government-affiliated agencies have since warned that scammers are capitalizing on the outage and the resulting confusion to carry out malicious cyber activity.

America’s Cyber Defense Agency, the U.K.’s National Cyber Security Centre, and Australia’s National Anti-Scam Centre are among the organizations to issue warnings for consumers to be wary of scams at this time.

Read More: CrowdStrike’s Role In the Microsoft IT Outage, Explained

According to CrowdStrike’s blog, a “likely eCrime actor is using file names capitalizing on July 19, 2024,” specifically utilizing a malicious ZIP archive named “” to take data from customers.


Here is how you can protect yourself from scammers as disruptions from the outage continue to unfold.

Be alert

You’ve already begun this first step. Be aware of phishing scams that have cropped up to capitalize on the CrowdStrike outage, and do not download ZIP files or software from unknown sources claiming to help with the outage.

When you receive requests for personal information from unknown numbers, be wary, and never share sensitive information with unverified sources.

The U.K.’s National Cyber Security Centre has a robust guidance sheet for how organizations and businesses can protect their employees from phishing. This guidance includes four layers of mitigation tactics, from employing anti-spoofing controls to ensuring employees are aware of what phishing looks like and the tactics used to trick users into handing over information or making unauthorized payments.

Go straight to official websites

David Brumley, professor of electrical and computer engineering at Carnegie Mellon University, tells TIME he has seen a few different kinds of scam tactics over the weekend. The most prominent of these include malicious actors pretending to be CrowdStrike, offering to help businesses after the outage. He’s also noticed scammers pretending to be airlines and other organizations, again pretending to offer help to those impacted. The best course of action, Brumley notes, is always to contact business representatives directly.

“If you get a text that purports to be from one of [these businesses] and you feel uncomfortable, always just call them directly,” Brumley says.

CrowdStrike has its own “Remediation and Guidance Hub” on its blog to help those affected, and Microsoft also has its own support page.

Be sure to contact these companies via their official pages and help desks, rather than by responding to texts or emails claiming to be sent from the companies or affiliated parties.

Don’t rush

According to Catriona Lowe, deputy chair of the Australian Competition & Consumer Commission, these scammers often create “a sense of urgency that you need to do what they say to protect your computer and your financial information.” 

The best way to combat this is to slow down and ensure that you are not giving out personal details over text and email, especially to unverified sources.

Report the scam

Different countries have designated websites where you can report scams. In Australia, people can head to Scamwatch for further help. In the U.K., those impacted or concerned can report scams by email. Meanwhile, in the U.S., people can report instances of fraud via the Federal Trade Commission.

Check in with vulnerable friends and family members

According to the U.S. National Institute on Aging, older adults—defined generally as those above the age of 65—are often the target of scams. When possible, check in with older friends and family to ensure that they have the above tools and are aware of the rise in phishing scams as a result of the outage.

Clare O’Neil, Australia’s Minister for Home Affairs and Minister for Cyber Security, has also pointed out the need to protect those most vulnerable to falling victim to scams. In a series of posts shared on X (formerly Twitter) she said: “It is very important that Australians are extremely cautious of any unexpected texts, calls or emails claiming to be assistance with this issue.” She continued by specifying that people can help by “making sure vulnerable people, including elderly relatives, are being extra cautious at this time.”

What to Know About the Kids Online Safety Act and Its Chances of Passing

21 July 2024 at 13:31
Congress Kids Online Safety

The last time Congress passed a law to protect children on the internet was in 1998 — before Facebook, before the iPhone and long before today’s oldest teenagers were born. Now, a bill aiming to protect kids from the harms of social media, gaming sites and other online platforms appears to have enough bipartisan support to pass, though whether it actually will remains uncertain.


Supporters, however, hope it will come to a vote later this month.

Proponents of the Kids Online Safety Act include parents’ groups and children’s advocacy organizations as well as companies like Microsoft, X and Snap. They say the bill is a necessary first step in regulating tech companies and requiring them to protect children from dangerous online content and take responsibility for the harm their platforms can cause.

Opponents, however, fear KOSA would violate the First Amendment and harm vulnerable kids who wouldn’t be able to access information on LGBTQ issues or reproductive rights — although the bill has been revised to address many of those concerns, and major LGBTQ groups have decided to support the proposed legislation.

Here is what to know about KOSA and the likelihood of it going into effect.

What would KOSA do?

If passed, KOSA would create a “duty of care” — a legal term that requires companies to take reasonable steps to prevent harm — for online platforms minors will likely use.

They would have to “prevent and mitigate” harms to children, including bullying and violence, the promotion of suicide, eating disorders, substance abuse, sexual exploitation and advertisements for illegal products such as narcotics, tobacco or alcohol.

Social media platforms would also have to provide minors with options to protect their information, disable addictive product features, and opt out of personalized algorithmic recommendations. They would also be required to limit other users from communicating with children and limit features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards. In general, online platforms would have to default to the safest settings possible for accounts they believe belong to minors.

“So many of the harms that young people experience online and on social media are the result of deliberate design choices that these companies make,” said Josh Golin, executive director of Fairplay, a nonprofit working to insulate children from commercialization, marketing and harms from Big Tech.

How would it be enforced?

An earlier version of the bill empowered state attorneys general to enforce KOSA’s “duty of care” provision, but that changed after LGBTQ groups and others raised concerns that state officials could use the provision to censor information about LGBTQ or reproductive issues. In the updated version, state attorneys general can still enforce other provisions, but not the “duty of care” standard.

Broader enforcement would fall to the Federal Trade Commission, which would have oversight over what types of content are “harmful” to children.

Who supports it?

KOSA is supported by a broad range of nonprofits, tech accountability groups, parent groups, and pediatricians, including the American Academy of Pediatrics, the American Federation of Teachers, Common Sense Media, Fairplay, The Real Facebook Oversight Board and the NAACP. Some prominent tech companies, including Microsoft, X and Snap, have also signed on. Meta Platforms, which owns Facebook, Instagram and WhatsApp, has not come out in firm support or opposition of the bill, although it has said in the past that it supports the regulation of social media.

ParentSOS, a group of some 20 parents who have lost children to harm caused by social media, has also been campaigning for the bill’s passage. One of those parents is Julienne Anderson, whose 17-year-old daughter died in 2022 after purchasing tainted drugs through Instagram.

“We should not bear the entire responsibility of keeping our children safe online,” she said. “Every other industry has been regulated. And I’m sure you’ve heard this all the time. From toys to movies to music to cars to everything. We have regulations in place to keep our children safe. And this is a product that they have created and distributed, and yet over all these years, since the ’90s, there hasn’t been any legislation regulating the industry.”

KOSA was introduced in 2022 by Senators Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn. It currently has 68 cosponsors in the Senate, from across the political spectrum, which would be enough to pass if it were brought to a vote.

Who opposes it?

The ACLU, the Electronic Frontier Foundation and other free speech groups are concerned the bill would violate the First Amendment. Even with the revisions that stripped state attorneys general of the power to enforce its duty of care provision, the EFF calls it a “dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like.”

Kate Ruane, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, said she remains concerned that the bill’s duty of care provision could be “misused by politically motivated actors to target marginalized communities like the LGBTQ population and just politically divisive information generally,” to try to suppress information because someone believes it is harmful to kids’ mental health.

She added that while these worries remain, there has been progress in reducing concerns.

The bigger issue, she added, is that platforms don’t want to get sued for showing minors content that could be “politically divisive,” so to avoid that risk they could suppress such topics altogether — whether about abortion, transgender healthcare or the wars in Gaza and Ukraine.

Sen. Rand Paul, R-Ky., has also expressed opposition to the bill. Paul said the bill “could prevent kids from watching PGA golf or the Super Bowl on social media because of gambling and beer ads,” even though “those kids could just turn on the TV and see those exact same ads.”

He added he has “tried to work with the authors to fix the bill’s many deficiencies. If the authors are not interested in compromise, Senator (Chuck) Schumer can bring the bill to the floor, as he could have done from the beginning.”

Will it pass Congress?

Golin said he is “very hopeful” that the bill will come to a vote in July.

“The reason it has not come to a vote yet is that passing legislation is really hard, particularly when you’re trying to regulate one of the, if not the most powerful industry in the world,” he said. “We are outspent.”

Golin added he thinks there’s a “really good chance” the bill will pass.

Senate Majority Leader Chuck Schumer, D-N.Y., who has come out in support of KOSA, would have to bring it to a vote, but he has not yet set aside floor time to do so. Because there are objections to the legislation, it would take a week or more of procedural votes before a final vote.

He said on the floor last week that passing the bill is a “top priority” but that it had not yet moved because of the objections.

“Sadly, a few of our colleagues continue to block these bills without offering any constructive ideas for how to revise the text,” he said. “So now we must look ahead, and all options are on the table.”