
Why Colin Kaepernick Is Starting an AI Company

24 July 2024 at 16:00
Colin Kaepernick

When NFL quarterback Colin Kaepernick began kneeling during the national anthem to protest police brutality and racial injustice in 2016, he soon found himself out of a job, eventually moving on to other ventures in media and entertainment. Today, he’s entering the AI industry by launching a project he says he hopes will allow others to bypass “gatekeeping”: an artificial intelligence platform called Lumi.


The new subscription-based platform aims to provide tools for storytellers to create, illustrate, publish and monetize their ideas. The company has raised $4 million in funding led by Alexis Ohanian’s Seven Seven Six, and its product went live today, July 24.

In an interview with TIME, Kaepernick says this project can be viewed as an extension of his activism. “The majority of the world’s stories never come to life. Most people don’t have access or inroads to publishers or platforms—or they may have a gap in their skillset that’s a barrier for them to be able to create,” he says. “We’re going to see a whole new world of stories and perspectives.”

Kaepernick says that the idea for Lumi came out of challenges he faced while building his media company, Ra Vision Media, and his publishing company, Kaepernick Publishing, which included “long production timelines, high costs, and creators not having ownership over the work they create,” he says. When ChatGPT, Dall-E, and other AI models broke through to the mainstream a couple years ago, Kaepernick started playing with the tools, even trying to use them to create a children’s book. (Last year, Kaepernick penned a graphic novel, Change the Game, based on his high school experiences.)

Lumi aims to help independent creators forge hybrid written-illustrated stories, like comics, graphic novels, and manga. The platform is built “on top of foundational models,” Kaepernick says—although he declined to say which ones. (Foundational models are large, multi-purpose machine learning models like the one underlying ChatGPT.) Users interact with a chatbot to create a character, flesh out the character’s backstory and traits, and build a narrative. Then they use an image-generation tool to illustrate the character and their journey. “You can go back and forth with your AI companion and test ideas, ‘I want to change the ending,’ or ‘I want it to be more comedic or dramatic,’” he says.

The users can then publish and distribute their stories right on the Lumi platform, order physical copies, and use AI tools to create and sell merchandise based on their IP. Kaepernick hopes that the platform will appeal to aspiring creators with gaps in their skill sets—whether that means athletes who have a story and an audience but lack illustrating chops, or content creators who are having trouble monetizing their work.

“We talked to hundreds of creators and asked what their pain points were,” he says. “Some were trying to fundraise money to get projects off the ground. Others don’t know how to actually enter the space, or don’t have a pathway or have been rejected. And other creators didn’t want to handle the logistics of fundraising and manufacturing and project management and distribution. We hope that this creates a path for people to actually thrive off of the creativity that they’re bringing to the world.” 

Read More: Colin Kaepernick, TIME Person of the Year 2017, The Short List

Lumi will give creators full ownership of the works they create on the platform, Kaepernick says. When asked about how the company might deal with works that are created on Lumi but are alleged to have infringed on pre-existing copyrights, Kaepernick responded: “We’re going to build on the foundational models, and we’re going to let the legislators and everybody figure out what the laws and parameters are going to be.”

Kaepernick is well aware that there is significant mistrust of and criticism about the rise of AI within creative industries, given its potential to take away jobs. Spike Lee, for instance, who signed on to direct an upcoming documentary about Kaepernick, said in a February interview that “the danger that AI could do to cinemas is nothing compared to what it could do to the world.” Concerns about AI were also at the center of the Hollywood strikes last year.

“I understand the concerns,” Kaepernick says. “The creators have to be in the driver’s seat. This is another tool for them to be able to hopefully create in a better, more effective way, and that gives them freedom to create stories that they wanted to but couldn’t before.” Kaepernick compares these new AI tools to the iPhone’s impact on allowing a much larger swath of people to experiment with photography. “We saw a whole new world of photography and photos,” he adds. “But that didn’t eliminate traditional photographers or their craft and expertise. We look at this in a similar way.”

Kaepernick’s team includes engineers formerly at Apple (Stefan Dasbach) and Reflex AI (Sam Fazel). A representative for Lumi declined to disclose the monthly price of the platform. Creators can begin signing up for the beta version on July 24.

Mark Zuckerberg Just Intensified the Battle for AI’s Future

24 July 2024 at 15:45
Meta CEO Mark Zuckerberg

The tech industry is currently embroiled in a heated debate over the future of AI: should powerful systems be open-source and freely accessible, or closed and tightly monitored for dangers?

On Tuesday, Meta CEO Mark Zuckerberg fired a salvo into this ongoing battle, publishing not just a new series of powerful AI models, but also a manifesto forcefully advocating for the open-source approach. The document, which was widely praised by venture capitalists and tech leaders like Elon Musk and Jack Dorsey, serves as both a philosophical treatise and a rallying cry for proponents of open-source AI development. It arrives as intensifying global efforts to regulate AI have galvanized resistance from open-source advocates, who see some of those potential laws as threats to innovation and accessibility.


At the heart of Meta’s announcement on Tuesday was the release of its latest generation of Llama large language models, the company’s answer to ChatGPT. The biggest of these new models, Meta claims, is the first open-source large language model to reach the so-called “frontier” of AI capabilities.

Meta has taken a very different strategy with AI compared to its competitors OpenAI, Google DeepMind and Anthropic. Those companies sell access to their AIs through web browsers or interfaces known as APIs, a strategy that allows them to protect their intellectual property, monitor the use of their models, and bar bad actors from using them. By contrast, Meta has chosen to open-source the “weights,” or the underlying neural networks, of its Llama models—meaning they can be freely downloaded by anybody and run on their own machines. That strategy has put Meta’s competitors under financial pressure, and has won it many fans in the software world. But Meta’s strategy has also been criticized by many in the field of AI safety, who warn that open-sourcing powerful AI models has already led to societal harms like deepfakes, and could in the future open a Pandora’s box of worse dangers.

In his manifesto, Zuckerberg argues most of those concerns are unfounded and frames Meta’s strategy as a democratizing force in AI development. “Open-source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society,” he writes. “It will make the world more prosperous and safer.” 

But while Zuckerberg’s letter presents Meta as on the side of progress, it is also a deft political move. Recent polling suggests that the American public would welcome laws that restrict the development of potentially dangerous AI, even if it means hampering some innovation. And several pieces of AI legislation around the world, including the SB1047 bill in California and the ENFORCE Act in Washington, D.C., would place limits on the kinds of systems that companies like Meta can open-source, due to safety concerns. Many of the venture capitalists and tech CEOs who celebrated Zuckerberg’s letter after its publication have in recent weeks mounted a growing campaign to shape public opinion against regulations that would constrain open-source AI releases. “This letter is part of a broader trend of some Silicon Valley CEOs and venture capitalists refusing to take responsibility for damages their AI technology may cause,” says Andrea Miotti, the executive director of AI safety group Control AI. “Including catastrophic outcomes.”

The philosophical underpinnings for Zuckerberg’s commitment to open-source, he writes, stem from his company’s long struggle against Apple, which via its iPhone operating system constrains what Meta can build, and which via its App Store takes a cut of Meta’s revenue. He argues that building an open ecosystem—in which Meta’s models become the industry standard due to their customizability and lack of constraints—will benefit both Meta and those who rely on its models, harming only rent-seeking companies who aim to lock in users. (Critics point out, however, that the Llama models, while more accessible than their competitors, still come with usage restrictions that fall short of true open-source principles.) Zuckerberg also argues that closed AI providers have a business model that relies on selling access to their systems—and suggests that their concerns about the dangers of open-source, including lobbying governments against it, may stem from this conflict of interest.

Addressing worries about safety, Zuckerberg writes that open-source AI will be better at addressing “unintentional” types of harm than the closed alternative, due to the nature of transparent systems being more open to scrutiny and improvement. “Historically, open-source software has been more secure for this reason,” he writes. As for intentional harm, like misuse by bad actors, Zuckerberg argues that “large-scale actors” with high compute resources, like companies and governments, will be able to use their own AI to police “less sophisticated actors” misusing open-source systems. “As long as everyone has access to similar generations of models—which open-source promotes—then governments and institutions with more compute resources will be able to check bad actors with less compute,” he writes.

But “not all ‘large actors’ are benevolent,” says Hamza Tariq Chaudhry, a U.S. policy specialist at the Future of Life Institute, a nonprofit focused on AI risk. “The most authoritarian states will likely repurpose models like Llama to perpetuate their power and commit injustices.” Chaudhry, who is originally from Pakistan, adds: “Coming from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.”

Zuckerberg’s argument also doesn’t address a central worry held by many people concerned with AI safety: the risk that AI could create an “offense-defense asymmetry,” or in other words strengthen attackers while doing little to strengthen defenders. “Zuckerberg’s statements showcase a concerning disregard for basic security in Meta’s approach to AI,” says Miotti, the director of Control AI. “When dealing with catastrophic dangers, it’s a simple fact that offense needs only to get lucky once, but defense needs to get lucky every time. A virus can spread and kill in days, while deploying a treatment can take years.”

Later in his letter, Zuckerberg addresses other worries that open-source AI will allow China to gain access to the most powerful AI models, potentially harming U.S. national security interests. He says he believes that closing off models “will not work and will only disadvantage the U.S. and its allies.” China is good at espionage, he argues, adding that “most tech companies are far from” the level of security that would prevent China from being able to steal advanced AI model weights. “It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities,” he writes. “Plus, constraining American innovation to closed development increases the chance that we don’t lead at all.”

Miotti is unimpressed by the argument. “Zuckerberg admits that advanced AI technology is easily stolen by hostile actors,” he says, “but his solution is to just give it to them for free.”

AI Testing Mostly Uses English Right Now. That’s Risky

24 July 2024 at 11:00

Over the last year, governments, academia, and industry have invested considerable resources into investigating the harms of advanced AI. But one massive factor seems to be continuously overlooked: right now, AI’s primary tests and models are confined to English.

Advanced AI could be used in many languages to cause harm, but focusing primarily on English may leave us with only part of the answer. It also ignores those most vulnerable to its harms.


After the release of ChatGPT in November 2022, AI developers expressed surprise at a capability displayed by the model: it could “speak” at least 80 languages, not just English. Over the last year, commentators have pointed out that GPT-4 outperforms Google Translate in dozens of languages. But this focus on English for testing leaves open the possibility that the evaluations may be neglecting capabilities of AI models that become more relevant for other languages.

As half the world heads to the ballot box this year, experts have echoed concerns about the capacity of AI systems not only to act as “misinformation superspreaders,” but also to threaten the integrity of elections. The threats here range from “deepfakes and voice cloning” to “identity manipulation and AI-produced fake news.” The recent release of multimodal models—AI systems that can also speak, see, and hear everything you do—such as GPT-4o and Gemini Live by tech giants OpenAI and Google seems poised to make this threat even worse. And yet, virtually all discussions on policy, including May’s historic AI Safety Summit in Seoul and the release of the long-anticipated AI Roadmap in the U.S. Senate, neglect non-English languages.

This is not just an issue of favoring some languages over others. In the U.S., research has consistently demonstrated that English-as-a-Second-Language (ESL) communities, in this context predominantly Spanish-speaking, are more vulnerable to misinformation than English-as-a-Primary-Language (EPL) communities. Such results have been replicated for cases involving migrants generally, both in the United States and in Europe, where refugees have been effective targets—and subjects—of these campaigns. To make matters worse, content moderation guardrails on social media sites—a likely forum where such AI-generated falsehoods would proliferate—are heavily biased towards English. While 90% of Facebook’s users are outside the U.S. and Canada, the company’s content moderators spent just 13% of their working hours focusing on misinformation outside the U.S. The failure of social-media platforms to moderate hate speech in Myanmar, Ethiopia, and other countries embroiled in conflict and instability further betrays the language gap in these efforts.

Even as policymakers, corporate executives and AI experts prepare to combat AI-generated misinformation, their efforts overlook those most likely to be targeted by and vulnerable to such false campaigns, including immigrants and those living in the Global South.

Read More: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

This discrepancy is even more concerning when it comes to the potential of AI systems to cause mass human casualties, for instance, by being employed to develop and launch a bio-weapon. In 2023, experts expressed fear that large language models (LLMs) could be used to synthesize and deploy pathogens with pandemic potential. Since then, a multitude of research papers investigating this problem have been published both from within and outside industry. A common finding of these reports is that the current generation of AI systems is about as good as, but not better than, search engines like Google at providing malevolent actors with hazardous information that could be used to build bio-weapons. Research by leading AI company OpenAI yielded this finding in January 2024, followed by a report by the RAND Corporation that showed a similar result.

What is astonishing about these studies is the near-complete absence of testing in non-English languages. This is especially perplexing as most Western efforts to combat non-state actors are concentrated in regions of the world where English is rarely spoken as a first language. The claim here is not that Pashto, Arabic, Russian, or other languages may yield more dangerous results than English. The claim, instead, is simply that using these languages is a capability jump for non-state actors that are better versed in non-English languages.

Read More: How English’s Global Dominance Fails Us

LLMs are often better translators than traditional services. It is much easier for a terrorist to simply input a query into an LLM in a language of their choice and directly receive an answer in that language. The counterfactual, by contrast, is relying on clunky search engines in their own language (which often only surface results published on the internet in that language), or going through an arduous process of translation and re-translation to get English-language information, with the possibility of meaning being lost along the way. Hence, AI systems are making non-state actors just as capable as if they spoke fluent English. How much more capable that makes them is something we will find out in the months to come.

This notion—that advanced AI systems may provide results in any language as good as if asked in English—has a wide range of applications. Perhaps the most intuitive example here is “spearphishing”: targeting specific individuals using manipulative techniques to secure information or money from them. Since the popularization of the “Nigerian Prince” scam, experts have posited a basic rule of thumb to protect yourself: if the message seems to be written in broken English with improper grammar, chances are it’s a scam. Now such messages can be crafted by those who have no experience of English, simply by typing a prompt in their native language and receiving a fluent response in English. And this says nothing about how much AI systems may boost scams where the same non-English language is used for both input and output.

It is clear that the “language question” in AI is of paramount importance, and there is much that can be done. This includes new guidelines and requirements from government and academic institutions for testing AI models, and pushing companies to develop new benchmarks that work in non-English languages rather than English alone. Most importantly, it is vital that immigrants and those in the Global South be better integrated into these efforts. The coalitions working to keep the world safe from AI must start looking more like the world itself.


Silicon Valley Leaders Have Taken to Donald Trump. Could Kamala Harris Win Them Over?

22 July 2024 at 17:05
Kamala Harris

The deep-pocketed tech industry of Silicon Valley has historically voted for Democrats. But in the last month, a cadre of tech executives has risen up for Donald Trump, both on the grounds that he will be friendlier to the industry and that President Joe Biden was unfit to serve a second term. 

But now that Biden has dropped out of the race and the Democratic Party seems to be coalescing around Kamala Harris, a battle for Silicon Valley’s affection—and donations—could ensue. Harris is from Oakland, and many people perceived her tenure as California’s attorney general as favorable toward the tech industry. Now Silicon Valley appears to be split—and debates will play out both on social media and in tech offices for the months to come. 


Trump is backed by Elon, other major tech leaders

It would take a seismic shift for Silicon Valley to actually turn red. In 2020, Santa Clara County, which contains most of Silicon Valley, voted 73 percent for Biden and 25 percent for Trump. (The 2016 numbers were very similar.) And a recent WIRED analysis of campaign contributions found that the venture industry seems to actually be donating to Democrats at a higher rate this cycle than in years past.

But some of the most influential voices in tech have loudly thrown their lot in with Trump, especially since his assassination attempt. Elon Musk and his associate David Sacks have been active on social media in rallying support among tech executives and have been pumping millions into a Super PAC for Trump’s campaign. 

The crypto industry, in particular, has embraced Trump, who is scheduled to speak at a Bitcoin conference this weekend. Marc Andreessen, the co-founder of the prominent VC firm a16z, has denounced the Biden administration’s more aggressive approach to tech and crypto regulation, and said that he is backing Trump after supporting Democrats through most election cycles, including in 2016.

And many tech moguls have been further energized by Trump’s vice presidential pick of J.D. Vance, who has deep Silicon Valley ties, including working for Peter Thiel. Sacks and the tech investor Chamath Palihapitiya even personally lobbied Trump to pick Vance at a $300,000-a-person dinner, the New York Times reported.

Read More: ​​How the Crypto World Learned to Love Donald Trump, J.D. Vance, and Project 2025

But Harris has a long history with Silicon Valley

But Harris’s history with Silicon Valley could stem the tide. In recent months, many Silicon Valley Democrats sat on the sidelines as Biden’s campaign lost steam: the entrepreneur and venture capitalist Reid Hoffman told WIRED that tech mega-donors had been withholding their donations due to the “turmoil.” But Hoffman sprang back into action following Biden’s exit, calling Harris “the right person at the right time.” Many others immediately joined him: Harris raised over $50 million in less than 24 hours after Biden’s announcement. 

Hoffman is one of many Silicon Valley powerhouses who supported Harris during her 2020 presidential campaign, due to her connections with the industry stemming from her time as California’s attorney general. Her 2020 donors included Salesforce co-founder and CEO Marc Benioff (who, with Lynne Benioff, is the owner and co-chair of TIME), Amazon general counsel David Zapolsky, and Microsoft president Brad Smith.

Some observers, in turn, argued that Harris was too favorable to the industry while attorney general. Her time as AG was marked by a mass consolidation in tech towards a few hyper-powerful companies, which critics argue she did little to stop. In 2012, she forged an agreement with Big Tech titans over privacy protections for smartphone owners, which was largely cheered by the industry. The following year, she participated in the marketing campaign for Sheryl Sandberg’s Lean In while being the law enforcement official responsible for overseeing Facebook.

In contrast, she did wield her position to take an active role in pressuring platforms to ban revenge pornography. And the Biden administration has actually been marked by a hostile relationship with Big Tech, with Biden appointee Lina Khan attempting to use her position at the FTC to break up monopolies. (In a strange twist, J.D. Vance has expressed approval of Khan’s efforts to rein in Big Tech.) Given this trajectory, it’s unclear how friendly Harris will be to the tech industry if she were to assume power. 

“Kamala Harris built very close ties to the California-centric Big Tech industry, but much has changed in the last four years,” says Jeff Hauser, the executive director of the Revolving Door Project. “So it’ll be a question of: was she deeply committed to Big Tech, or was that just kind of like, a home state Senator with a home state industry taking the easy way out?” 

Some tech execs want an open convention

Then there are those in tech leadership who want to support a Democratic candidate, but are calling for the Democrats to select someone who might have a wider appeal to their industry. Aaron Levie, the CEO of Box, wrote on X that following Biden stepping down, the Democrats could gain votes by becoming the party that is “wildly pro tech, trade, entrepreneurship, immigration, AI.”

Reed Hastings, the executive chairman of Netflix, wrote on X that Democratic delegates “need to pick a swing state winner.” The venture capitalist Vinod Khosla agreed—although he believed Harris could beat Trump, he called for an open convention. “I want an open process at the convention and not a coronation,” he wrote. “The key still is who can best beat Trump above all other priorities.”

How to Protect Yourself From Scams Following the CrowdStrike Microsoft IT Outage

21 July 2024 at 15:38

The Microsoft IT outage that impacted services worldwide on Friday was caused by a faulty software update from third-party cybersecurity company CrowdStrike.

According to Microsoft, the outage—which continues to cause disruption—affected 8.5 million Windows devices. Though it notes that this is less than one percent of all Windows machines, the outage crashed systems worldwide, with online banking portals and air travel among the services impacted.


The outage was not caused by a cyberattack, but concern has since grown from both CrowdStrike and government-affiliated agencies that scammers are capitalizing on the outage and the resulting confusion to carry out malicious cyber activity.

America’s Cyber Defense Agency, the U.K.’s National Cyber Security Centre, and Australia’s National Anti-Scam Centre are among the organizations to issue warnings for consumers to be wary of scams at this time.

Read More: CrowdStrike’s Role In the Microsoft IT Outage, Explained

According to CrowdStrike’s blog, a “likely eCrime actor is using file names capitalizing on July 19, 2024,” specifically utilizing a malicious ZIP archive named “” to take data from customers.


Here is how you can protect yourself from scammers as disruptions from the outage continue to unfold.

Be alert

You’ve already begun this first step. Be aware of phishing scams that have cropped up to capitalize on the CrowdStrike outage, and do not download ZIP files or software from unknown sources claiming to help with the outage.

When receiving requests for personal information from unknown numbers, be wary, and never share sensitive information with unverified sources.

The U.K.’s National Cyber Security Centre has a robust guidance sheet for how organizations and businesses can protect their employees from phishing. This guidance includes four layers of mitigation tactics, from employing anti-spoofing controls to ensuring employees are aware of what phishing looks like and the tactics used to trick users into handing over information or making unauthorized payments.

Go straight to official websites

David Brumley, professor of electrical and computer engineering at Carnegie Mellon University, tells TIME he has seen a few different kinds of scam tactics over the weekend. The most prominent of these include malicious actors pretending to be CrowdStrike, offering to help businesses after the outage. He’s also noticed scammers pretending to be airlines and other organizations, again pretending to offer help to those impacted. The best course of action, Brumley notes, is always to contact business representatives directly.

“If you get a text that purports to be from one of [these businesses] and you feel uncomfortable, always just call them directly,” Brumley says.

CrowdStrike has its own “Remediation and Guidance Hub” on its blog to help those affected, and Microsoft also has its own support page.

Be sure to contact these companies via their official pages and help desks, rather than by responding to texts or emails claiming to be sent from the companies or affiliated parties.

Don’t rush

According to Catriona Lowe, deputy chair of the Australian Competition & Consumer Commission, these scammers often create “a sense of urgency that you need to do what they say to protect your computer and your financial information.” 

The best way to combat this is to slow down and ensure that you are not giving out personal details over text and email, especially to unverified sources.

Report the scam

Different countries have designated websites where you can report scams. In Australia, people can head to Scamwatch for further help. In the U.K., those impacted or concerned can send an email to Meanwhile, in the U.S., people can report instances of fraud via the Federal Trade Commission.

Check in with vulnerable friends and family members

According to the U.S. National Institute on Aging, older adults—defined generally as those above the age of 65—are often the target of scams. When possible, check in with older friends and family to ensure that they have the above tools and are aware of the rise in phishing scams as a result of the outage.

Clare O’Neil, Australia’s Minister for Home Affairs and Minister for Cyber Security, has also pointed out the need to protect those most vulnerable to falling victim to scams. In a series of posts shared on X (formerly Twitter) she said: “It is very important that Australians are extremely cautious of any unexpected texts, calls or emails claiming to be assistance with this issue.” She continued by specifying that people can help by “making sure vulnerable people, including elderly relatives, are being extra cautious at this time.”

What to Know About the Kids Online Safety Act and Its Chances of Passing

21 July 2024 at 13:31

The last time Congress passed a law to protect children on the internet was in 1998 — before Facebook, before the iPhone and long before today’s oldest teenagers were born. Now, a bill aiming to protect kids from the harms of social media, gaming sites and other online platforms appears to have enough bipartisan support to pass, though whether it actually will remains uncertain.


Supporters, however, hope it will come to a vote later this month.

Proponents of the Kids Online Safety Act include parents’ groups and children’s advocacy organizations as well as companies like Microsoft, X and Snap. They say the bill is a necessary first step in regulating tech companies and requiring them to protect children from dangerous online content and take responsibility for the harm their platforms can cause.

Opponents, however, fear KOSA would violate the First Amendment and harm vulnerable kids who wouldn’t be able to access information on LGBTQ issues or reproductive rights — although the bill has been revised to address many of those concerns, and major LGBTQ groups have decided to support the proposed legislation.

Here is what to know about KOSA and the likelihood of it going into effect.

What would KOSA do?

If passed, KOSA would create a “duty of care” — a legal term that requires companies to take reasonable steps to prevent harm — for online platforms that minors are likely to use.

They would have to “prevent and mitigate” harms to children, including bullying and violence, the promotion of suicide, eating disorders, substance abuse, sexual exploitation and advertisements for illegal products such as narcotics, tobacco or alcohol.

Social media platforms would also have to provide minors with options to protect their information, disable addictive product features, and opt out of personalized algorithmic recommendations. They would also be required to limit other users from communicating with children and limit features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards. In general, online platforms would have to default to the safest settings possible for accounts they believe belong to minors.

“So many of the harms that young people experience online and on social media are the result of deliberate design choices that these companies make,” said Josh Golin, executive director of Fairplay, a nonprofit working to insulate children from commercialization, marketing and harms from Big Tech.

How would it be enforced?

An earlier version of the bill empowered state attorneys general to enforce KOSA’s “duty of care” provision, but that changed after LGBTQ groups and others raised concerns that attorneys general could use the provision to censor information about LGBTQ or reproductive issues. In the updated version, state attorneys general can still enforce other provisions, but not the “duty of care” standard.

Broader enforcement would fall to the Federal Trade Commission, which would have oversight over what types of content are “harmful” to children.

Who supports it?

KOSA is supported by a broad range of nonprofits, tech accountability and parent groups, and medical organizations such as the American Academy of Pediatrics, along with the American Federation of Teachers, Common Sense Media, Fairplay, The Real Facebook Oversight Board and the NAACP. Some prominent tech companies, including Microsoft, X and Snap, have also signed on. Meta Platforms, which owns Facebook, Instagram and WhatsApp, has not come out in firm support or opposition of the bill, although it has said in the past that it supports the regulation of social media.

ParentSOS, a group of some 20 parents who have lost children to harm caused by social media, has also been campaigning for the bill’s passage. One of those parents is Julienne Anderson, whose 17-year-old daughter died in 2022 after purchasing tainted drugs through Instagram.

“We should not bear the entire responsibility of keeping our children safe online,” she said. “Every other industry has been regulated. And I’m sure you’ve heard this all the time. From toys to movies to music to cars to everything. We have regulations in place to keep our children safe. And this, this is a product that they have created and distributed, and yet over all these years, since the ’90s, there hasn’t been any legislation regulating the industry.”

KOSA was introduced in 2022 by Senators Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn. It currently has 68 cosponsors in the Senate, from across the political spectrum, which would be enough to pass if it were brought to a vote.

Who opposes it?

The ACLU, the Electronic Frontier Foundation and other groups supporting free speech are concerned it would violate the First Amendment. Even with the revisions that stripped state attorneys general from enforcing its duty of care provision, EFF calls it a “dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like.”

Kate Ruane, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, said she remains concerned that the bill’s care of duty provision can be “misused by politically motivated actors to target marginalized communities like the LGBTQ population and just politically divisive information generally,” to try to suppress information because someone believes it is harmful to kids’ mental health.

She added that while these worries remain, the revisions have made progress in addressing them.

The bigger issue, though, she added, is that platforms don’t want to get sued for showing minors content that could be “politically divisive,” so to make sure this doesn’t happen they could suppress such topics — about abortion or transgender healthcare or even the wars in Gaza or Ukraine.

Sen. Rand Paul, R-Ky., has also expressed opposition to the bill. Paul said the bill “could prevent kids from watching PGA golf or the Super Bowl on social media because of gambling and beer ads,” even though “those kids could just turn on the TV and see those exact same ads.”

He added he has “tried to work with the authors to fix the bill’s many deficiencies. If the authors are not interested in compromise, Senator (Chuck) Schumer can bring the bill to the floor, as he could have done from the beginning.”

Will it pass Congress?

Golin said he is “very hopeful” that the bill will come to a vote in July.

“The reason it has not come to a vote yet is that passing legislation is really hard, particularly when you’re trying to regulate one of the, if not the most powerful industry in the world,” he said. “We are outspent.”

Golin added he thinks there’s a “really good chance” the bill will get passed.

Senate Majority Leader Schumer, D-N.Y., has come out in support of KOSA but has not yet set aside floor time to bring it to a vote. Because there are objections to the legislation, it would take a week or more of procedural votes before a final vote.

He said on the floor last week that passing the bill is a “top priority” but that it had not yet moved because of the objections.

“Sadly, a few of our colleagues continue to block these bills without offering any constructive ideas for how to revise the text,” he said. “So now we must look ahead, and all options are on the table.”

Here Are the States Where 911 Is Impacted by the Microsoft Outage—And What to Do

19 July 2024 at 19:19

A worldwide Microsoft outage impacted 911 services in at least three U.S. states on Friday, as hospitals and government agencies continued to recover from its effects.

Alaska, Arizona, and Oregon all reported disruptions to their emergency systems, though some are already reporting improvements.

“My team is closely monitoring all services that have been impacted and is working to ensure that we continue delivering the critical services that Arizonans rely on,” said Arizona Gov. Katie Hobbs on X Friday morning. “As we work to address the problem, there may be delays with certain services. I will continue to keep Arizonans updated as we receive new information.”


A faulty update from CrowdStrike, a Microsoft partner, caused companies relying on Microsoft’s Windows system, including airlines and banks, to pause their work. Non-emergency operations were suspended across numerous hospital systems, and local TV channels temporarily paused their shows on Friday.

Emergency calls to 911 dispatch centers throughout Alaska were down but were restored as of 4:23 a.m. local time, the Alaska State Troopers said via Facebook. The State Troopers also shared alternative phone numbers residents could call based on their location in Alaska.

In Phoenix, local police noted that their computerized 911 dispatch center was down, but the 911 call line was still operational. Systems were confirmed restored by 8:49 a.m. ET, but Phoenix Police asked people seeking “non-emergency police assistance during the outage” to remain patient as officials worked through the calls.

Across Oregon, 911 calls were still functioning, though the Bureau of Emergency Communications (BOEC) told TIME that there were some reports of issues with its Mobile Data Terminals and Computer Aided Dispatch (CAD) systems. Workers were able to take calls manually, however.

“I am grateful to the Bureau of Emergency Management and Bureau of Technology Services staff who quickly responded to the outage to help ensure continuation of critical city services,” said Portland Mayor Ted Wheeler in a press release. “I am continuing to receive regular updates and we are closely monitoring the situation.”

In cases where 911 does not seem to be working, residents should check the social media pages or websites of their local police, fire, or emergency management organizations to find local emergency numbers.

“We’re deeply sorry for the impact that we’ve caused to customers, to travelers, to anyone affected by this, including our companies,” said CrowdStrike CEO George Kurtz on Friday. “That update had a software bug in it and caused an issue with the Microsoft operating system…we identified this very quickly and remediated the issue.”

CrowdStrike’s Role In the Microsoft IT Outage, Explained

19 July 2024 at 15:23

The major Microsoft IT outage on Friday that grounded flights, sent TV stations off air, and disrupted online hospital systems has been linked to a third party—a cybersecurity technology firm named CrowdStrike. 

CrowdStrike’s CEO George Kurtz has spoken out about the outage, apologizing for the disruption caused. 


As the fallout from the event continues to impact people worldwide, here’s a breakdown of how exactly CrowdStrike is involved and what transpired.

Read More: How to Protect Yourself From Scams Following the CrowdStrike Microsoft IT Outage

What caused the Microsoft outage? 

Early Friday, companies in Australia running Microsoft’s Windows operating system started reporting devices showing what is commonly referred to as the “blue screen of death.” According to Microsoft’s website, this happens “if a serious problem causes Windows to shut down or restart unexpectedly.”

These disruptions then spread rapidly, impacting companies and communities around the world. The U.K., India, Germany, the Netherlands, and the U.S. reported disruptions. Meanwhile, United, Delta, and American Airlines issued a “global ground stop” on all flights.

The outage stemmed from a faulty CrowdStrike update deployed to computers running Microsoft Windows. The issue was specifically linked to Falcon, one of the company’s main products, and does not affect Mac or Linux operating systems.

Launched in 2012, CrowdStrike’s cybersecurity software is now used by 298 of the Fortune 500 companies, including banks, energy companies, healthcare companies, and food companies.

According to David Brumley, professor of electrical and computer engineering at Carnegie Mellon University, this was a perfect storm of issues. “Their code is buggy, and it was sitting there as a ticking time bomb,” Brumley says.

He says there are three steps cybersecurity teams should typically take when rolling out an update. First, there should be rigorous software testing to catch bugs; second, there should be testing on different types of machines; and third, the rollout should be gradual, reaching smaller sets of users first to screen for negative ramifications.

“Companies like Google will roll out updates incrementally so if the update is bad, at least it will have limited damage,” says Brumley, adding that the issue may only get more pronounced.
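The incremental rollout practice Brumley describes can be illustrated with a minimal sketch. This is a hypothetical example, not CrowdStrike’s or Google’s actual deployment code: updates are applied in progressively larger waves, and the rollout halts at the first failure so a bad update touches only a small fraction of machines.

```python
# Illustrative staged rollout: deploy in growing waves, halt on failure.
def staged_rollout(hosts, update_ok, waves=(0.01, 0.10, 0.50, 1.0)):
    """Apply an update wave by wave.

    hosts: list of host IDs; update_ok: callable returning True if the
    update succeeded on a host. Returns (updated_hosts, halted).
    """
    updated = []
    done = 0
    for frac in waves:
        target = int(len(hosts) * frac)
        for host in hosts[done:target]:
            if not update_ok(host):
                return updated, True  # halt: limit the blast radius
            updated.append(host)
        done = target
    return updated, False

hosts = [f"host-{i}" for i in range(100)]
# Simulate a buggy update that fails on host-5: the rollout stops in
# the second wave, leaving 95% of the fleet untouched.
updated, halted = staged_rollout(hosts, lambda h: h != "host-5")
print(len(updated), halted)  # prints: 5 True
```

A fleet-wide push, by contrast, is equivalent to `waves=(1.0,)` — every machine receives the update before any failure signal can stop it.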

 “What we’re seeing and what we’ll continue to see is a huge consolidation in the cybersecurity department, and that’s why we’re seeing so many people affected at once,” says Brumley. “We need to be asking, ‘What choices can we give people if companies mess up?’”

How has CrowdStrike responded to the outage felt worldwide?

Appearing via a video link on The Today Show on Friday, CrowdStrike’s CEO delivered an apology to the public:

“We’re deeply sorry for the impact that we’ve caused to customers, to travelers, to anyone affected by this, including our companies,” Kurtz said. “That update had a software bug in it and caused an issue with the Microsoft operating system…we identified this very quickly and remediated the issue.”

Kurtz was clear that this was not a cybersecurity issue nor an attack of any kind, but an issue coming from inside the company.

Though the company has deployed the changes necessary to remedy the issue, customers are still reporting problems, and it may be some time before systems across the globe are fully operational.

In a statement emailed to TIME, CrowdStrike said that it is “actively working with customers impacted by a defect found in a single content update for Windows hosts.”

The company also reiterated, for those concerned, that the issue is not a security incident, and that the problem has been “identified, isolated, and a fix has been deployed.”

Kurtz has also shared this information on his personal X (formerly Twitter) account.

CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed. We…

— George Kurtz (@George_Kurtz) July 19, 2024

According to Forbes, Kurtz’s net worth had dropped $300 million as of Friday afternoon—from $3.2 billion to $2.9 billion—amid fallout from the IT outage. The CEO’s wealth is tied to CrowdStrike shares, which dropped sharply following the incident.

On The Today Show segment, Kurtz said that CrowdStrike had been on the phone with customers all night, and that the issue was resolved for many when they rebooted their systems. However, he said the company will not “relent until we get every customer back to where they were and keep the bad guys out of their systems.”

If hosts are still crashing and unable to stay online to download CrowdStrike’s fix, the company has provided a workaround to the issue on its blog.

How has Microsoft responded to the IT outage?

On Thursday night, Microsoft 365 posted on X that the company was “working on rerouting the impacted traffic to alternate systems to alleviate impact” and that they were “observing a positive trend in service availability.”

As the disruption continued on Saturday, David Weston, Vice President of Enterprise and OS Security at Microsoft, published a blog post titled, “Helping our customers through the CrowdStrike outage.”

In the blog post, Weston said that Microsoft estimates “CrowdStrike’s update affected 8.5 million Windows devices, or less than one percent of all Windows machines.” Still, he goes on to say that the outage “demonstrates the interconnected nature of our broad ecosystem—global cloud providers, software platforms, security vendors and other software vendors, and customers.”

Weston also stated that Microsoft is “working around the clock” to help customers. He referenced the steps the company is taking with CrowdStrike to mitigate the effects of the outage, as well as Microsoft’s own post demonstrating manual fixes for the issue. Customers can also track the status of the incident through the “Azure Status Dashboard.”

TIME has reached out to Microsoft 365 for further comment.

Microsoft IT Outage Disrupts Flights, Banks, Media Outlets, and Companies Worldwide


FRANKFURT, Germany — A global technology outage caused by a faulty software update grounded flights, knocked banks and media outlets offline, and disrupted hospitals, small businesses and other services on Friday, highlighting the fragility of a digitized world dependent on just a handful of providers.


The trouble with the update issued by cybersecurity firm CrowdStrike and affecting computers running Microsoft Windows was not a hacking incident or cyberattack, according to CrowdStrike, which apologized and said a fix was on the way.

But hours later, the disruptions continued — and escalated.

Long lines formed at airports in the U.S., Europe and Asia as airlines lost access to check-in and booking services at a time when many travelers are heading away on summer vacations. Hospitals and doctors’ offices had problems with their appointment systems, and cancelled non-urgent surgeries. Several TV stations in the U.S. were also prevented from airing local news early Friday.

Saskia Oettinghaus, a member of the German Olympic diving team, was among those stuck at the Berlin Airport.

“We are on our way to Paris for the Olympic Games and now we are at a standstill here for the time being,” Oettinghaus said.

Other athletes and spectators traveling to Paris were delayed, as were their uniforms and accreditations, but Games organizers said disruptions were limited and didn’t affect ticketing or the torch relay.

A disturbing reminder of vulnerability

“This is a very, very uncomfortable illustration of the fragility of the world’s core internet infrastructure,” said Ciaran Martin, a professor at Oxford University’s Blavatnik School of Government and former Head of Britain’s National Cyber Security Centre.

Cyber expert James Bore said real harm would be caused by the outage because systems people have come to rely on at critical times are not going to be available. Hospitals, for example, will struggle to sort out appointments and those who need care may not get it — and it will lead to deaths, he said.

“All of these systems are running the same software,” Bore said. “We’ve made all of these tools so widespread that when things inevitably go wrong — and they will, as we’ve seen — they go wrong at a huge scale.”

The head of Germany’s IT security agency, Claudia Plattner, said that “the problems will last some time — we can’t expect a very quick solution.” A forecast for when exactly all systems will be up and running is difficult, but “it won’t be hours,” she added.

Microsoft spokesperson Frank X. Shaw confirmed in an emailed statement that “a CrowdStrike update was responsible for bringing down a number of Windows systems globally.” Earlier, the company had posted on the social media platform X that it was working to “alleviate impact” and that they were “observing a positive trend in service availability.”

During an interview on NBC’s “Today Show” Friday, CrowdStrike CEO George Kurtz apologized for the outage, saying the company was “deeply sorry for the impact that we’ve caused to customers, to travelers, to anyone affected by this, including our companies.”

“We know what the issue is” and are working to remediate it, Kurtz said.

“It was only the Microsoft operating system” that was affected, though it didn’t happen on every Microsoft Windows system, he said.

The Austin, Texas-based company’s shares were down nearly 10% in early trading Friday.

A recording playing on its customer service line said, “CrowdStrike is aware of the reports of crashes on Windows hosts related to the Falcon sensor,” referring to one of its products used to block online attacks.

Broadcasters go dark, surgeries delayed, ‘blue screens of death’

Meanwhile, governments and companies across the world scrambled to respond.

The White House said President Joe Biden was briefed on the outage and that his team has been in touch with the company and other impacted entities.

New Zealand’s acting prime minister, David Seymour, said on X that officials in the country were “moving at pace to understand the potential impacts,” adding that he had no information indicating it was a cybersecurity threat.

The issue was causing “inconvenience” for the public and businesses, he added.

On the Milan stock exchange, the FTSE MIB index of blue-chip Italian stocks could not be compiled for an hour, though trading continued.

Major delays reported at airports grew on Friday morning, with most attributing the problems to the booking systems of individual airlines.

In the U.S., airlines United, American and Delta said that at least some flights were resuming after severe disruptions, though delays would persist.

Airlines and railways in the U.K. were also affected, with longer than usual waiting times.

In Germany, Berlin-Brandenburg Airport halted flights for several hours due to difficulties in checking in passengers, while landings at Zurich airport were suspended and flights in Hungary, Italy and Turkey were disrupted.

The Dutch carrier KLM said it had been “forced to suspend most” of its operations.

Amsterdam’s Schiphol Airport warned that the outage was having a “major impact on flights” to and from the busy European hub. The chaotic morning coincided with one of the busiest days of the year for Schiphol.

Widespread problems were reported at Australian airports, where lines grew and some passengers were stranded as online check-in services and self-service booths were disabled — although flights were still operating. Meanwhile, passengers stood in long lines at Rome’s Leonardo Da Vinci airport after flights were cancelled following the global technology outage.

In New England, the outage led to delays at airports and for some hospitals to cancel appointments.

At Mass General Brigham, the largest health care system in Massachusetts, all scheduled non-urgent surgeries, procedures, and medical visits were cancelled Friday because of the outage, according to a spokesperson. Emergency departments remain open and care for patients in the hospital has not been impacted.

Australia is particularly affected by outages

While the outages were being experienced worldwide, Australia appeared to be severely affected by the issue. Disruption reported on the site DownDetector included the banks NAB, Commonwealth and Bendigo, and the airlines Virgin Australia and Qantas, as well as internet and phone providers such as Telstra.

National news outlets — including public broadcaster ABC and Sky News Australia — were unable to broadcast on their TV and radio channels for hours. Some news anchors went on air online from dark offices, in front of computers showing “blue screens of death.”

Hospitals in several countries also reported problems.

Britain’s National Health Service said the outage caused problems at most doctors’ offices across England. NHS England said in a statement that the glitch was affecting the appointment and patient record system used across the public health system.

Some hospitals in northern Germany canceled all elective surgery scheduled for Friday, but emergency care was unaffected.

Shipping was disrupted too: A major container hub in the Baltic port of Gdansk, Poland, the Baltic Hub, said it was battling problems resulting from the global system outage.

Trump Requests $844,600 for Fundraiser Seat at Bitcoin Conference

18 July 2024 at 20:29

Donald Trump is inviting supporters in the cryptocurrency industry to a private fundraising event in Nashville on July 27, with an asking price of $844,600 for a seat at a round table.

Donors have also been offered an opportunity to snap a photo with the presidential candidate for $60,000 per person — slightly less than the current price of one Bitcoin — or $100,000 per couple, according to an invitation to the event. The fundraiser will be hosted amid the Bitcoin Conference 2024, an annual event organized by BTC Media LLC for fans of the original cryptocurrency. Trump is set to speak on the main stage of the conference the same day.


The asking price of $844,600 for the round-table seat represents the maximum combined campaign contribution to the Trump campaign and the Republican National Committee that’s allowed under campaign finance laws. 

Special guests in Nashville will include Trump’s vice presidential pick J.D. Vance, a senator from Ohio, as well as the former president’s Republican primary opponent Vivek Ramaswamy, Tennessee Senator Bill Hagerty, and former Hawaii Representative Tulsi Gabbard, according to an email describing the event obtained by Bloomberg from an invitee, who asked not to be identified since the event is private. Attendance will be limited to 100 to 150 donors, who “will enjoy drinks and hors d’oeuvres while mingling with influential guests,” according to the email. Following the reception, guests will get front-row seats to watch Trump deliver a speech on Bitcoin, the message added.

The Trump campaign did not immediately reply to a request for comment, nor did the people listed as special guests at the event. 

The Nashville fundraising effort is the latest sign of the about-face Trump has made when it comes to his stance on crypto. He has expressed support for Bitcoin after meeting with crypto-mining executives at his Mar-a-Lago club last month. Trump told attendees at that event that he loves and understands cryptocurrency and the benefits that Bitcoin miners bring to power grids. That is a departure from his stance on the asset class five years ago when, as president, he said he was not a fan of cryptocurrencies because their values are based on “thin air” and they can facilitate drug trafficking and other crimes.

How the Crypto World Learned to Love Donald Trump, J.D. Vance, and Project 2025

17 July 2024 at 17:29

When the pandemic hit in 2020, the DJ and personal trainer Jonnie King stopped getting booked for gigs and workout sessions. So he turned to trading crypto, which was rapidly increasing in value at the time. “I was like, ‘Oh my god, there’s hope for me. I can make money while stuck at home,’” he says. 


Four years later, King is a devout believer who keeps most of his assets in cryptocurrencies. And although he voted for Bernie Sanders in 2016—due to Sanders’ focus on uplifting the working class—King is now a vocal supporter of Donald Trump, due to Trump’s own recent embrace of crypto.

“I can probably say it’s a single vote issue for me, because that’s my livelihood,” King tells TIME. “Crypto is how I save my wealth, and if [the Democrats] are trying to attack that, that’s literally taking my money away from me. How am I supposed to support my family?” 

King exemplifies a growing faction from within the cryptocurrency community supporting Trump with open arms. For years, both during his presidency and after, Trump expressed distrust in crypto. In 2021 he went as far as to say that Bitcoin seemed like a scam. But leading up to the 2024 election, Trump has done an about-face and lavished praise onto the technology. And in just the last week, he took several more significant steps to win over the crypto faithful: he announced an appearance at a Bitcoin conference in Nashville on July 27, a new NFT project, and chose a staunchly pro-crypto vice-presidential candidate in J.D. Vance. 

The crypto world has returned the enthusiasm. Despite any misgivings they may have about other parts of Trump’s platform or his criminal convictions, many believe he will provide a significant boon for the industry should he be elected. The crypto community on X, formerly known as Twitter, is filled with pro-Trump sentiment, and crypto money is pouring into Trump’s campaign. And in the aftermath of Trump’s shooting, Bitcoin shot up in price, seemingly based on the belief that the event helped Trump’s chances of being elected.

“Trump has had an incredible and surprisingly positive impact on this space,” Kristin Smith, the CEO of the crypto lobbying group The Blockchain Association, tells TIME. “That was not on my 2024 bingo card.” 

Trump’s crypto U-turn

Trump hasn’t gone into much detail about his newfound love for crypto after criticizing it for so many years. But he has used the industry as a wedge issue, directly contrasting himself with leftist crypto skeptics like Elizabeth Warren. And because the crypto lobby is well-organized and flush with money, it offers Trump a whole lot of potential cash.

Trump has attended several fundraisers full of cryptocurrency executives, who promised to throw him more fundraisers, according to The Washington Post. Crypto moguls Tyler and Cameron Winklevoss each donated $1 million in Bitcoin to Trump, criticizing Biden’s “war against crypto,” and Trump discussed crypto policy with pro-crypto entrepreneur Elon Musk, according to Bloomberg. (Musk has since endorsed Trump.) The price tag of attending a “VIP reception” with Trump at the upcoming Bitcoin conference is a cool $844,600 per person.

When Trump announced his campaign would accept cryptocurrency donations, a statement on his website read that the decision was part of a larger fight against “socialistic government control” over the U.S. financial markets. (Joe Biden hasn’t said much publicly about crypto, but his administration has supported stricter policies designed to protect consumers.)

Read More: Why Donald Trump Is Betting on Crypto

And earlier this month, The Post reported that a Trump advisor added language about crypto to the Republican Party platform, which surprised longtime party members. Part of the passage read: “We will defend the right to mine Bitcoin, and ensure every American has the right to self-custody of their digital assets and transact free from government surveillance and control.” (Government agencies currently use blockchain tracing to track crypto scammers and other criminals.)  

Read More: Inside the Health Crisis of a Texas Bitcoin Town

J.D. Vance, Trump’s VP pick, increases his crypto bona fides

On Monday, Trump further energized crypto fans by choosing the pro-crypto Senator J.D. Vance as his running mate. While running for Senate in 2021, Vance disclosed that he owned over $100,000 worth of Bitcoin. The same year, he called the crypto community “one of the few sectors of our economy where conservatives and other free thinkers can operate without pressure from the social justice mob.” Vance also received significant campaign funding from pro-crypto entrepreneur Peter Thiel. 

Earlier this year, Vance circulated draft legislation to overhaul crypto regulation and make clearer whether specific crypto tokens should be regulated by the SEC or the CFTC. Politico reported that the proposal seems to be “more industry-friendly” than previously-introduced bills. 

The crypto industry has largely cheered the idea of a personal holder of Bitcoin potentially entering the White House next year. “Senator Vance—an emerging voice for fit-for-purpose, pro-innovation crypto legislation—is an ideal candidate to lead the Republican Party’s crypto principles,” Kristin Smith wrote to TIME in an email. 


Project 2025 also supports the crypto industry

Looming over the election is Project 2025, a far-reaching conservative blueprint led by the Heritage Foundation which spells out the policies that Trump should enact if he is elected, including launching mass deportations and countering “anti-white” discrimination. While Trump distanced himself from the proposal on Truth Social, dozens of Trump allies and former administration officials are connected to the project. 

The crypto industry is excited by crypto-related language in Project 2025. The document calls on the president to abolish the Federal Reserve (whose monetary policies have long been abhorred by crypto advocates) and move the U.S. to a free banking system, in which the dollar is backed by a valuable commodity like gold—or, crypto enthusiasts hope, Bitcoin itself. However, there’s been no indication that Trump or anyone in his administration has considered the idea. The document also calls on regulators to clarify rules around cryptocurrencies, just like Vance is pushing for, which could open the door for greater crypto adoption. 

Read More: What is Project 2025? 

Questions about Trump’s commitment to Bitcoin linger

Despite all this, there are crypto fans who are skeptical that Trump’s sudden embrace of Bitcoin will carry lasting weight beyond an election year talking point. Some of Trump’s avowed policy proposals, which have been described as authoritarian, seem to counteract Bitcoin’s anti-government, libertarian bent. For instance, his call for all Bitcoin mining to be located in the U.S. rubbed certain crypto idealists the wrong way, as decentralization and immunity to governmental pressures is a key part of the ethos of crypto mining.

Moe Vela, a former advisor to Biden and a senior advisor to the cryptocurrency project Unicoin, is skeptical of Trump’s intentions. “It was not long ago that he was bashing crypto,” he says. “The crypto community tends to be a bit inexperienced when it comes to legislation, policy and politics—and I encourage them to not fall prey to the pandering.”

Vela argues that “healthy and balanced” regulation of crypto is essential to the industry’s growth. “If we don’t have regulation that weeds out nefarious actors—and we’ve already seen we have our fair share of bad actors—that weakens trust and confidence in the sector,” he says.

And Vitalik Buterin, a co-founder of the cryptocurrency Ethereum, wrote a blog post on June 17 cautioning crypto enthusiasts not to cast votes simply based on a candidate’s crypto position. “Making decisions in this way carries a high risk of going against the values that brought you into the crypto space in the first place,” he wrote.

Some polls suggest that crypto is still an extremely niche interest. The Federal Reserve found that just 7% of American adults used or held crypto in 2023, and another poll suggested that anti-crypto sentiment remains high. But the crypto industry is convinced that there could be thousands of single-issue crypto voters, like Jonnie King, who will lift Trump in the coming election. 

“Maybe it’s just a politician being a politician to win votes,” King says of Trump’s pro-crypto stance. “I’m not saying any man is perfect. But when Biden is campaigning a war against crypto, the one system that is hope for money, I see that as no way going forward. 

“If Trump can give us some hope—even if it’s just hope—it’s something.” 

Malaysia Looks to Criminalize Cyberbullying After TikTok User’s Death

17 July 2024 at 06:15

The death of a Malaysian TikTok user has prompted the government to look into criminalizing cyberbullying and increasing accountability among internet service providers.

Rajeswary Appahu was found dead from apparent suicide on July 5, a day after the 30-year-old lodged a police report over online threats she had received, local media reported. Two people subsequently pleaded guilty in court on Tuesday to communication offenses on TikTok, one of them receiving a fine of 100 ringgit ($21.40).


Such investigations and prosecutions are difficult because there are no specific provisions for cyberbullying under Malaysian laws, according to Law Minister Azalina Othman Said. The government will consider proposals to define “cyberbullying” and make it a crime under the Penal Code, she added.

“Cyberbullying isn’t a new issue in Malaysia, and each year, we are shocked by news of individuals being bullied, which end with them taking their own lives,” she said in a statement Tuesday.

The government is also refining proposals for a bill that would increase internet service providers’ accountability on matters of security, she said. It would give enforcement officers new powers to work closely with internet service providers to protect online users.

The Malaysian Communications and Multimedia Commission said separately it would work with the police to facilitate public complaints on cyberbullying. The commission also planned to hold a nationwide tour to spread its anti-bullying message, it said in a statement Saturday.

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider.

Musk to Move X, SpaceX Headquarters to Texas From California


Elon Musk said he will relocate the headquarters for X and SpaceX to Texas, a likely symbolic move that adds more fuel to the billionaire’s efforts to align himself with the political right and distance himself from left-leaning California. 

Musk made the announcements on his X social media site Tuesday, citing frustration over a new law in California related to transgender children in public schools. California became the first US state to ban school districts from requiring teachers to notify parents about changes to a student’s sexual orientation and gender identity.


“This is the final straw,” Musk said in the post announcing SpaceX’s relocation.

The move is the latest development in Musk’s shift toward the political right. In the past week, Musk offered a full-throated endorsement of former President Donald Trump in the upcoming US election, and he reportedly plans to donate tens of millions of dollars to Trump’s campaign every month. He has long criticized California’s liberal politics, and has threatened to pull X and his other businesses out of the state on numerous occasions.

This is the final straw.

Because of this law and the many others that preceded it, attacking both families and companies, SpaceX will now move its HQ from Hawthorne, California, to Starbase, Texas.

— Elon Musk (@elonmusk) July 16, 2024

SpaceX’s headquarters is currently in Hawthorne, California, but the company has spent the last few years building out a large facility in South Texas dubbed Starbase. The site in Boca Chica is the primary location where SpaceX builds and launches its massive Starship rocket system, and the company recently added a sprawling warehouse factory at Starbase known as the Starfactory, which replaced many of the site’s production tents.

X’s headquarters is currently in San Francisco, though the company put several floors of its main building up for lease last week. It was still expected to retain some of that space for employees. In January, X said it was planning to open a small office in Austin to help deal with content moderation problems.

SpaceX has roughly 13,000 employees. Its Hawthorne facility has been the primary location for production and processing of the company’s Falcon 9 workhorse rocket, as well as the larger, more powerful Falcon Heavy rocket.

Texas Governor Greg Abbott said in an X post that the move “cements Texas as the leader in space exploration.”

The new California law may be personal for Musk. One of his eldest children went to court the day after they turned 18 in 2022 and changed their name, citing “gender identity and the fact that I no longer live with or wish to be related to my biological father in any way, shape or form,” according to court filings.

Musk also has several ties to Texas already. His electric car company, Tesla Inc., earlier this year moved its business incorporation to Texas from Delaware, and similarly moved its headquarters from California to Austin in 2021 amid frustration with pandemic lockdowns. 

But Tesla still has a sizable presence in the Golden State, with an engineering headquarters in Palo Alto. 

Musk also moved his personal residence to Texas several years ago. 

Hong Kong Testing ChatGPT-Style Tool After OpenAI Took Steps to Block Access

16 July 2024 at 10:42

HONG KONG — Hong Kong’s government is testing the city’s own ChatGPT-style tool for its employees, with plans to eventually make it available to the public, its innovation minister said after OpenAI took extra steps to block access from the city and other unsupported regions.

Secretary for Innovation, Technology and Industry Sun Dong said on a Saturday radio show that his bureau was trying out the artificial intelligence program, whose Chinese name translates to “document assistance application for civil servants,” to further improve its capabilities. He plans to have it available for the rest of the government this year.


The program was developed by a generative AI research and development center led by the Hong Kong University of Science and Technology in collaboration with several other universities.

Sun said the model would provide functions like graphics and video design in the future. It was unclear how its capabilities would compare to ChatGPT’s.

Sun’s bureau did not respond to The Associated Press’ questions about the model’s functions.

Sun said on the radio show that industry players and the government would play a role in the model’s future development.

“Given Hong Kong’s current situation, it’s difficult for Hong Kong to get giant companies like Microsoft and Google to subsidize such projects, so the government had to start doing it,” he said.

Beijing and Washington are locked in a race for AI supremacy, with China having ambitions to become the global leader in AI by 2030.

China, including Hong Kong and neighboring Macao, is not on the list of “supported countries and territories” of OpenAI, one of the best-known artificial intelligence companies.

The ChatGPT maker has not explained why certain territories were excluded but said accounts in those places attempting to access its services may become blocked.

According to a post on OpenAI’s online forum and local media reports, the company announced in an email to some users that it would be taking additional measures to block connections from regions not on the approved list starting July 9. It did not explain the reasons behind the latest move.

Francis Fong, the honorary president of the Hong Kong Information Technology Federation, said it was hard to say whether the capabilities of the program in Hong Kong could match those of ChatGPT. But with input from AI companies in the city, Fong said he believed it could catch up technologically.

“Will it become the top? Maybe may not necessarily be as close as that. But I believe it won’t be too far behind,” he said.

He also said a locally developed AI program might more accurately address local language and localized issues, but added it would “make sense” if the final product appears to be “politically correct.”

Like most foreign websites and applications, ChatGPT is technically unavailable in China because of the country’s firewall, which censors the internet for residents. Determined individuals can still gain access via commonly available “virtual private networks” that bypass restrictions.

Chinese tech giants such as Alibaba and Baidu have already rolled out primarily Chinese-language AI models similar to ChatGPT for public and commercial use. However, these AI models must abide by China’s censorship rules.

In May, China’s cyberspace academy said an AI chatbot was being trained on President Xi Jinping’s doctrine, a stark reminder of the ideological parameters within which Chinese AI models will operate.

Also in May, SenseTime, a major Chinese artificial intelligence company, launched SenseChat for users in Hong Kong, where most of the population speaks Cantonese as their mother tongue rather than Mandarin. But a check on Tuesday found the application could not provide answers to politically sensitive questions, such as what the Tiananmen crackdown in 1989 and Hong Kong’s protests in 2019 were about.

During the 1989 crackdown, Chinese troops opened fire on student-led pro-democracy protesters, resulting in hundreds, if not thousands, dead, and that remains a taboo subject in mainland China.

In 2019, protests that started over unpopular Hong Kong legislation morphed into an anti-government movement and the greatest political challenge to Beijing’s rule since the former British colony returned to China in 1997.

Here’s What AT&T Customers Impacted By the Major Data Security Breach Should Do Now

12 July 2024 at 17:17

On Friday, AT&T announced that the data of nearly all of its more than 100 million customers had been downloaded to a third-party platform in a security breach dating back to 2022. The affected parties include AT&T’s cellular customers, customers of mobile virtual network operators using AT&T’s wireless network, and other phone numbers that an AT&T wireless number interacted with during this time, including AT&T landline customers.


A company investigation determined that compromised data includes files containing AT&T records of calls and texts between May 1, 2022 and Oct. 31, 2022, as well as on Jan. 2, 2023. But the company confirmed that the breach did not include the content of those calls or texts, nor their timestamps. It also did not include details such as Social Security numbers, dates of birth, or other personally identifiable information.

The company has shared advice to customers on what the breach means for their data safety and how to protect themselves.

AT&T says it does not believe the data is publicly available, though it does not know what exactly is being done with it.

“We have confirmed that the affected third-party cloud-based workspace has been secured,” AT&T spokesperson Alex Byers told TIME in an emailed statement. “We sincerely regret this incident occurred and remain committed to protecting the information in our care.”

AT&T says it is contacting the customers whose data was compromised by the data breach. Customers can also check the status of their myAT&T, FirstNet, and business AT&T accounts to see if their data was affected through their account profile.

Until December 2024, those impacted by the data breach will be able to receive the phone numbers of the calls and texts compromised by the data breach. Current customers can request this data through their AT&T profile. Active AT&T wireless and home phone customers can get help here, while AT&T Prepaid customers can submit a data request.

Prior customers who were with AT&T during the affected time frame can access their breached data through a data request. If customers cannot provide their case number, they can still submit a legal demand subpoena to their registered agent, CT Corp, for handling and processing, according to AT&T.

AT&T’s website also recommends customers protect themselves from phishing and scamming in multiple ways, including only opening text messages from senders they know, never replying to a text from an unknown sender with personal details, going directly to a company’s website, and looking for the “s” after the “http” in a website’s address to ensure it is secure.

The telecommunications giant also recommended that customers forward suspicious text activity to AT&T—a free service that does not count toward any text plan—and report fraud to AT&T’s fraud team.

What We Know About the New U.K. Government’s Approach to AI

12 July 2024 at 17:06

When the U.K. hosted the world’s first AI Safety Summit last November, Rishi Sunak, the then Prime Minister, said the achievements at the event would “tip the balance in favor of humanity.” At the two-day event, held in the cradle of modern computing, Bletchley Park, AI labs committed to share their models with governments before public release, and 29 countries pledged to collaborate on mitigating risks from artificial intelligence. It was part of the Sunak-led Conservative government’s effort to position the U.K. as a leader in artificial intelligence governance, which also involved establishing the world’s first AI Safety Institute—a government body tasked with evaluating models for potentially dangerous capabilities. While the U.S. and other allied nations subsequently set up their own similar institutes, the U.K. institute boasts 10 times the funding of its American counterpart. 


Eight months later, on July 5, after a landslide loss to the Labour Party, Sunak left office and the newly elected Prime Minister Keir Starmer began forming his new government. His approach to AI has been described as potentially tougher than Sunak’s.  

Starmer appointed Peter Kyle as science and technology minister, giving the lawmaker oversight of the U.K.’s AI policy at a crucial moment, as governments around the world grapple with how to foster innovation and regulate the rapidly developing technology. Following the election result, Kyle told the BBC that “unlocking the benefits of artificial intelligence is personal,” saying the advanced medical scans now being developed could have helped detect his late mother’s lung cancer before it became fatal.

Alongside the potential benefits of AI, the Labour government will need to balance concerns from the public. An August poll of over 4,000 members of the British public conducted by the Centre for Data Ethics and Innovation found that 45% of respondents believed AI taking people’s jobs was one of the biggest risks posed by the technology, while 34% saw the loss of human creativity and problem-solving as among the greatest risks.

Here’s what we know so far about Labour’s approach to artificial intelligence.

Regulating AI

One of the key issues for the Labour government to tackle will likely be how to regulate AI companies and AI-generated content. Under the previous Conservative-led administration, the Department for Science, Innovation and Technology (DSIT) held off on implementing rules, saying in a 2024 policy paper on AI regulation that “introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI.” Labour has signaled a different approach, promising in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models,” suggesting a greater willingness to intervene in the rapidly evolving technology’s development.

Read More: U.S., U.K. Announce Partnership to Safety Test AI Models

Labour has also pledged to ban sexually explicit deepfakes. Unlike proposed legislation in the U.S., which would allow victims to sue those who create non-consensual deepfakes, Labour has considered a proposal by Labour Together, a think tank with close ties to the current Labour Party, to impose restrictions on developers by outlawing so-called nudification tools.

While AI developers have made agreements to share information with the AI Safety Institute on a voluntary basis, Kyle said in a February interview with the BBC that Labour would make that information-sharing agreement a “statutory code.”

Read More: To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

“We would compel by law, those test data results to be released to the government,” Kyle said in the interview.

Timing regulation is a careful balancing act, says Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute.

“The art form is to be right on time with law. That means not too early, not too late,” she says. “The last thing that you want is a hastily thrown together policy that stifles innovation and does not protect human rights.”

Wachter says that striking the right balance on regulation will require the government to be in “constant conversation” with stakeholders such as those within the tech industry, to ensure it has an inside view of what is happening at the cutting edge of AI development when formulating policy.

Kirsty Innes, director of technology policy at Labour Together, points to the U.K. Online Safety Act, which was signed into law last October, as a cautionary tale of regulation failing to keep pace with technology. The law, which aims to protect children from harmful content online, took six years from initial proposal to final passage.

“During [those 6 years] people’s experiences online transformed radically. It doesn’t make sense for that to be your main way of responding to changes in society brought by technology,” she says. “You’ve got to be much quicker about it now.”

Read More: The 3 Most Important AI Policy Milestones of 2023

There may be lessons for the U.K. to learn from the E.U. AI Act, Europe’s comprehensive regulatory framework passed in March, which will come into force on August 1 and become fully applicable to AI developers in 2026. Innes says that mimicking the E.U. is not Labour’s endgame. The European law outlines a tiered risk classification for AI use cases, banning systems deemed to pose unacceptable risks, such as social scoring systems, while placing obligations on providers of high-risk applications like those used for critical infrastructure. Systems said to pose limited or minimal risk face fewer requirements. Additionally, it sets out rules for “general-purpose AI”, which are systems with a wide range of uses, like those underpinning chatbots such as OpenAI’s ChatGPT. General-purpose systems trained on large amounts of computing power—such as GPT-4—are said to pose “systemic risk,” and developers will be required to perform risk assessments as well as track and report serious incidents.

“I think there is an opportunity for the U.K. to tread a nuanced middle ground somewhere between a very hands-off U.S. approach and a very regulatory heavy E.U. approach,” says Innes.

Read More: There’s an AI Lobbying Frenzy in Washington. Big Tech Is Dominating

In a bid to occupy that middle ground, Labour has pledged to create what it calls the Regulatory Innovation Office, a new government body that will aim to accelerate regulatory decisions.

“Part of the idea of the Regulatory Innovation Office is to help regulators develop the capacity that they need a bit quicker and to give them the kind of stimulus and the nudge to be more agile,” says Innes.

A ‘pro-innovation’ approach

In addition to helping the government respond more quickly to the fast-moving technology, Labour says the “pro-innovation” regulatory body will speed up approvals to help new technologies get licensed faster. The party said in its manifesto that it would implement AI into healthcare to “transform the speed and accuracy of diagnostic services, saving potentially thousands of lives.”

Healthcare is just one area where Kyle hopes to use AI. On July 8, he announced the revamp of the DSIT, which will bring on AI experts to explore ways to improve public services.

Meanwhile, former Labour Prime Minister Tony Blair has encouraged the new government to embrace AI to improve the country’s welfare system. A July 9 report by his think tank, the Tony Blair Institute for Global Change, concluded that AI could save the U.K. Department for Work and Pensions more than $1 billion annually.

Blair has emphasized AI’s importance. “Leave aside the geopolitics, and war, and America and China, and all the rest of it. This revolution is going to change everything about our society, our economy, the way we live, the way we interact with each other,” Blair said, speaking on the Dwarkesh Podcast in June.

Read More: How a New U.N. Advisory Group Wants to Transform AI Governance

Modernizing public services is part of Labour’s wider strategy to leverage AI to grow the U.K. tech sector. Other measures include making it easier to set up data centers in the U.K., creating a national data library to bring existing research programs together, and offering decade-long research and development funding cycles to support universities and start-ups.

Speaking to business and tech leaders in London last March, Kyle said he wanted to support “the next 10 DeepMinds to start up and scale up here within the U.K.” 

Workers’ rights

Artificial intelligence-powered tools can be used to monitor worker performance, such as grading call-center employees on how closely they stick to the script. Labour has committed to ensuring that new surveillance technologies won’t find their way into the workplace without consultation with workers. The party has also promised to “protect good jobs” but, beyond committing to engage with workers, has offered few details on how.

Read More: As Employers Embrace AI, Workers Fret—and Seek Input

“That might sound broad brush, but actually a big failure of the last government’s approach was that the voice of the workforce was excluded from discussions,” says Nicola Smith, head of rights at the Trades Union Congress, a union group.

While Starmer’s new government has a number of urgent matters to prioritize, from setting out its legislative plan for year one to dealing with overcrowded prisons, the way it handles AI could have far-reaching implications.

“I’m constantly saying to my own party, the Labour Party, [that] ‘you’ve got to focus on this technology revolution. It’s not an afterthought,’” Blair said on the Dwarkesh Podcast in June. “It’s the single biggest thing that’s happening in the world today.”

Data of Nearly All AT&T Customers Downloaded to Third-Party Platform in 2022 Security Breach

12 July 2024 at 13:19

The data of nearly all customers of the telecommunications giant AT&T was downloaded to a third-party platform in a 2022 security breach, the company said Friday, in a year already rife with massive cyberattacks.

The breach hit AT&T’s cellular customers, customers of mobile virtual network operators using AT&T’s wireless network, and landline customers who interacted with those cellular numbers.


A company investigation determined that compromised data includes files containing AT&T records of calls and texts between May 1, 2022 and Oct. 31, 2022.

AT&T has more than 100 million customers in the U.S. and almost 2.5 million business accounts.

The company said Friday that it has launched an investigation and engaged with cybersecurity experts to understand the nature and scope of the criminal activity.

“The data does not contain the content of calls or texts, personal information such as Social Security numbers, dates of birth, or other personally identifiable information,” AT&T said Friday.

The compromised data also doesn’t include some information typically seen in usage details, such as the time stamps of calls or texts, the company said. The data doesn’t include customer names, but AT&T said that there are often ways, using publicly available online tools, to find the name associated with a specific telephone number.

AT&T said that it currently doesn’t believe that the data is publicly available.

The compromised data also includes records from Jan. 2, 2023, for a very small number of customers. The records identify the telephone numbers an AT&T or MVNO cellular number interacted with during these periods. For a subset of records, one or more cell site identification number(s) associated with the interactions are also included.

The company said it continues to cooperate with law enforcement on the incident and that it understands at least one person has been apprehended so far.

The year has already been marked by several major data breaches, including an earlier attack on AT&T. In March AT&T said that a dataset found on the “dark web” contained information such as Social Security numbers for about 7.6 million current AT&T account holders and 65.4 million former account holders.

AT&T said at the time that it had already reset the passcodes of current users and would be communicating with account holders whose sensitive personal information was compromised.

There’s also been major disruptions at car dealerships in North America after software provider CDK Global faced back-to-back cyberattacks. And Alabama’s education superintendent said earlier this month that some data was “breached” during a hacking attempt at the Alabama State Department of Education.

Shares of AT&T Inc., based in Dallas, fell more than 2% before the markets opened on Friday.

European Union Says X’s Blue Checks Are Deceptive, Transparency Falls Short Under Social Media Law

12 July 2024 at 10:33

LONDON — The European Union says blue checkmarks from Elon Musk’s X are deceptive and that the online platform falls short on transparency and accountability requirements in the first charges against a tech company since the bloc’s new social media regulations took effect.

The European Commission outlined on Friday the preliminary findings from its investigation into X, formerly known as Twitter, under the 27-nation bloc’s Digital Services Act.

The rulebook, also known as the DSA, is a sweeping set of regulations that requires platforms to take more responsibility for protecting users and cleaning up their sites.

Regulators took aim at X’s blue checks, saying they constitute “dark patterns” that are not in line with industry best practice and can be used by malicious actors to deceive users.

After Musk bought the site in 2022, it started issuing the verification marks to anyone who paid $8 per month for one. Before Musk’s acquisition, they mirrored verification badges common on social media and were largely reserved for celebrities, politicians and other influential accounts.

Republicans’ Vow to Repeal Biden’s AI Executive Order Has Some Experts Worried

10 July 2024 at 14:59

On July 8, Republicans adopted a new party platform ahead of a possible second term for former President Donald Trump. Buried among the updated policy positions on abortion, immigration, and crime, the document contains a provision that has some artificial intelligence experts worried: it vows to scrap President Joe Biden’s executive order on AI.


“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology,” the platform reads.

Biden’s executive order on AI, signed last October, sought to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services. It requires developers of the most powerful AI systems to share their safety test results with the U.S. government and calls on federal agencies to develop guidelines for the responsible use of AI in domains such as criminal justice and federal benefits programs.

Read More: Why Biden’s AI Executive Order Only Goes So Far

Carl Szabo, vice president of industry group NetChoice, which counts Google, Meta, and Amazon among its members, welcomes the possibility of the executive order’s repeal, saying, “It would be good for Americans and innovators.”

“Rather than enforcing existing rules that can be applied to AI tech, Biden’s Executive Order merely forces bureaucrats to create new, complex burdens on small businesses and innovators trying to enter the marketplace. Over-regulating like this risks derailing AI’s incredible potential for progress and ceding America’s technological edge to competitors like China,” said Szabo in a statement.

However, recent polling shared exclusively with TIME indicates that Americans on both sides of the political aisle are skeptical that the U.S. should avoid regulating AI in an effort to outcompete China. According to the poll conducted in late June by the AI Policy Institute (AIPI), 75% of Democrats and 75% of Republicans believe that “taking a careful controlled approach” to AI is preferable to “moving forward on AI as fast as possible to be the first country to get extremely powerful AI.”

Dan Hendrycks, director of the Center for AI Safety, says, “AI safety and risks to national security are bipartisan issues. Poll after poll shows Democrats and Republicans want AI safety legislation.”

Read more: U.S. Voters Value Safe AI Development Over Racing Against China, Poll Shows

The proposal to remove the guardrails put in place by Biden’s executive order runs counter to the public’s broad support for a measured approach to AI, and it has prompted concern among experts. Amba Kak, co-executive director of the AI Now Institute and former senior advisor on AI at the Federal Trade Commission, says Biden’s order was “one of the biggest achievements in the last decade in AI policy,” and that scrapping the order would “feel like going back to ground zero.” Kak says that Trump’s pledge to support AI development rooted in “human flourishing” is a subtle but pernicious departure from more established frameworks like human rights and civil liberties.

Ami Fields-Meyer, a former White House senior policy advisor on AI who worked on Biden’s executive order, says, “I think the Trump message on AI is, ‘You’re on your own,’” referring to how repealing the executive order would end provisions aimed at protecting people from bias or unfair decision-making from AI.

NetChoice and a number of think tanks and tech lobbyists have railed against the executive order since its introduction, arguing it could stifle innovation. In December, venture capitalist and prominent AI investor Ben Horowitz criticized efforts to regulate “math, FLOPs and R&D,” alluding to the compute thresholds set by Biden’s executive order. Horowitz said his firm would “support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.”

While Trump has previously accused tech companies like Google, Amazon, and Twitter of working against him, in June, speaking on Logan Paul’s podcast, Trump said that the “tech guys” in California gave him $12 million for his campaign. “They gave me a lot of money. They’ve never been into doing that,” Trump said.

The Trump campaign did not respond to a request for comment.

Even if Trump is re-elected and does repeal Biden’s executive order, some changes wouldn’t be felt right away. Most of the leading AI companies agreed to voluntarily share safety testing information with governments at an international summit on AI in Seoul last May, meaning that removing the requirements to share information under the executive order may not have an immediate effect on national security. But Fields-Meyer says, “If the Trump campaign believes that the rigorous national security safeguards proposed in the executive order are radical liberal ideas, that should be concerning to every American.”

Fields-Meyer says the back-and-forth over the executive order underscores the importance of passing federal legislation on AI, which “would bring a lot more stability to AI policy.” There are currently over 80 bills relating to AI in Congress, but it seems unlikely that any of them will become law in the near future.

Sandra Wachter, a professor of technology regulation at the Oxford Internet Institute, says Biden’s executive order was “a seminal step towards ensuring ethical AI and is very much on par with global developments in the UK, the EU, Canada, South Korea, Japan, Singapore and the rest of the world.” She says she worries it will be repealed before it has had a chance to have a lasting impact. “It would be a very big loss and a big missed opportunity if the framework was to be scrapped and AI governance to be reduced to a partisan issue,” she says. “This is not a political problem, this is a human problem—and a global one at that.”

Correction, July 11

The original version of this story misidentified a group that has spoken out against Biden’s executive order. It is NetChoice, not TechNet.