
Meta to Use Facial Recognition to Crack Down on Scams and Recover Locked-Out Accounts

22 October 2024 at 02:15

Facebook parent company Meta Platforms Inc. will start using facial recognition technology to crack down on scams that use pictures of celebrities to look more legitimate, a strategy referred to as “celeb-bait ads.”

Scammers use images of famous people to entice users into clicking on ads that lead them to shady websites, which are designed to steal their personal information or request money. Meta will start using facial recognition technology to weed out these ads by comparing the images in the post with the images from a celebrity’s Facebook or Instagram account.
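Meta’s post doesn’t spell out the underlying mechanics, so the snippet below is only a rough sketch of what such a comparison can look like, using the open-source face_recognition library in Python. The file names, the 0.6 distance threshold, and the overall flow are assumptions for illustration, not Meta’s system.

```python
# Illustrative sketch only; not Meta's implementation.
import face_recognition

# Reference photos from the public figure's verified profile (hypothetical files)
profile_images = ["celebrity_profile_1.jpg", "celebrity_profile_2.jpg"]
known_encodings = []
for path in profile_images:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face, if any
        known_encodings.append(encodings[0])

# Faces detected in the suspicious ad creative (hypothetical file)
ad_image = face_recognition.load_image_file("suspicious_ad.jpg")
ad_encodings = face_recognition.face_encodings(ad_image)

# Flag the ad for review if any face in it is close to a profile face
for ad_face in ad_encodings:
    distances = face_recognition.face_distance(known_encodings, ad_face)
    if len(distances) and distances.min() < 0.6:  # smaller distance = closer match
        print("Possible celeb-bait match: route ad to scam review")
        break
```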


“If we confirm a match and that the ad is a scam, we’ll block it,” Meta wrote in a blog post. Meta did not disclose how common this type of scam is across its services.

With nearly 3.3 billion daily active users across all of its apps, Meta relies on artificial intelligence to enforce many of its content rules and guidelines. That has enabled Meta to better handle the deluge of daily reports about spam and other content that breaks the rules. It has also led to problems in the past when legitimate accounts have been unintentionally suspended or blocked due to automated errors.

Read More: The Face Is the Final Frontier of Privacy

Meta says it will also start using facial recognition technology to better assist users who get locked out of their accounts. As part of a new test, some users can submit a video selfie when they’ve been locked out of their accounts. Meta will then compare the video to the photos on the account to see if there is a match. 

Meta has previously asked locked-out users to submit other forms of identity verification, like an ID card or official certificate, but says that the video selfie option would only take a minute to complete. Meta will “immediately delete any facial data generated after this comparison regardless of whether there’s a match or not,” the company wrote in a blog post.

The social networking giant has a complicated history with facial recognition technology. It previously used facial recognition to identify users in uploaded photos as a way to encourage people to tag their friends and increase connections. Meta was later sued by multiple U.S. states for profiting off this technology without user consent, and in 2024 was ordered to pay the state of Texas $1.4 billion as part of the claim. Several years earlier, it agreed to pay $650 million in a separate legal suit filed in Illinois.

The company will not run this video selfie test in Illinois or Texas, according to Monika Bickert, Meta’s vice president of content policy. 


October Is Cybersecurity Awareness Month. Here’s How to Stay Safe From Scams

19 October 2024 at 13:13

NEW YORK — October is Cybersecurity Awareness Month, which means it’s the perfect time to learn how to protect yourself from scams.

“Scams have become so sophisticated now. Phishing emails, texts, spoofing caller ID, all of this technology gives scammers that edge,” said Eva Velasquez, president and CEO of the Identity Theft Resource Center.


As scammers find new ways to steal money and personal information, consumers should be more vigilant about who they trust, especially online. A quick way to remember what to do when you think you’re getting scammed is to think about the three S’s, said Alissa Abdullah, also known as Dr. Jay, Mastercard’s deputy chief security officer.

“Stay suspicious, stop for a second (and think about it) and stay protected,” she said.

Whether it’s romance scams or job scams, impersonators are looking for ways to trick you into giving them money or sharing your personal information. Here’s what to know:

Know scammers’ tactics

Three common tactics used by scammers are based on fear, urgency and money, said security expert Petros Efstathopoulos. Here’s how they work:

— Fear

When a scammer contacts you via phone or email, they use language that makes it seem like there is a problem you need to solve. For example, a scammer might email you claiming that your tax return has an error and that you’ll get in trouble if you don’t fix it.

— Urgency

Because scammers are good at creating a sense of urgency, people tend to rush, which makes them vulnerable. Scammers often tell people they need to act right away, which can lead to them sharing private information such as their Social Security numbers.

— Money

Scammers use money as bait, Efstathopoulos said. They might impersonate tax professionals or the IRS saying you will get a bigger tax refund than you expect if you pay them for their services or share your personal information.

Know the most common scams

Simply being aware of typical scams can help, experts say. Robocalls in particular frequently target vulnerable individuals like seniors, people with disabilities, and people with debt.

“If you get a robocall out of the blue playing a recorded message trying to get you to buy something, just hang up,” said James Lee, chief operating officer at the Identity Theft Resource Center. “Same goes for texts — anytime you get them from a number you don’t know asking you to pay, wire, or click on something suspicious.”

Lee urges consumers to hang up and call the company or institution in question at an official number.

Scammers will also often imitate someone in authority, such as a tax or debt collector. They might pretend to be a loved one calling to request immediate financial assistance for bail, legal help, or a hospital bill.

Romance scams

So-called “romance scams” often target lonely and isolated individuals, according to Will Maxson, assistant director of the Division of Marketing Practices at the FTC. These scams can take place over longer periods of time — even years.

Kate Kleinart, 70, who lost tens of thousands to a romance scam over several months, said to be vigilant if a new Facebook friend is exceptionally good-looking, asks you to download WhatsApp to communicate, attempts to isolate you from friends and family, and/or gets romantic very quickly.

“If you’re seeing that picture of a very handsome person, ask someone younger in your life — a child, a grandchild, a niece or a nephew — to help you reverse-image search or identify the photo,” she said.

She said the man in pictures she received was a plastic surgeon from Spain whose photos have been stolen and used by scammers.

Kleinart had also been living under lockdown during the early pandemic when she got the initial friend request, and the companionship and communication meant a lot to her while she was cut off from family. When the scam fell apart, she missed the relationship even more than the savings.

“Losing the love was worse than losing the money,” she said.

Job scams

Job scams involve a person pretending to be a recruiter or a company in order to steal money or information from a job seeker.

Scammers tend to use the name of an employee from a large company and craft a job posting that matches similar positions. An initial red flag is that scammers usually try to make the job very appealing, Velasquez said.

“They’re going to have very high salaries for somewhat low-skilled work,” she said. “And they’re often saying it’s a 100% remote position because that’s so appealing to people.”

Some scammers post fake jobs, but others reach out directly to job seekers through direct messages or texts. If the scammers are looking to steal your personal information, they may ask you to fill out several forms that include information like your Social Security number and driver’s license details.

The only information a legitimate employer should ask for at the beginning of the process is your skills, your work experience, and your contact information, Velasquez said.

Other details don’t generally need to be shared with an employer until after you’ve gotten an offer.

Investment scams

According to Lois Greisman, an associate director of marketing practices at the Federal Trade Commission, an investment scam constitutes any get-rich-quick scheme that lures targets via social media accounts or online ads.

Investment scammers typically add different forms of “testimony,” such as from other social media accounts, to support that the “investment” works. Many of these also involve cryptocurrency. To avoid falling for these frauds, the FTC recommends independently researching the company — especially by searching the company’s name along with terms like “review” or “scam.”

Quiz scams

When you’re using Facebook or scrolling Google results, be aware of quiz scams, which typically appear innocuous and ask about topics you might be interested in, such as your car or favorite TV show. They may also ask you to take a personality test.

Despite these benign-seeming questions, scammers can then use the personal information you share to respond to security questions from your accounts or hack your social media to send malware links to your contacts.

To protect your personal information, the FTC simply recommends steering clear of online quizzes. The commission also advises consumers to use random answers for security questions.

“Asked to enter your mother’s maiden name? Say it’s something else: Parmesan or another word you’ll remember,” advises Terri Miller, consumer education specialist at the FTC. “This way, scammers won’t be able to use information they find to steal your identity.”

Marketplace scams

When buying or selling products on Instagram or Facebook Marketplace, keep in mind that not everyone who reaches out to you has the best intentions.

To avoid being scammed when selling via an online platform, the FTC recommends checking buyers’ profiles, not sharing any codes sent to your phone or email, and avoiding accepting online payments from unknown persons.

Likewise, when buying something from an online marketplace, make sure to diligently research the seller. Take a look at whether the profile is verified, what kind of reviews they have, and the terms and conditions of the purchase.

Don’t pick up if you don’t know who is calling

Scammers often reach out by phone. Ben Hoffman, head of strategy and consumer products at Fifth Third Bank, recommends that you don’t pick up unknown incoming calls.

“Banks don’t ask you for your password,” said Hoffman. If you believe your bank is trying to reach you, give them a call at a number listed on their website.

This makes it easier to know for sure that you’re not talking to a scammer. As a general rule, banks don’t often call unless there is suspicious activity on your account or if you previously contacted them about a problem.

If you receive many unknown calls that turn out to be scammers or robocalls, you can use the tools built into your phone to block spam; both iPhone and Android include settings for silencing or blocking calls from unknown numbers.

Use all of the technology at your disposal

There are many tools at your disposal that you can use to protect yourself from scammers online.

— Use a password manager to ensure you’re using complex, randomly generated passwords that scammers can’t guess; a minimal example of generating one appears after this list.

— Regularly checking your credit report and bank statements is a good practice since it can help you identify if someone has been using your bank account without your knowledge.

— Turn on multi-factor authentication to make sure impersonators aren’t able to access your social media or bank accounts.
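Neither the article nor the experts name a specific tool, but a password manager essentially automates the step below. As a rough illustration only, here is a short Python sketch that generates the kind of long, random password the advice refers to; the length and character set are arbitrary choices, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # A password manager generates and stores something like this for you.
    print(generate_password())
```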

When in doubt, call for help

As scams get more sophisticated, it’s difficult to know whom to trust or whether a person is real or an impersonator. If you aren’t sure whether a job recruiter is legitimate or whether your bank is really the one asking you for information, find organizations that can help you, recommended Velasquez.

Organizations like the Identity Theft Resource Center and the AARP Fraud Watch Network offer free services to people who need help identifying scams or figuring out what to do if they’ve been the victim of one.

Share what you know with loved ones

If you’ve taken all the necessary steps to protect yourself, you might want to help those around you. Whether you’re helping your grandparents to block unknown callers on their phones or sharing tips with your neighbors, talking with others about how to protect themselves from scams can be very effective.

Report the scam

If you or a family member is a victim of a scam, it’s good practice to report it on the FTC’s website.

Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

18 October 2024 at 11:10

Artificial General Intelligence—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it’s an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified that “The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence.” She continued that leading AI companies such as OpenAI, Google, and Anthropic are “treating building AGI as an entirely serious goal.”


Toner’s co-witness William Saunders—a former researcher at OpenAI who recently resigned after losing faith in the company acting responsibly—echoed her concerns, testifying that “Companies like OpenAI are working towards building artificial general intelligence” and that “they are raising billions of dollars towards this goal.”

Read More: When Might AI Outsmart Us? It Depends Who You Ask

All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI’s mission states: “To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” aiming for “safe AGI.” Google DeepMind aspires “to solve intelligence” and then to use the resultant AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects “human-level AI will be passed in the mid-2020s.” New entrants into the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month’s hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are “folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don’t have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.”

Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now—one to three years has been the latest prediction,” he said. He didn’t mince words about where responsibility lies: “What we should learn from social media, that experience is, don’t trust Big Tech.”

The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGI’s imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed “within the next 5 years.” Some 82% of respondents also said we should “go slowly and deliberately” in AI development.

That’s because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of “novel biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario AGI “could lead to literal human extinction.”

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what’s going on with the most powerful AI systems being created by tech companies. Government transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone is expecting it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.

Read More: What an American Approach to AI Regulation Should Look Like

In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees would be able to “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI—what Senator Blumenthal referred to as “the 64 billion dollar question”—but the window for action may be rapidly closing. Some AI figures, including Saunders, think it may arrive in as little as three years.

Ignoring the potentially imminent challenges of AGI won’t make them disappear. It’s time for policymakers to begin to get their heads out of the cloud.

TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a ‘Relic of the Past’

17 October 2024 at 01:03

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME’s 2023 and 2024 100 Influential People in AI lists—all of whom are playing a role in shaping the future of the technology.


Following a discussion between TIME’s CEO Jessica Sibley and executives from the event’s sponsors—Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia’s VP of Europe, the Middle East, and Africa—and after the main course had been served, attention turned to a panel discussion.

The panel featured TIME 100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the UK-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI’s impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists’ views on the risks posed by AI reflected their various focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models more broadly for any other harmful capabilities, the focus was on the need to “get our heads around the empirical data that will tell us much more about what’s coming down the pike and what kind of risks are associated with it.”

Birhane, meanwhile, emphasized what she sees as the “massive hype” around AI’s capabilities and potential to pose existential risk. “These models don’t actually live up to their claims,” she said. Birhane argued that “AI is not just computational calculations. It’s the entire pipeline that makes it possible to build and to sustain systems,” citing the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly in relation to their energy and water use), and the underpaid labor of data-labellers as examples. “There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline,” she said. Riparbelli suggested that both “fixing the problems already in society today” and thinking about “Terminator-style scenarios” are important, and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls when it comes to issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, “best practices for how you deal with different safety challenges change very quickly,” Leung said, pointing to a “big asymmetry between what is known publicly to academics and to civil society, and what is known within these companies themselves.”

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. “There’s a huge onus on companies to continue to innovate on safety practices,” said Leung. Riparbelli agreed, suggesting companies may have a “moral imperative” to ensure their systems are safe. At the same time, “governments have to play a role here. That’s completely non-negotiable,” said Leung.

Equally, Birhane was clear that “effective regulation” based on “empirical evidence” is necessary. “A lot of governments and policy makers see AI as an opportunity, a way to develop the economy for financial gain,” she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. “Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society.”

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. “What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?” she asked. “It’s not a sufficiently-resourced question.” Still, Leung pointed to the fact that China was party to last year’s AI Safety Summit hosted by the U.K. as cause for optimism. “It’s very important to make sure that they’re around the table,” she said. 

One concrete area where we can observe the advance of AI capabilities in real-time is AI-generated video. In a synthetic video created by his company’s technology, Riparbelli’s AI double declared “text as a technology is ultimately transitory and will become a relic of the past.” Expanding on the thought, the real Riparbelli said: “We’ve always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where for most consumers, at least, they prefer to watch and listen to their content.” 

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will “enable anyone to create a Hollywood film from their bedroom without needing more than their imagination,” he said. This technology poses obvious challenges in terms of its ability to be abused, for example by creating deepfakes or spreading misinformation, but Riparbelli emphasizes that his company takes steps to prevent this, noting that “every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies.”

Riparbelli suggests that rather than a “technology-centric” approach to regulation on AI, the focus should be on designing policies that reduce harmful outcomes. “Let’s focus on the things we don’t want to happen and regulate around those.”

The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.

Why Surgeons Are Wearing The Apple Vision Pro In Operating Rooms

15 October 2024 at 16:41

Twenty-four years ago, the surgeon Santiago Horgan performed the first robotically assisted gastric-bypass surgery in the world, a major medical breakthrough. Now Horgan is working with a new tool that he argues could be even more transformative in operating rooms: the Apple Vision Pro.

Over the last month, Horgan and other surgeons at the University of California, San Diego have performed more than 20 minimally invasive operations while wearing Apple’s mixed-reality headsets. Apple released the headsets to the public in February, and they’ve largely been a commercial flop. But practitioners in some industries, including architecture and medicine, have been testing how they might serve particular needs. 


Horgan says that wearing headsets during surgeries has improved his effectiveness while lowering his risk of injury—and could have an enormous impact on hospitals across the country, especially those without the means to afford specialty equipment. “This is the same level of revolution, but will impact more lives because of the access to it,” he says, referring to his previous breakthrough in 2000.

Read More: How Virtual Reality Could Transform Architecture.

Horgan directs the Center for the Future of Surgery at UC San Diego, which explores how emerging technology might improve surgical processes. In laparoscopic surgery, doctors send a tiny camera through a small incision in a patient’s body, and the camera’s view is projected onto a monitor. Doctors must then operate on a patient while looking up at the screen, a tricky feat of hand-eye coordination, while processing other visual variables in a pressurized environment. 

“I’m usually turning around and stopping the operation to see a CT scan; looking to see what happened with the endoscopy [another small camera that provides a closer look at organs]; looking at the monitor for the heart rate,” Horgan says.

As a result, most surgeons report experiencing discomfort while performing minimal-access surgery, a 2022 study found. About one-fifth of surgeons polled said they would consider retiring early because their pain was so frequent and uncomfortable. A good mixed-reality headset, then, might allow a surgeon to look at a patient’s surgical area and, without looking up, see virtual screens that show them the laparoscopy camera and the patient’s vitals.

In previous years, Horgan tried other headsets, like Google Glass and Microsoft HoloLens, and found they weren’t high-resolution enough. But he tested the Apple Vision Pro before its release and was immediately impressed. Horgan applied for approval from the institutional review board at the University of California, which green-lit the use of the devices. In September, he led the first surgery with the Apple headset, for a paraesophageal hernia. “We are all blown away: It was better than we even expected,” Horgan says.

In the weeks since, UC San Diego’s minimally invasive department has performed more than 20 surgeries with the Apple Vision Pro, including acid-reflux surgery and obesity surgery. Doctors, assistants, and nurses all don headsets during the procedures. No patients have yet opted out of the experiment, Horgan says. 

Christopher Longhurst, chief clinical and innovation officer at UC San Diego Health, says that while the Vision Pro’s price tag of $3,499 might seem daunting to a regular consumer, it’s inexpensive compared to most medical equipment. “The monitors in the operating room are probably $20,000 to $30,000, ” he says. “So $3,500 for a headset is like budget dust in the healthcare setting.” This price tag could make it especially appealing to smaller community hospitals that lack the budget for expensive equipment. (The FDA has yet to approve the device for widespread medical use.)

Longhurst is also testing the ability of the Apple Vision Pro to create 3D radiology imaging. Over the next couple of years, he expects the team at UC San Diego to release several papers documenting the efficacy of headsets in different medical applications. “We believe that it’s going to be standard of care in the next years to come, in operating rooms all over the world,” Longhurst says.

Apple Vision Pro is not the only device competing for the attention of surgeons. There are other surgical visualization systems on the market promising similar benefits. The startup Augmedics developed an AR navigation system for spinal surgeons, which superimposes a 3D image of a patient’s CT scan over their body, theoretically allowing the doctor to operate as if they had X-ray vision. Another company, Vuzix, offers headsets that are significantly lighter than the Vision Pro, and allow a surgeon anywhere in the world to view an operating surgeon’s viewpoint and give them advice.

Ahmed Ghazi, the director of minimally invasive and robotic surgery at Johns Hopkins in Baltimore, has used Vuzix headsets for remote teaching, allowing trainees to see from a proctor’s viewpoint. He recently used the Microsoft HoloLens to give a patient a “surgical rehearsal” of her operation: both donned headsets, and he guided her through a virtual 3D recreation of her CT scan, explaining how he would remove her tumor. “We were able to walk her through the process: ‘I’m going to find the feeding vessel to the tumor, clip it, dissect away from here, make sure I don’t injure this,’” he says. “There is a potential for us to bring patients to that world, to give them better understanding.” 

Ghazi says that as these headsets are increasingly brought into operating rooms, it’s crucial for doctors to take precautions, especially around patient privacy. “Any device that is connected to a network or WiFi signal, has the potential to be exposed or hacked,” he says. “We have to be very diligent about what we’re doing and how we’re doing it.”

Read More: How Meteorologists Use AI to Forecast Storms.

Miguel Burch, who leads the general surgery division at Cedars-Sinai Medical Center in Los Angeles, has tested a variety of medical-focused headsets over the years. He says that the Apple Vision Pro is especially useful because of its adaptability. “If everything we wanted to use in augmented reality is proprietarily attached to a different device, then we have 10 headsets and 15 different monitors,” Burch says. “But with this one, you can use it with anything that has a video feed.”

Burch says he’s sustained three different injuries over the course of his career from performing minimally-invasive surgeries. He now hopes to bring the Apple Vision Pro to Cedars-Sinai, and believes that the headset’s current medical functions are the “tip of the iceberg.” “Not only is it ergonomically a solution to the silent problem of surgeons having to end their careers earlier,” he says, “but the ability to have images overlap is going to tremendously improve what we can do.”

I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks

13 October 2024 at 11:00

If uncontrolled artificial general intelligence—or “God-like” AI—is looming on the horizon, we are now about halfway there. Every day, the clock ticks closer to a potential doomsday scenario.

That’s why I introduced the AI Safety Clock last month. My goal is simple: I want to make clear that the dangers of uncontrolled AGI are real and present. The Clock’s current reading—29 minutes to midnight—is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks. While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must stay alert and engaged.


This is not alarmism; it’s based on hard data. The AI Safety Clock tracks three essential factors: the growing sophistication of AI technologies, their increasing autonomy, and their integration with physical systems. 

We are seeing remarkable strides across these three factors. The biggest are happening in machine learning and neural networks, with AI now outperforming humans in specific areas like image and speech recognition, mastering complex games like Go, and even passing tests such as business school exams and Amazon coding interviews.

Read More: Nobody Knows How to Safety-Test AI

Despite these advances, most AI systems today still depend on human direction, as noted by the Stanford Institute for Human-Centered Artificial Intelligence. They are built to perform narrowly defined tasks, guided by the data and instructions we provide.

That said, some AI systems are already showing signs of limited independence. Autonomous vehicles make real-time decisions about navigation and safety, while recommendation algorithms on platforms like YouTube and Amazon suggest content and products without human intervention. But we’re not at the point of full autonomy—there are still major hurdles, from ensuring safety and ethical oversight to dealing with the unpredictability of AI systems in unstructured environments.

At this moment, AI remains largely under human control. It hasn’t yet fully integrated into the critical systems that keep our world running—energy grids, financial markets, or military weapons—in a way that allows it to operate autonomously. But make no mistake, we are heading in that direction. AI-driven technologies are already making gains, particularly in the military with systems like autonomous drones, and in civilian sectors, where AI helps optimize energy consumption and assists with financial trading.

Once AI gets access to more critical infrastructures, the risks multiply. Imagine AI deciding to cut off a city’s power supply, manipulate financial markets, or deploy military weapons—all without any, or limited, human oversight. It’s a future we cannot afford to let materialize.

But it’s not just the doomsday scenarios we should fear. The darker side of AI’s capabilities is already making itself known. AI-powered misinformation campaigns are distorting public discourse and destabilizing democracies. A notorious example is the 2016 U.S. presidential election, during which Russia’s Internet Research Agency used automated bots on social media platforms to spread divisive and misleading content.

Deepfakes are also becoming a serious problem. In 2022, we saw a chilling example when a deepfake video of Ukrainian President Volodymyr Zelensky emerged, falsely portraying him calling for surrender during the Russian invasion. The aim was clear: to erode morale and sow confusion. These threats are not theoretical—they are happening right now, and if we don’t act, they will only become more sophisticated and harder to stop.

While AI advances at lightning speed, regulation has lagged behind. That is especially true in the U.S., where efforts to implement AI safety laws have been fragmented at best. Regulation has often been left to the states, leading to a patchwork of laws with varying effectiveness. There’s no cohesive national framework to govern AI development and deployment. California Governor Gavin Newsom’s recent decision to veto an AI safety bill, fearing it would hinder innovation and push tech companies elsewhere, only highlights how far behind policy is.

Read More: Regulating AI Is Easier Than You Think

We need a coordinated, global approach to AI regulation—an international body to monitor AGI development, similar to the International Atomic Energy Agency for nuclear technology. AI, much like nuclear power, is a borderless technology. If even one country develops AGI without the proper safeguards, the consequences could ripple across the world. We cannot let gaps in regulation expose the entire planet to catastrophic risks. This is where international cooperation becomes crucial. Without global agreements that set clear boundaries and ensure the safe development of AI, we risk an arms race toward disaster.

At the same time, we can’t turn a blind eye to the responsibilities of companies like Google, Microsoft, and OpenAI—firms at the forefront of AI development. Increasingly, there are concerns that the race for dominance in AI, driven by intense competition and commercial pressures, could overshadow the long-term risks. OpenAI has recently made headlines by shifting toward a for-profit structure.

Artificial intelligence pioneer Geoffrey Hinton’s warning about the race between Google and Microsoft was clear: “I don’t think they should scale this up more until they have understood whether they can control it.”

Part of the solution lies in building fail-safes into AI systems—“kill-switches,” or backdoors that would allow humans to intervene if an AI system starts behaving unpredictably. California’s AI safety law included provisions for this kind of safeguard. Such mechanisms need to be built into AI from the start, not added in as an afterthought.

There’s no denying the risks are real. We are on the brink of sharing our planet with machines that could match or even surpass human intelligence—whether that happens in one year or ten. But we are not helpless. The opportunity to guide AI development in the right direction is still very much within our grasp. We can secure a future where AI is a force for good.

But the clock is ticking.

The AI Revolution Is Coming for Your Non-Union Job


During this election cycle, we’ve heard a lot from the presidential candidates about the struggles of America’s workers and their families. Kamala Harris and Donald Trump each want to claim the mantle as the country’s pro-worker candidate. Accordingly, union leaders took the stage not only at the Democratic National Convention, as usual, but at the Republican convention too.  At the VP debate, J.D. Vance and Tim Walz offered competing views on how best to support workers.


Surprisingly, one economic issue the candidates have yet to address is one in which millions of voters have a great deal at stake: the looming impact of new generative artificial intelligence (GenAI) technologies on work and livelihoods. The candidates’ silence belies a stark reality: the next president will take office in a world already changed by GenAI—and heading for much greater disruption.

Our new research at Brookings shows why this requires urgent attention and why it matters to voters. In a new study using data provided by one of the leading AI developers, OpenAI, we analyzed over a thousand occupations for their likely exposure to GenAI and its growing capabilities. Overall, we find that some 30% of the workforce could see at least half of their work tasks impacted—though not necessarily automated fully—by today’s GenAI, while more than 85% of all workers could see at least 10% of their tasks impacted. Even more powerful models are planned for release soon, with those requiring minimal human oversight likely to follow.

America’s workers are smart. They are far more concerned about GenAI reshaping livelihoods than leaders in government and business have acknowledged so far. In a 2023 Pew Research Center survey, nearly two-thirds (62%) of adults said they believe GenAI will have a major impact on jobs and jobholders—mostly negative—over the next two decades.

Yet technology is not destiny. AI capabilities alone will not determine the future of work. Workers, rather, can shape the trajectory of AI’s impact on work—but only if they have a voice in the technology’s design and deployment.

Who will be most affected by GenAI? Most of us will probably be surprised. We tend to think of men in blue-collar, physical roles in factories and warehouses as the workers most exposed to automation, and frequently they have been, along with dock workers and others. Yet GenAI, and the related software systems it integrates with, turn these assumptions on their head: manually intensive blue-collar roles are likely to be least and last affected. The same applies to electricians, plumbers and other relatively well-paying skilled trades occupations boosted by the nation’s net zero transition and massive investments in infrastructure. Instead, it is knowledge work, including creative occupations and office-based roles, that is most exposed to technologies like ChatGPT and DALL-E, at least in the near term.

It is also women, not men, who face the greatest risk of disruption and automation. This is especially true of women in middle-skill clerical roles—currently nearly 20 million jobs—that have long offered a measure of economic security for workers without advanced degrees, for example in roles such as HR assistant, legal secretary, bookkeeper, customer service agent, and many others. The stakes are high for this racially and ethnically diverse group of lower-middle-class women, many of whom risk falling into more precarious, lower-paid work if this work is displaced.

Read More: How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves

All of this raises the question of what it will take to make sure most workers gain, rather than lose, from AI’s uncanny and often impressive capabilities. To be sure, we can’t predict the speed and scale of future AI advances. But what is clear is that the design and deployment of generative AI technologies is moving far faster than our response to shaping it. Fundamental questions, which the next president and Congress will need to address, remain unanswered: How do we ensure workers can proactively shape AI’s design and deployment? What will it take to ensure workers benefit meaningfully from AI’s strengths? And what guardrails are needed for workers to avoid AI’s harms as much as possible?

Here’s a key issue: Among the most pressing priorities for the next president to address is what we call the “Great Mismatch,” the reality that the occupations most likely to see disruptions from AI are also the least likely to employ workers who belong to a union or have other forms of voice and representation.

In an era of technological change, Americans are clear about the benefits of unions. According to new Gallup polling, 70% of Americans hold a positive view of unions—the highest approval in 60 years. And both Harris and Trump have aggressively courted unions in their campaigns. Yet in the sectors where GenAI is poised to create the most change, as few as 1% of workers benefit from union representation (the public sector workforce is a notable exception).

This stark mismatch poses a serious risk for workers. In 2023, Hollywood writers showed the country why collective worker power is so critical in an era of technological disruption. Concerned that technology like ChatGPT could threaten their livelihoods, thousands of writers went on strike for five months. By securing first-of-their-kind protections in the contract they negotiated with major studios, the writers set a historic precedent: it is now up to the writers whether and how they use generative AI as a tool to assist and complement—not replace—them.

Writer Raphael Bob-Waksberg, creator of the show BoJack Horseman, said, of his union’s AI victories and what they could mean for other workers, “Workers are going to demand similar things in their industries, because this affects all different kinds of people … I think it’s going to require unions. I think you can create some guardrails around it and use political power and worker power to protect people.”

The lack of worker voice and influence over deployment of GenAI should be a core concern for workers and policymakers alike—but it should get employers’ attention too.

Research shows there are big benefits to companies from incorporating workers and their unique knowledge and insights into the design and rollout of new technologies, compared to top-down implementation, which means there is a powerful business case for worker engagement.

For now, almost none of the developers and deployers of AI are engaging workers or viewing them as uniquely capable partners. To the contrary, at least in private, many business leaders convey a sense of inevitability at the mention of AI’s growing risks for workers and their livelihoods. It’s no secret that relentless pressure to maximize short-term earnings, especially for publicly traded companies, focuses many CEOs on cutting labor costs in every way possible. It remains to be seen whether the coming AI revolution will defy the fixation on “lean and mean” operations, which came to dominate American corporate strategy a generation ago.

Presidential elections offer voters a referendum on the past as well as the future, even if the latter is only partly visible for now. AI represents one of the great challenges of our time, posing both risks and opportunities for the American worker. The next president will need to help determine the policies, investments, guardrails and social protections—or lack of same—that will shape the future of work for millions of Americans. It’s time we learned whether the candidates for that office understand that.

How Meteorologists Are Using AI to Forecast Hurricane Milton and Other Storms

9 October 2024 at 21:23

On Wednesday evening, Hurricane Milton will become the fifth hurricane in 2024 to make landfall in the mainland U.S. As storms like this one grow more frequent and intense, artificial intelligence is playing an increasingly central role in efforts by meteorologists and other scientists to track these storms and mitigate their harms. 

For years, meteorologists have built complex forecasting models of storms based on wind speeds, temperature, humidity, and other factors recorded via readings from planes, buoys, and satellites. But these models can take hours to produce updated forecasts. 


Machine learning models, on the other hand, draw upon vast knowledge of the earth’s atmosphere and data from how previous storms have unfolded. They excel at pattern recognition, teasing out trends that most humans can’t discern, and doing so in a fraction of the time. And this year, they have repeatedly offered accurate storm-related predictions, generated within seconds, and days in advance of a storm hitting a coast. 

“The meteorology community is, in some cases reluctantly, and in some cases fully embracing, AI modeling,” says the Houston-based meteorologist Matt Lanza. “In terms of hurricanes, we’ve learned that the AI modeling can go toe-to-toe with the physics-based model, so you have to use it.” 

Read More: Here’s What You Need to Know About Hurricane Milton’s Expected Path

Lanza says that this week, a consensus has emerged among many different types of models that Milton will probably land between Clearwater and Sarasota, Florida. AI modeling, Lanza says, “probably picked up on that potential outcome a good 12 to 18 hours before a lot of the other modeling.”  

AI’s accuracy tracking storms

This isn’t the first time that AI models have predicted a hurricane’s path this year before traditional models. GraphCast, created by the Google AI company DeepMind and trained on four decades of global weather data, correctly predicted that Beryl, the first major Atlantic hurricane of 2024, would make landfall in Texas, even as a top European model predicted a Mexican landfall. The team behind the project won Britain’s top engineering prize this year, with one of the judges calling it “a revolutionary advance.”

A couple of months later, a European AI model called AIFS successfully predicted the path of Francine as it hit the Gulf Coast. “The consistency was incredible,” Lanza says. “Even the best performing traditional models I don’t believe were this consistent.” Lanza wrote on his blog that the model’s accuracy gave his team confidence that the storm would not be a major concern for Texas, which allowed people on the ground to plan and marshal resources more appropriately. 

Other major forecasting models include FourCastNet, developed by NVIDIA, and Pangu-Weather, from Huawei. The National Hurricane Center (NHC), for its part, has integrated AI in its forecasting processes, with NHC Deputy Director Jamie Rhome calling it a “pillar” of success last year. “The sophistication of AI has dramatically improved and it continues to improve, and that’s critical because we only have three hours to make the forecast,” he told NBC Miami.

Despite its success, the technology still has plenty of hiccups. A 2024 study found that while machine learning models effectively forecasted large-scale features of the European windstorm Ciarán, they failed to register damaging surface winds and other unusual aspects of the storm. Lanza says that AI models tend to underestimate the intensity of hurricanes and sometimes struggle with gauging precipitation. 

Because of these errors, Lanza says it’s crucial for meteorologists not to solely rely on AI forecasting. “We’re not turning the reins over to these things and just saying, ‘make me a forecast and I’ll just regurgitate it,’” he says. “You have to still look at the broader spectrum of tools available to you.”

Predicting storm surge

At the University of Florida, AI scientist Zhe Jiang is working to solve one of these more granular problems in accuracy: how storm surges will affect Florida’s coasts. Jiang says that AI for coastal modeling has lagged behind more global weather forecasting, due to the lack of high-quality training data and the fact that data-driven neural networks are often unaware of fundamental physical principles, like how water will move or disperse. 

To move this field forward, Jiang and his colleagues, including coastal oceanographers, have been training an AI surrogate based on coastal simulations. According to preliminary results, this AI has created forecasts for ocean currents 500 times faster than previous models. Jiang hopes to train the model for storm surges soon, which he hopes could save lives and prevent property damage. “In current forecasting, it may take a few hours to make a forecast. If we reduce the time to seconds, disaster managers can plan more time ahead and have more people better plan for the potential damage,” Jiang says. 
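The UF team’s surrogate and training pipeline aren’t described in detail here, so the following is only a generic sketch of the underlying idea: making a data-driven model “aware” of a physical law by adding a physics-residual penalty to the ordinary data-fit loss. The toy equation (simple exponential decay), network size, loss weighting, and training points are all assumptions chosen purely for illustration, not the team’s method.

```python
# Generic physics-informed training sketch, not the UF coastal surrogate.
import torch
import torch.nn as nn

torch.manual_seed(0)
k = 1.5  # decay rate of the toy system du/dt = -k * u (assumed for illustration)

# Small neural surrogate mapping time t -> predicted state u(t)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Sparse "observations" of the true solution u(t) = exp(-k t)
t_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-k * t_data)

# Collocation points where only the physics is enforced, not data
t_phys = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(t_data) - u_data) ** 2).mean()

    u = net(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    physics_loss = ((du_dt + k * u) ** 2).mean()  # residual of du/dt = -k*u

    loss = data_loss + physics_loss  # equal weighting is an arbitrary choice
    loss.backward()
    opt.step()

# The trained surrogate should track both the sparse data and the decay law.
print(net(torch.tensor([[1.5]])).item(), torch.exp(torch.tensor(-k * 1.5)).item())
```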

But Jiang is careful to note that simply using general-purpose AI models to predict storm surges could have disastrous consequences. “Neural networks sometimes make over-confident but inaccurate predictions, causing severe consequences in high-stake decision-making,” he says. 

Many other researchers have embarked on parallel projects. A researcher at the University of Miami is training computers in the hopes of building 3D replicas of active storm systems, so that planes don’t have to fly into them to take readings. Another company is using machine learning to try to predict where power outages will happen, and how many residents may be affected. 

Jiang says facing hurricanes like Milton makes his work all the more urgent: “There are more and more severe events and coastal hazards like Hurricane Milton going on near my home, and we are really racing with time to develop AI technologies faster.” 

Documentarian Says He’s Solved the Mystery of Bitcoin’s Creator. Insiders Are Extremely Skeptical

9 October 2024 at 01:01

This article contains spoilers for Money Electric: The Bitcoin Mystery.

Who is Bitcoin’s founder, Satoshi Nakamoto? The question has perplexed and excited cryptocurrency fans ever since Bitcoin was created by someone with that username in 2009. Fans have endlessly theorized, debated and hunted for clues across the web, while investigative journalists have tried to unwind the mystery with no success. To Bitcoin acolytes, Satoshi’s identity matters because the founder is imbued with near-religious significance: “It’s the immaculate conception,” Bitcoin investor Michael Saylor said this year. 


Satoshi, who has not publicly communicated in years, also sits on an enormous stash of Bitcoin: over one million of them, which is about 5% of the total supply and would make him worth around $60 billion: roughly the 25th richest person alive. His return to the markets could send enormous shocks through an already volatile ecosystem.

Now, a documentary filmmaker is arguing that he’s identified Satoshi—and contends that Bitcoin’s founder didn’t walk away at all, but rather has played a significant role in shaping the technology’s development. 

In Money Electric: The Bitcoin Mystery, which streams on October 8 on Max, filmmaker Cullen Hoback spends three years traveling the world with early Bitcoin mavens before reaching a conclusion: that Satoshi is Peter Todd, a 39-year-old Canadian Bitcoin developer, whose ideas and hot temper have earned him notoriety in the Bitcoin community.

In an email to TIME, Todd denied that he is Satoshi. “I’m not Satoshi,” he wrote. “I discovered Bitcoin first from reading the whitepaper, as I’ve said publicly many times.”

Four other early Bitcoiners who spoke to TIME expressed skepticism that Todd could be Satoshi, based on their knowledge of Todd’s coding ability and temperament. But Hoback is confident he’s come to the right conclusion. “People have a vision of who they want it to be: They want someone perfect, who matches their ideals,” Hoback says. “But this is where the evidence lies—and I think the case is so strong.”

Hoback Investigates the Cypherpunks

In 2021, Hoback’s docuseries about the QAnon conspiracy, Q: Into the Storm, ran on HBO. In it, Hoback makes a case that Ron Watkins, a former administrator of the social network 8Chan, is the conspiracy’s leader, Q. (Watkins has denied this.) After the series aired, Hoback says that the series’ executive producer Adam McKay—who directed The Big Short and executive-produced Succession—reached back out to Hoback with a suggestion for who he should unmask next. 

“Don’t say Satoshi,” Hoback remembers telling him. “It’s the most over-pitched and under-delivered story in the documentary space.” 

But Hoback was intrigued by the idea and decided to dive in. (McKay is an executive producer on this project as well.) To start, Hoback reached out to one of the few people that Satoshi actually cites in his original Bitcoin white paper: the British cryptographer Adam Back, a core member of the 1990s movement known as the Cypherpunks. The Cypherpunks were a group of libertarian-leaning technologists who feared the internet would allow governments to strip people of their privacy, and wanted to create technical solutions to preserve individual rights online. In 2002, Back created Hashcash, a system to limit email spam. Its cryptographic structure laid the seeds for Bitcoin’s own framework.

Read More: Inside the Health Crisis of a Texas Bitcoin Town.

Hoback spent some time with Back, and investigated whether Back himself might be Satoshi, as others have speculated. During one of their meetups in Latvia, Back introduced Hoback to Todd. With his hoodies and unkempt facial hair, Todd practically embodies the visual stereotype of a coder. He receives grants to conduct research and write code for various parts of the crypto ecosystem, and frequently gives talks at Bitcoin conferences. “If Adam Back introduces you to somebody, you pay attention: He has his reasons,” Hoback says. “I could just tell that there was something strange about their dynamic, which almost had a ‘spy versus spy’ quality to it.” 

Todd was an early adopter of Bitcoin. According to Matt Leising’s Out of the Ether, he attended the first Bitcoin meetup in Toronto in 2012, where Vitalik Buterin, the soon-to-be founder of Ethereum, was also in attendance. As Hoback talked to Todd and researched him, he found small clues pointing his way.

Todd had been interested in creating digital cash from an early age; as a teenager and self-professed “young libertarian” in 2001, he had emailed Back to ask him how Hashcash’s structure might be applied to a “real currency” with a “decentralized ‘central’ database.” Todd was Canadian; Satoshi used British/Canadian spellings of certain words like “favour” and “neighbour,” but also the American/Canadian spelling of “realize.” Todd was a self-taught coder who was in graduate school for physics when Bitcoin was created—and when Hoback asked a programmer to assess Bitcoin’s code, they told him that it lacked polish, and was written as if “a physicist became a software engineer.”

Then, Hoback found what he considered a “smoking gun”: a thread from a Bitcoin forum in 2010, two days before Satoshi stopped posting on the site and largely disappeared from public life. In the thread, Satoshi wrote a few paragraphs proposing a highly technical change to Bitcoin’s code. A few hours later, Peter Todd—who was, at this point, a nobody in the Bitcoin community—responded with what appeared to be a slight correction: “Of course, to be specific, the inputs and outputs can’t match ‘exactly’ if the second transaction has a transaction fee.”

When Hoback reread this post, he came to believe that Todd wasn’t correcting Satoshi, but was Satoshi: he had mistakenly logged into his personal account, Hoback believed, and written a post clarifying his previous message, which had been posted under the pseudonym. A few years later, Todd would write the solution that he and Satoshi had been discussing, called “replace-by-fee,” and implement it in Bitcoin.

When Hoback confronted Todd and Adam Back on camera about this post and told Todd about his theory that he was Satoshi, Todd denied it, calling it “ludicrous.” He also became visibly nervous, laughing and muttering under his breath. “His reaction is extremely telling,” Hoback says, “and Adam’s reaction, or his lack of saying anything, is almost as revealing as the evidence compiled up until that point.”

Hoback now says he’s “very, very confident” that Todd is Satoshi. “When I put together a list of why and why not it might be him, the ‘might not be him’ list was very short,” he says. (That list includes the question of why Todd didn’t simply delete his potentially incriminating post.)

Read More: Why Bitcoin Mining Companies Are Pivoting to AI.

In the documentary, Todd tells Hoback that if he were Bitcoin’s creator, he would have destroyed “the ability to prove that I was Satoshi.” In an email forwarded to TIME, Todd wrote that the quest to find Satoshi was not only “dumb,” but “dangerous,” and said his coding abilities aren’t at the level of Bitcoin’s code base.

Adam Back wrote on X after the trailer was released that the “documentary will presumably be wrong, as no one knows who Satoshi is.”

Insiders Cast Doubts Upon Todd

The Bitcoin community as a whole is incentivized to keep Satoshi anonymous: In 2021, Coinbase included Satoshi’s identification in a list of business risk factors. Many Bitcoiners have responded with anger to the HBO project’s very existence, arguing that Satoshi’s privacy should be respected and that he could be charged by governments for violating securities laws or threatening national security if identified.

Over Bitcoin’s 15-year history, similar attempts to unmask Satoshi have been met with fierce backlash. “The hero-founder cult in crypto has caused nothing but problems,” says Austin Campbell, professor at Columbia Business School and the founder of a crypto consulting firm. “The fact that Bitcoin was kind of put out there and then Satoshi vanished is integral to its success.”

Pointing to Todd will likely especially incense many insiders, some of whom believe Todd has actually hurt Bitcoin’s development. Much of the animosity towards Todd comes from his role in a conflict known as the block size wars, in which Bitcoin enthusiasts split into two camps over how best to scale Bitcoin for consumer growth. Todd, along with Adam Back and Back’s company Blockstream, argued against implementing a “hard fork” of Bitcoin that would allow it to process transactions much faster. After a lengthy back and forth, Todd’s side won.

In July, a thread on a Bitcoin-focused subreddit filled with commenters criticizing Todd. “His organization subverted Bitcoin, preventing it from scaling,” one commenter wrote. “He caused sooooooo much damage to BTC,” another posted, referring to the replace-by-fee function that Todd had “discussed” with Satoshi back in 2010. “I don’t know why anyone gives him the time of day.”

If Todd is in fact Satoshi, as Hoback argues, then his role in the block size wars is significant, because it would show Bitcoin’s founder having an inordinate sway over Bitcoin’s future, despite the fact that it is supposed to be a decentralized, community-driven project. “You say it’s open-source, but Blockstream manipulated the ongoing development so they always had the thumb up the scale in their favor,” says Bryce Weiner, a Bitcoin developer who opposed Todd during the block size wars. Weiner, however, dismisses the idea that Peter Todd could be Satoshi. “He’s just somebody who knew how to engineer and fell into Bitcoin and got lucky,” he says.

Read More: The Prince of Crypto Has Concerns.

Samson Mow, a former executive of Blockstream who is featured prominently in the documentary, also doubts that Todd could have created Bitcoin. “He’s too contrarian to focus on building something as complex and involved,” he says.

Mike Hearn, one of Bitcoin’s earliest developers, emailed with Satoshi in 2010 and says there are several clues pointing to Satoshi being much older. Satoshi’s coding style, Hearn says, was antiquated for its time: “It suggested he came of age as a developer in the ’90s and then stopped: He did not keep up with the evolution of the industry,” he says. (Todd was 10 in 1995). Satoshi also referenced an obscure 1979 financial event—the Hunt brothers trying to corner the silver market—”as if he remembered it,” Hearn says.

Todd’s Social Media Presence

Todd maintains a divisive presence on Twitter, where he voices extreme right-wing views about issues like migrants in America and Russia’s invasion of Ukraine. “The Russian people are genocidal terrorists whose goal is to steal what others have. Our goal must be to exterminate them,” he wrote in July. “Kill them and you make the world better.” He’s written that it would be strategically advantageous for Israel to bomb Lebanon’s hospitals and reposted conspiracy theories about migrants in Springfield, Ohio.

Todd also uses social media and podcasts to criticize some of Satoshi’s ideas, which is rare in a community that usually accepts Satoshi’s ideas as gospel. When talking about Bitcoin fans who love Bitcoin’s hard-coded cap of 21 million coins, Todd said on a recent podcast: “They’ve bought in so hard to the 21 million meme that they just cannot accept that Satoshi might have screwed that one up.” In another Tweet, he contended: “The sigops mistake is evidence that Satoshi worked alone, and was in a rush.” 

And in 2015, Todd wrote: “I think Bitcoin is a great example of how sometimes world-changing ideas are actually pretty simple and don’t require you to be a world-class expert to come up with them, just someone with an open mind, a flash of brilliance, and a supportive community to fix the flaws and bring the idea to fruition.”

Hoback sees this as evidence in support of his theory. “His fixation on whether or not Satoshi got stuff right or wrong is telling,” Hoback says. “Think back on who you were 15 years ago—maybe you got some things wrong. But then people are like, ‘No, it’s the word of God, and we have to take it as gospel’: That would be pretty annoying.” 

While the evidence he presents is circumstantial, Hoback hopes the documentary will spur deeper investigations into a question that has bedeviled the crypto community for a decade and a half. “This conclusion is unexpected and it’s not who many people in the Bitcoin community want it to be,” Hoback says. “But maybe once they see the film and absorb the evidence, and then want to get closer to the answer, they’ll look into this as well.”

States Sue TikTok Over Children’s Mental Health

8 October 2024 at 14:25

(NEW YORK) — More than a dozen states and the District of Columbia filed lawsuits against TikTok on Tuesday, alleging the popular short-form video app is harming youth mental health by designing its platform to be addictive to kids.

The lawsuits stem from a national investigation into TikTok, which was launched in March 2022 by a bipartisan coalition of attorneys general from many states, including California, Kentucky and New Jersey. All of the complaints were filed in state courts.


At the heart of each lawsuit is the TikTok algorithm, which powers what users see on the platform by populating the app’s main “For You” feed with content tailored to people’s interests. The lawsuits also emphasize design features that they say make children addicted to the platform, such as the ability to scroll endlessly through content, push notifications that come with built-in “buzzes” and face filters that create unattainable appearances for users.

Read More: As a Potential TikTok Ban Looms, Creators Worry About More Than Just Their Bottom Lines

In its filings, the District of Columbia called the algorithm “dopamine-inducing,” and said it was created to be intentionally addictive so the company could trap many young users into excessive use and keep them on its app for hours on end. TikTok does this despite knowing that these behaviors will lead to “profound psychological and physiological harms,” such as anxiety, depression, body dysmorphia and other long-lasting problems, the complaint said.

“It is profiting off the fact that it’s addicting young people to its platform,” District of Columbia Attorney General Brian Schwalb said in an interview.

Keeping people on the platform is “how they generate massive ad revenue,” Schwalb said. “But unfortunately, that’s also how they generate adverse mental health impacts on the users.”

TikTok does not allow children under 13 to sign up for its main service and restricts some content for everyone under 18. But Washington and several other states said in their filing that children can easily bypass those restrictions, allowing them to access the service adults use despite the company’s claims that its platform is safe for children.

Read More: Here’s All the Countries With TikTok Bans as Platform’s Future in U.S. Hangs In Balance

Their lawsuit also takes aim at other parts of the company’s business.

The district alleges TikTok is operating as an “unlicensed virtual economy” by allowing people to purchase TikTok Coins – a virtual currency within the platform – and send “Gifts” to streamers on TikTok LIVE who can cash them out for real money. TikTok takes a 50% commission on these financial transactions but hasn’t registered as a money transmitter with the U.S. Treasury Department or authorities in the district, according to the complaint.

Officials say teens are frequently exploited for sexually explicit content through TikTok’s LIVE streaming feature, which has allowed the app to operate essentially as a “virtual strip club” without any age restrictions. They say the cut the company gets from the financial transactions allows it to profit from exploitation.

Many states have filed lawsuits against TikTok and other tech companies over the past few years as a reckoning grows against prominent social media platforms and their ever-growing impact on young people’s lives. In some cases, the challenges have been coordinated in a way that resembles how states previously organized against the tobacco and pharmaceutical industries.

Read More: Column: The Grim Reality of Banning TikTok

Last week, Texas Attorney General Ken Paxton sued TikTok, alleging the company was sharing and selling minors’ personal information in violation of a new state law that prohibits these practices. TikTok, which disputes the allegations, is also fighting against a similar data-oriented federal lawsuit filed in August by the Department of Justice.

Several Republican-led states, such as Nebraska, Kansas, New Hampshire, Iowa and Arkansas, have also previously sued the company, some unsuccessfully, over allegations it is harming children’s mental health, exposing them to “inappropriate” content or allowing young people to be sexually exploited on its platform. Arkansas has brought a legal challenge against YouTube, as well as Meta Platforms, which owns Facebook and Instagram and is being sued by dozens of states over allegations it’s harming young people’s mental health. New York City and some public school districts have also brought their own lawsuits.

TikTok, in particular, is facing other challenges at the national level. Under a federal law that took effect earlier this year, TikTok could be banned from the U.S. by mid-January if its China-based parent company ByteDance doesn’t sell the platform.

Both TikTok and ByteDance are challenging the law at an appeals court in Washington. A panel of three judges heard oral arguments in the case last month and is expected to issue a ruling, which could be appealed to the U.S. Supreme Court.

How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves

8 October 2024 at 11:19

The Age of AI has also ushered in the Age of Debates About AI. And Yuval Noah Harari, author of Sapiens and Homo Deus, and one of our foremost big-picture thinkers about the grand sweep of humanity, history and the future, is now out with Nexus: A Brief History of Information Networks from the Stone Age to AI.

Harari generally falls into the AI alarmist category, but his thinking pushes the conversation beyond the usual arguments. The book is a look at human history through the lens of how we gather and marshal information. For Harari, this is essential, because how we use—and misuse—information is central to how our history has unfolded and to our future with AI.


In what Harari calls the “naïve view of information,” humans have assumed that more information will necessarily lead to greater understanding and even wisdom about the world. But of course, this hasn’t been true. “If we are so wise, why are we so self-destructive?” Harari asks. Why do we produce things that might destroy us if we can’t control them?

For Harari—to paraphrase another big-picture thinker—the fault, dear Brutus, is not in ourselves, but in our information networks. Bad information leads to bad decisions. Just as we’re consuming more and more addictive junk food, we’re also consuming more and more addictive junk information.

He argues that the problem with artificial intelligence is that “AI isn’t a tool—it’s an agent.” And unlike other tools of potential destruction, “AI can process information by itself, and thereby replace humans in decision making.” In some ways, this is already happening. For example, in the way Facebook was used in Myanmar—the algorithms had “learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage.”

Where I differ with Harari is that he seems to regard human nature as roughly fixed, and algorithms as inevitably exploiting human weaknesses and biases. To be fair, Harari does write that “as a historian I do believe in the possibility of change,” but that possibility of change at the individual level is swamped in the tide of history he covers, with a focus very much on systems and institutions, rather than the individual humans that make up those institutions.

Harari acknowledges that AI’s dangers are “not because of the malevolence of computers but because of our own shortcomings.” But he discounts the fact that we are not defined solely by our shortcomings and underestimates the human capacity to evolve. Aleksandr Solzhenitsyn, who was no stranger to systems that malevolently use networks of information, still saw the ultimate struggle as taking place within each human being: “The line separating good and evil,” he wrote, “passes not through states, nor between classes, nor between political parties either—but right through every human heart—and through all human hearts.”

So yes, AI and algorithms will certainly continue to be used to exploit the worst in us. But that same technology can also be used to strengthen what’s best in us, to nurture the better angels of our nature. Harari himself notes that “alongside greed, hubris, and cruelty, humans are also capable of love, compassion, humility, and joy.” But then why assume that AI will only be used to exploit our vices and not to fortify our virtues? After all, what’s best in us is at least as deeply imprinted and encoded as what’s worst in us. And that code is also open source for developers to build on.

Harari laments the “explicit orders from above” guiding the algorithms, but AI can allow for very different orders from above that promote benevolence and cooperation instead of division and outrage. “Institutions die without self-correcting mechanisms,” writes Harari. And the need to do the “hard and rather mundane work” of building those self-correcting mechanisms is what Harari calls the most important takeaway of the book. But it’s not just institutions that need self-correcting mechanisms. It’s humans, as well. By using AI, with its power of hyper-personalization, as a real-time coach to strengthen what is best in us, we can also strengthen our individual self-correcting mechanisms and put ourselves in a better position to build those mechanisms for our institutions. “Human life is a balancing act between endeavoring to improve ourselves and accepting who we are,” he writes. AI can help us tip the balance toward the former.

Read More: How AI Can Help Humans Become More Human

Harari raises the allegory of Plato’s Cave, in which people are trapped in a cave and see only shadows on a wall, which they mistake for reality. But the technology preceding AI has already trapped us in Plato’s Cave. We’re already addicted to screens. We’re already completely polarized. The algorithms already do a great job of keeping us captive in a perpetual storm of outrage. Couldn’t AI be the technology that in fact leads us out of Plato’s Cave?

As Harari writes, “technology is rarely deterministic,” which means that, ultimately, AI will be what we make of it. “It has enormous positive potential to create the best health care systems in history, to help solve the climate crisis,” he writes, “and it can also lead to the rise of dystopian totalitarian regimes and new empires.”

Of course, there are going to be plenty of companies that continue to use algorithms to divide us and prey on our basest instincts. But we can also still create alternative models that augment our humanity. As Harari writes, “while computers are nowhere near their full potential, the same is true of humans.”

Read More: AI-Driven Behavior Change Could Transform Health Care

As it happens, it was in a conversation with Jordan Klepper on The Daily Show that Harari gave voice to the most important and hopeful summation of where we are with AI: “If for every dollar and every minute that we invest in developing artificial intelligence, we also invest in exploring and developing our own minds, it will be okay. But if we put all our bets on technology, on AI, and neglect to develop ourselves, this is very bad news for humanity.”

Amen! When we recognize that humans are works in progress and that we are all on a journey of evolution, we can use all the tools at our disposal, including AI, to become the best versions of ourselves. This is the critical point in the nexus of humanity and technology that we find ourselves in, and the decisions we make in the coming years will determine if this will be, as Harari puts it, “a terminal error or the beginning of a hopeful new chapter in the evolution of life.”


Crypto Is Pouring Cash Into the 2024 Elections. Will It Pay Off?

8 October 2024 at 11:00

“I’m a one-issue voter, and it’s Bitcoin,” yells Jonathan Martin, a former NFL offensive lineman and current MBA student at the Wharton School, his voice rising above the pounding dance music of a Philadelphia nightclub. It’s a gray Monday evening in September, and in a couple hours, many Philadelphians will turn their attention to the Eagles game. But for now, the party is here, with hundreds of crypto acolytes packed into a venue called Vinyl, drinking beer and espresso martinis, eating cheesesteaks, and enjoying a performance by the indie pop star Lauv—all as part of a well-funded effort to make cryptocurrency a top issue in this election year. 


It’s an uphill battle. Crypto failed to appear in recent polls from Pew and Gallup that asked respondents to list the most important issues facing Americans. A recent Federal Reserve survey found that only about 7% of Americans owned or used crypto in 2023. And 69% of Americans polled in swing states this spring still held a negative view of crypto, just a couple years removed from the crash-and-burn scandal of FTX’s Sam Bankman-Fried.

But crypto bigwigs are betting that money and passion can overcome all this. So far the industry has poured $119 million into elections across the U.S. in 2024, accounting for nearly half of all corporate political contributions this cycle, according to the nonprofit Public Citizen. “Crypto has really flooded the campaign markets to defend an issue that really doesn’t have a whole lot of public appeal,” says Craig Holman, a campaign finance expert at Public Citizen. “The amount of money has gotten so outrageous.” 

Leading the charge is the cryptocurrency exchange Coinbase, which pumped $50 million into a pro-crypto super PAC called Fairshake and other related entities. In its first election cycle, Fairshake has emerged as one of the biggest super PACs in the U.S., raising more than $200 million, according to an analysis of financial disclosures by OpenSecrets. Only a pro-Trump super PAC has raised more.

So far, Fairshake and its affiliated PACs have poured cash into dozens of congressional races this year, backing the winner in 36 of the first 42 it entered, from the Republican House Majority Whip Tom Emmer in Minnesota to Democratic Representative Yadira Caraveo of Colorado. The gusher of crypto cash has helped spur vague but positive statements from both major presidential candidates: Donald Trump has vowed to make America the “crypto capital of the planet,” while Kamala Harris pledged to “encourage innovative technologies like AI and digital assets.” 

It’s not clear crypto’s big political push will amount to much after the election. At the top of the industry’s policy goals is the passage of a bill known as FIT21, which would establish a framework that turns over the regulation of most digital assets to the Commodity Futures Trading Commission (CFTC), rather than the U.S. Securities and Exchange Commission (SEC), which under the Biden Administration has been led by crypto skeptic Gary Gensler. FIT21 passed the House in May, but has not received a vote in the Senate and faces an uncertain future in the next Congress. In October, a researcher for the investment bank TD Cowen wrote that they were “pessimistic” that any crypto legislation would pass before January—and that crypto’s spending gambits in Senate races could backfire.

To crypto backers, the deep-pocketed campaign is a necessary step to grease the industry’s relationships in Washington at a moment when the industry’s energy and key businesses are moving overseas. To crypto skeptics and campaign-finance watchdogs, it underscores the industry’s habit of making big promises about reforming broken systems while replicating many of those systems’ tactics. Either way, its polarizing approach to this election is fitting for an industry that thrives on risky wagers. “We’ve got Democrats that are upset. We have Republicans that are upset. But I think it’s really going to come down to whether the right bets were made or not,” says Kristin Smith, the CEO of the Blockchain Association, a D.C.-based lobbying group. “It’s a high-risk, high-reward situation.”


Just two years ago, crypto’s influence in politics had become a source of shame in DC. Bankman-Fried had whizzed around town, donating over $100 million to campaigns, talking up the technology’s potential to spur financial innovation and spread prosperity. But Bankman-Fried was arrested and hit with a slew of federal criminal charges, including violating campaign finance laws by making political contributions with customer money. As FTX collapsed, Bankman-Fried was withering about his dealings with Washington: “F— regulators. They make everything worse,” he wrote to a Vox journalist.

Bankman-Fried’s campaign-finance charge was dropped due to extradition complications, but he was eventually found guilty by a New York jury on eight other charges, including fraud, and sentenced to 25 years in prison. The scandal helped tank Bitcoin’s price, erasing the gains of the pandemic-era bull run. Crypto’s legislative agenda ground to a halt. 

Industry execs thought the blowup could have positive long-term effects. “We had some measure of hope that for the terribleness of the FTX scandal and the reputational harm that it had brought to the industry writ large, it would be a catalyst to create clear federal rules,” Faryar Shirzad, chief policy officer of Coinbase, tells TIME. “But the opposite happened.” Senator Elizabeth Warren, a Massachusetts Democrat, vowed to build an “anti-crypto army.” Lawsuits filed against crypto companies by the SEC surged 183% in the six months after the FTX collapse, charging many companies with flouting securities laws. Gensler also attempted to block the creation of Bitcoin ETFs before losing that battle in court.

Read More: Inside Sam Bankman-Fried’s Attempted Conquest of Washington.

Gensler’s effort to crack down on crypto gave a beleaguered industry a target to coalesce around. Crypto fans argued that Gensler’s SEC was stifling innovation and forcing talent to move abroad. Many were particularly incensed by an SEC lawsuit against an obscure Utah crypto company called DEBT Box; a judge accused SEC lawyers of making “materially false and misleading representations”  while attempting to freeze the firm’s assets. (A spokesperson for the SEC did not respond to a request for comment for this story.) 

One of Gensler’s biggest targets was Coinbase, the leading crypto platform in the U.S. In 2023, the SEC accused Coinbase of operating an unregistered securities exchange. The move frustrated a company that promotes itself as a model of integrity in a scandal-plagued industry, especially when compared to big offshore competitors like FTX or Binance. “No company has suffered more in many ways from Mr. Gensler’s regulation by enforcement approach than Coinbase,” says Paul Grewal, chief legal officer at Coinbase. In June 2024, Coinbase sued the SEC in the hopes of gaining access to internal documents that might reveal the agency’s approach to crypto regulation.

In the meantime, crypto prices had ticked back up, with Bitcoin reaching a record high in March. And some industry leaders decided the best way to stymie Gensler’s effort would be to throw their support behind Republicans. Over the summer, venture capitalists like Marc Andreessen, Ben Horowitz, and the Winklevoss twins announced that they would support Trump. The Republican nominee—who had disparaged crypto over the years, saying that Bitcoin “seemed like a scam”—embraced the Bitcoin community’s entreaties. Trump’s vice presidential pick of J.D. Vance also heartened crypto lovers: Vance has long been a supporter of crypto on the grounds that it could help unshackle “free thinkers” from the “social justice mob.”

At the Bitcoin Conference in Nashville in July, Trump vowed to fire Gensler on “day one” of his presidency, bringing the crowd to its feet. (Gensler’s term isn’t up until 2026, and it’s unclear if Trump has the authority to fire an SEC chair.) A few months later, he and his sons announced the creation of their own cryptocurrency project, World Liberty Financial, and paid for a $1,000 bar tab with Bitcoin in New York.


As some crypto fans flocked to Trump, others felt throwing the industry’s full support behind the Republicans risked alienating Democrats who appeared interested in the technology’s potential. Ousting Gensler was a stopgap anyway; a successor could be equally tough on crypto. Bipartisan legislation that shifted the regulation of most assets away from the SEC became the goal.

To pass such a bill, industry strategists needed to persuade members of Congress from both parties, while ushering in a new crop of crypto-friendly politicians. Coinbase helped launch Fairshake, which was seeded by a few big donors, including Ripple and the VC firm Andreessen Horowitz. Fairshake gave cash to vocal crypto champions like New Jersey House Democrat Josh Gottheimer and North Carolina Republican Patrick McHenry, who helped steer FIT21 through the House Financial Services Committee.

Some tactics used by crypto PACs have been criticized for being opaque or counterproductive. In February, Fairshake spent $10 million on attack ads against the Senate campaign of California Democratic Representative Katie Porter. The ads did not mention crypto at all, but accused her of taking corporate money. (The Sacramento Bee assessed the claims as “mostly false.”) The campaign confused Porter, who had barely voiced any public opinions on crypto and says she’s generally receptive to its development. “Blockchain technology is important and has a lot of promise,” she says. Porter says that she didn’t hear from the industry before it decided to oppose her, and suspects the antipathy was a product of her alliance with Warren.

Read More: Could a Crypto App Save Struggling Restaurants?

Republicans, meanwhile, were incensed when a Fairshake-affiliated PAC, Protect Progress, jumped into the Michigan Senate race to back Democrat Elissa Slotkin, even though her Republican opponent, Mike Rogers, has been vocal about his love for crypto. “Outside groups are trying to put Crypto in the hands of Democrats who have made it clear they will enforce heavy regulations and will be a disaster for the industry’s growth and innovation,” Rogers said in a statement to TIME.

In September, Politico reported that Fairshake’s moves were causing a “civil war” inside crypto. And in October, a researcher for TD Cowen cautioned that Fairshake’s campaign against Ohio Democrat Sherrod Brown could anger Democrats and lead them to delay any crypto legislation until 2026. “It’s an aggressive strategy, for sure,” says the Blockchain Association’s Kristin Smith. A representative for Fairshake did not respond to a request for comment.

The House passage of FIT21 was a bipartisan effort. While it sailed through with mostly Republican votes, powerful Democrats were part of the process; two Democrats familiar with the negotiations say that former House Speaker Nancy Pelosi helped whip the votes. (A representative for Pelosi did not immediately respond to a request for comment.) For some members, supporting FIT21 seemed like an easy tradeoff in a difficult election year. “I don’t think the constituents care,” says a Democratic policy staffer in the House whose boss voted against the bill. “The candidates in tough races are in a really tough spot, and part of their thinking is that they need the money and that crypto is not that big of a deal.”

Some industry insiders suspect that a number of lawmakers may be more interested in crypto as a source of funding than as a financial tool. In August, Chuck Schumer turned up to a crypto Zoom fundraiser for Harris, declaring his interest in passing crypto legislation by the end of the year. But a few weeks later, he released a letter outlining his legislative priorities for the fall, and crypto was not among them. Schumer’s office did not respond to a request for comment.

Crypto insiders worry that Harris may be taking a similar tack, with appeasement giving way to apathy. In a speech to New York donors in September, the Democratic presidential nominee, who had barely spoken about crypto to that point, vowed as President to “encourage innovative technologies like AI and digital assets, while protecting our consumers and investors.” Some crypto insiders hailed it as a sign that their pressure was working; others dismissed it as pre-election false flattery.

“We’ll certainly be looking for much more than kind words, but you have to start somewhere,” Grewal says of Harris. “These steps are almost always incremental and really do represent a remarkable shift, at least in initial approach and tone, from where Gary Gensler has been for the last three years.”

Stand With Crypto

Coinbase’s other strategy to win over legislators is to rally their constituents. “When I’m talking to policymakers, they don’t care as much about what I’m saying,” Kara Calvert, Coinbase’s head of US policy, told the Philadelphia crowd in September. “They want to hear what you’re all saying.” The event was part of a swing-state bus tour organized by Stand With Crypto, which Coinbase created to mobilize “grassroots” support of crypto policy across the country. Critics accuse the coalition of being an astroturf campaign; the initial FEC report for the organization’s PAC, which endorses pro-crypto candidates, showed most of its cash coming from Coinbase executives, including Calvert and Shirzad. This year, Stand With Crypto has funded debate watch parties, an effort to urge presidential debate moderators to ask a question about crypto, and the bus tour. “There’s an active, vibrant community out there, and we want to give them tools and resources and ways that they can get more involved, raise their voices and make a real impact on shaping the conversation around crypto,” says Logan Dobson, a Republican consultant who became Stand With Crypto’s executive director this summer. 

But getting crypto enthusiasts to care about politics can be challenging. At a watch party for the September Harris-Trump debate in Virginia, a group of about 40 people showed up to eat, drink, and ostensibly watch the debate. But about an hour in, virtually no one was watching, instead chatting about blockchain. “Getting into Bitcoin has actually made me feel that politics is less important,” says Sulaman Shah, the founder of the crypto mining company Terrapin Crypto Solutions. “No matter who wins, Bitcoin mining is still going to exist.”

The crowd appeared more politically motivated in Philadelphia a few weeks later. It was the second-to-last stop of the tour, which drew sizable crowds from Phoenix to Las Vegas to Milwaukee. Former Republican Senator Pat Toomey of Pennsylvania showed up to speak at the Philadelphia event, arguing that the crowd needed to “get behind the candidates who are on the right side of this issue, and the right side of history.” The idea resonated with Martin, the former NFL offensive tackle, who called himself a “lifelong Democrat” but said he planned to vote Republican this year because of Trump’s embrace of Bitcoin. 

Industry boosters say voters like Martin could prove decisive in battleground states. A recent Harris poll found that 34% of respondents said they would consider a candidate’s crypto stance while voting. (That percentage doubled among crypto owners.) But the poll also indicated widespread skepticism of the industry persists. Just 21% of respondents felt that crypto was a good long-term investment.

Meanwhile, critics cast the industry’s investment in elections as a well-funded, centralized effort by a supposedly decentralized industry to disempower the regulator overseeing it. “The sole reason crypto is a hot-button topic in this election cycle is that crypto businesses are spending eye-popping sums to make themselves impossible to ignore,” wrote Rick Claypool, the author of a scathing report by the nonprofit organization Public Citizen. 

Grewal of Coinbase makes no apologies for those sums. “Money in politics is a problem across industries and across issues and interest groups,” he tells TIME. “Unfortunately, that is the way that it works in American politics today, and the crypto industry is prepared to lend its voice, along with many other interest groups that are doing exactly the same thing.”

Smith, at the Blockchain Association, points out that crypto companies are spending far more fighting lawsuits. “The political spend pales in comparison to what the lawyers are getting paid right now,” she says.

Austin Campbell, a professor at Columbia Business School and the founder of a crypto consulting firm, says that while Coinbase’s campaign is in part self-serving, most crypto folks are grateful for the company’s support and think it’s doing important work. “Coinbase is fighting an existential battle to even survive, because if we don’t get regulatory clarity in the United States within the next four years, they won’t be able to expand meaningfully internationally,” he says. “In general, the things Coinbase has supported benefit people beyond just Coinbase: it is thought of as an honest merchant.”

Coinbase has invested enormous amounts in convincing Washington of as much. At a recent Stand With Crypto event in Washington, Coinbase CEO Brian Armstrong took the stage alongside Democratic congressman Wiley Nickel to address a crowded room that included crypto entrepreneurs in t-shirts, Republican staffers in suits, and music fans simply there to see the Chainsmokers. “We’re kind of the belle of the ball, the hot topic on everybody’s lips,” Armstrong crowed. “They want to know, ‘Is the crypto voter real? Are we going to turn out in November?’” In the back, most of the liquored-up crowd chattered on. But up front, the true believers roared back.

Andrew R. Chow’s book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.

Elon Musk’s X Loses Court Fight With Australia, Forced to Pay Fine

4 October 2024 at 03:55
The logo of social network X and a photograph of its CEO Elon Musk, taken on Sept. 27, 2024.

An Australian judge rejected an attempt by social media platform X to wipe a A$610,500 ($418,100) fine levied by a watchdog, a notable victory in the country’s battle with global internet companies.

On Friday, the court threw out X’s petition and ordered Elon Musk’s company to pay the costs of the proceedings. That ends a lawsuit that arose after Australia’s eSafety commissioner fined the platform, saying it didn’t adequately respond to queries about efforts to crack down on child-abuse content. Under domestic law, social media companies must explain how they’re meeting basic expectations for online safety.

Read More: ‘Arrogant Billionaire’: Elon Musk Feuds With Australian PM Over Content Takedown Orders

Australia’s government has increasingly pressured global tech firms to better police content. Over the past year, it has taken X, formerly known as Twitter, to court in an attempt to force the removal of a violent video of a terrorist attack. And it has flagged that it will introduce age limits for teenagers using social media.

Last month, Musk labeled the Australian government “fascists” over proposed new laws to curtail digital misinformation.

Under the proposed legislation, social media companies could be fined up to 5% of their annual revenue if they fail to take steps to “manage the risk that misinformation and disinformation on digital communications platforms poses in Australia.”

X didn’t respond to queries sent after normal business hours to its media email addresses.

Some Top AI Labs Have ‘Very Weak’ Risk Management, Study Finds

2 October 2024 at 14:05

Some of the world’s top AI labs suffer from inadequate safety measures—and the worst offender is Elon Musk’s xAI, according to a new study. 

The French nonprofit SaferAI released its first ratings Wednesday evaluating the risk-management practices of top AI companies. Siméon Campos, the founder of SaferAI, says the purpose of the ratings is to develop a clear standard for how AI companies are handling risk as these nascent systems grow in power and usage. AI systems have already shown their ability to autonomously hack websites or help people develop bioweapons. Governments have been slow to put frameworks in place: a California bill to regulate the AI industry there was just vetoed by Governor Gavin Newsom.


“AI is extremely fast-moving technology, but AI risk management isn’t moving at the same pace,” Campos says. “Our ratings are here to fill a hole for as long as we don’t have governments who are doing assessments themselves.”

To grade each company, researchers for SaferAI assessed the “red teaming” of models—technical efforts to find flaws and vulnerabilities—as well as the companies’ strategies to model threats and mitigate risk.

Of the six companies graded, xAI ranked last, with a score of 0/5. Meta and Mistral AI were also labeled as having “very weak” risk management. OpenAI and Google DeepMind received “weak” ratings, while Anthropic led the pack with a “moderate” score of 2.2 out of 5.

Read More: Elon Musk’s AI Data Center Raises Alarms.

xAI received the lowest possible score because it has barely published anything about risk management, Campos says. He hopes the company will turn its attention to risk now that its model Grok 2 is competing with ChatGPT and other systems. “My hope is that it’s transitory: that they will publish something in the next six months and then we can update their grade accordingly,” he says.

Campos says the ratings might put pressure on these companies to improve their internal processes, which could potentially lessen models’ bias, curtail the spread of misinformation, or make them less prone to misuse by malicious actors. Campos also hopes these companies apply some of the same principles adopted by high-risk industries like nuclear power, biosafety, and aviation safety. “Despite these industries dealing with very different objects, they have very similar principles and risk management framework,” he says. 

SaferAI’s grading framework was designed to be compatible with some of the world’s most important AI standards, including those set forth by the EU AI Act and the G7 Hiroshima Process. SaferAI is part of the US AI Safety Consortium, which was created by the White House in February. The nonprofit is primarily funded by the tech nonprofit Founders Pledge and the investor Jaan Tallinn. 

Yoshua Bengio, one of the most respected figures in AI, endorsed the ratings system, writing in a statement that he hopes it will “guarantee the safety of the models [companies] develop and deploy…We can’t let them grade their own homework.”

Correction, Oct. 2: The original version of this story misstated how SaferAI graded the companies. Its researchers assessed the “red teaming” procedures of the models; they did not conduct their own red teaming.

Gavin Newsom Blocks Contentious AI Safety Bill in California

30 September 2024 at 12:10

California Governor Gavin Newsom has vetoed what would have become one of the most comprehensive policies governing the safety of artificial intelligence in the U.S.

The bill would’ve been among the first to hold AI developers accountable for any severe harm caused by their technologies. It drew fierce criticism from some prominent Democrats and major tech firms, including ChatGPT creator OpenAI and venture capital firm Andreessen Horowitz, who warned it could stall innovation in the state.


Newsom described the legislation as “well-intentioned” but said in a statement that it would’ve applied “stringent standards to even the most basic functions.” Regulation should be based on “empirical evidence and science,” he said, pointing to his own executive order on AI and other bills he’s signed that regulate the technology around known risks such as deepfakes.

The debate around California’s SB 1047 bill highlights the challenge that lawmakers around the world are facing in controlling the risks of AI while also supporting the emerging technology. U.S. policymakers have yet to pass any comprehensive legislation around the technology since the release of ChatGPT two years ago touched off a global generative AI boom.

Democratic California Senator Scott Wiener, who introduced the bill, called Newsom’s veto a “setback for everyone who believes in oversight of massive corporations.” In a statement posted on X, Wiener said, “We are all less safe as a result.”

‘Reasonable care’

SB 1047 would’ve mandated that companies developing powerful AI models take reasonable care to ensure that their technologies wouldn’t cause “severe harm” such as mass casualties or property damage above $500 million. Companies would’ve had to take specific precautions, including maintaining a kill switch that could turn off their technology. AI models would’ve also been subject to third-party testing to ensure they minimized grave risk. 

The bill would’ve also created whistleblower protections for employees at AI companies that want to share safety concerns. Companies that weren’t in compliance with the bill could have been sued by the California attorney general.

Supporters of the legislation said it would’ve created common-sense legal standards. But VC investors, startup leaders and companies like OpenAI warned that it would slow innovation and drive AI companies out of the state. 

“The AI revolution is only just beginning, and California’s unique status as the global leader in AI is fueling the state’s economic dynamism,” Jason Kwon, chief strategy officer at OpenAI, wrote in a letter last month opposing the legislation. “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”

Lawmakers opposed

Lawmakers including former House Speaker Nancy Pelosi, Representative Ro Khanna and San Francisco Mayor London Breed also voiced their opposition, echoing concerns from the tech industry that the bill could impede upon California’s leadership in AI innovation. Newsom recently said he was concerned the bill might have a “chilling effect” on AI development.

The bill had earned backing from some notable names in tech late last month in the days leading up to its passage by California’s legislature. Elon Musk unexpectedly voiced his support, even though he said it’s a “tough call and will make some people upset.” OpenAI rival Anthropic, which has a reputation for being safety-oriented, said the bill’s “benefits likely outweigh its costs,” though the company said some aspects remained “concerning or ambiguous to us.” 

Wiener had defended the bill, stressing that its provisions only apply to companies that spend more than $100 million on training large models or $10 million on fine-tuning them, limits that would exempt most smaller startups. The lawmaker had also noted that Congress has historically been slow to regulate tech.

In announcing his veto, Newsom said he will consult with outside experts, including AI scholar and entrepreneur Fei-Fei Li, to “develop workable guardrails” on the technology and continue working with the state legislature on the topic.

The governor also signed a bill on Sunday, SB 896, that regulates how state agencies use AI.

OpenAI Chief Technology Officer Mira Murati and Two Other Top Execs Leave Company

26 September 2024 at 02:00

A high-ranking executive at OpenAI who served a few days as its interim CEO during a period of turmoil last year said she’s leaving the artificial intelligence company.

Mira Murati, OpenAI’s chief technology officer, said in a written statement Wednesday that, after much reflection, she has “made the difficult decision to leave OpenAI.”


“I’m stepping away because I want to create the time and space to do my own exploration,” she said.

Two other top executives are also on their way out, CEO Sam Altman announced later Wednesday. The decisions by Murati, as well as OpenAI’s Chief Research Officer Bob McGrew and another research leader, Barret Zoph, were made “independently of each other and amicably,” Altman said in a note to employees he shared on social media.

They are the latest high-profile departures from San Francisco-based OpenAI, which started as a nonprofit research laboratory and is best known for making ChatGPT. Its president and co-founder, Greg Brockman, said in August he was “taking a sabbatical” through the end of the year. Another co-founder, John Schulman, left in August for rival Anthropic, founded in 2021 by a group of ex-OpenAI leaders.

Read More: The Creator of ChatGPT Thinks AI Should Be Regulated

Yet another co-founder, Ilya Sutskever, who led a team focused on AI safety, left in May and has started his own AI company.

Days after Sutskever’s departure, his safety team co-leader Jan Leike also resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.”

Murati spoke positively of the company and Altman in a departing note to colleagues shared on social media, describing it as “at the pinnacle of AI innovation” and saying it’s hard to leave a place one cherishes.

I shared the following note with the OpenAI team today. pic.twitter.com/nsZ4khI06P

— Mira Murati (@miramurati) September 25, 2024

Altman expressed his gratitude for Murati’s service and said leadership changes are natural for a fast-growing company.

“I obviously won’t pretend it’s natural for this one to be so abrupt, but we are not a normal company,” Altman said in a post on X that also announced that six other people were taking new roles.

Murati was suddenly catapulted to be the company’s interim CEO late last year after the board of directors fired Altman, sparking upheaval in the AI industry. The company later brought in another interim CEO before restoring Altman to his leadership role and replacing most of the board members who ousted him.

OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.

How a Fake Brad Pitt Scam Resulted in Losses of Over $300,000 and Multiple Arrests

25 September 2024 at 13:47
"Wolfs" Red Carpet - The 81st Venice International Film Festival

Spanish police arrested five people for impersonating Brad Pitt in order to scam women by convincing them that the famed Hollywood actor was in love with them. The two women targeted by the online scammers lost a combined €325,000 ($364,000), Spanish media reported.

Police say that the criminals operated by visiting online platforms for fans of the actor, and built up psychological profiles of the potential victims. They chose the two women, reportedly both aged 60, because they believed they lacked romantic relationships and appeared to be in states of depression.


The scammers then sent WhatsApp messages and emails pretending to be Pitt (who does not have any social media presence) and promised future romantic relationships.

“My love for you is true. Feeling from my heart and forever, please forgive me and accept me … it is because I love you and am very much in love with you,” one handwritten letter that was found during a search of the criminals’ property reads, according to The Times of London.

After the criminals convinced the victims of Pitt’s love, they began suggesting the women invest with him in various projects. Police have since been able to recover approximately $95,000 (€85,000) on behalf of the victims.

In a statement to the New York Times, Matthew Hiltzik—a publicist for Pitt—issued a warning about the risks of scams and reminded the public that the actor doesn’t have a social media presence.

Hiltzik is quoted as telling the publication: “It’s awful that scammers take advantage of fans’ strong connection with celebrities. But this is an important reminder to not respond to unsolicited online outreach, especially from actors who have no social media presence.”

TIME has reached out to Guardia Civil, the Spanish police agency handling the case, for further comment.

If you have been scammed, you should act immediately and contact the relevant authorities to alert them of your situation. In the U.S., experts recommend reporting the incident to the Federal Trade Commission at ReportFraud.ftc.gov. The best course of action is prevention, but if you do find yourself falling victim to a scam, there are additional steps you can take.

TIME Is Looking For the World’s Top GreenTech Companies

25 September 2024 at 12:31

This year, for the first time, TIME will debut a ranking of the World’s Top GreenTech Companies, in partnership with Statista, a leading international provider of market and consumer data and rankings, alongside its second annual ranking of America’s Top GreenTech Companies. These lists will recognize the most innovative, impactful, and successful companies whose aim is to reduce human impact on the environment.

Because many companies in this space are young, TIME and Statista are accepting applications as part of the research phase. An application guarantees consideration for the lists, but does not guarantee a spot on either list, nor are the final lists limited to applicants.

To apply, click here.

For more information, visit www.statista.com/page/top-greentech-companies. Winners will be announced on TIME.com and in a print issue of the magazine in March 2025.

How Digital Technology Can Help the U.N. Achieve Its 2030 Agenda

25 September 2024 at 11:45

As world leaders gather in New York City for the United Nations General Assembly, there’s a lot to get done, with just six years left to achieve the bold ambitions laid out for the world’s 2030 agenda. 

When world governments agreed to the 2030 plan back in 2015, a decade and a half seemed like plenty of time to achieve the 17 Sustainable Development Goals (SDGs) designed to create a more prosperous, safe and fair global society. While amazing progress has been made, we are in danger of falling short. I believe the U.N.’s goals can be attained through a collaborative commitment to make digital networks available to everybody in the world.


Mobility, broadband and the cloud are the infrastructure of 21st century life and everybody should have that opportunity. When they do, the U.N.’s goals become easier to attain because these networks help people participate in education, healthcare, financial services and markets. That helps societies grow richer, fight disease and pandemics, create cultural exchange and equality, and spread the power of computing and machine learning so that it can be used to solve environmental problems.

Right now, 2.6 billion people around the world are not online—that’s almost a third of the population. Having so many people offline creates real-world problems that harm public health, social equality and economic development. More than 2 billion people around the world lack access to adequate healthcare, some of which can be delivered through telehealth with medicines delivered by online pharmacies. Some 1.4 billion people around the world do not have bank accounts, a gap that can be closed by decentralized financial institutions offering mobile accounts and payment apps. A quarter billion children have no access to education in an era when the internet can bring world-class instructors to students worldwide. The emergence of artificial intelligence services, which rely on networks for computational power and to reach end users, is another area where we risk leaving people behind without network access.

Working with a group of private and public sector leaders and the World Economic Forum, the EDISON Alliance, established on the heels of the COVID-19 pandemic, seeks to address the digital divide in measurable ways. We saw an opportunity to act collectively, because the pandemic’s social distancing had created an immediate need for network services and because the launch of 5G networks greatly increased the capacity of those networks to bring service to more users and devices. In 2024, with broadband, fixed wireless access and satellite, we can reach more people than ever before. Still, we are leaving too many people out. We learned quickly that affordability, accessibility and usability were the three main obstacles to bringing people online.

In 2021, the Alliance embarked on the 1 Billion Lives Challenge, where we and our partners pledged to improve 1 billion lives around the world by delivering digital services in education, healthcare and financial services. Our goal was to achieve this by the end of 2025, but we crossed 1 billion lives this month, more than a year ahead of schedule, with Alliance members executing 320 projects across nearly 130 countries.

Everyone involved, from companies like Mastercard to governments like Bahrain and Rwanda, has committed to a multi-stakeholder approach to reaching people, where the public and private sectors work together to solve problems in economically feasible ways. Together, we have addressed diverse problems, from digital skills education to handset and device affordability.

Verizon contributed by expanding its responsible business program. Since the start of the 1 Billion Lives Challenge in 2021, our Verizon Innovative Learning programs accelerated to bring digital devices and technology training to more than 6 million students in the United States, increasing the number of students we have helped to nearly 8 million since the program started in 2012. Our partners at Apollo Hospitals brought telehealth services to inaccessible mountainous regions of India, reaching more than 30,000 patients in need. And our partners at the U.N. Development Programme worked with the government of Bangladesh to set up more than 4,500 digital centers to revolutionize the delivery of public services, empowering more than 9,000 micro-entrepreneurs along the way. EDISON has also worked with the government of Rwanda to completely digitize payments in a country where much of the population is unbanked.

As the U.N. enters the final third of its journey towards achieving its 2030 agenda, those working on each of the SDGs should consider how networks can help accelerate their progress and how digital inclusion supports these goals by giving people access to the markets and digital payment systems that support prosperity, telehealth services to improve and lengthen people’s lives, and access to educational opportunities that promote equality. At EDISON, we proved we can use these networks to positively impact lives in measurable ways. We stand ready to help the U.N. move society forward now.
