Meta to Use Facial Recognition to Crack Down on Scams and Recover Locked-Out Accounts

Facebook parent company Meta Platforms Inc. will start using facial recognition technology to crack down on scams that use pictures of celebrities to look more legitimate, a strategy referred to as “celeb-bait ads.”

Scammers use images of famous people to entice users into clicking on ads that lead them to shady websites, which are designed to steal their personal information or request money. Meta will start using facial recognition technology to weed out these ads by comparing the images in the post with the images from a celebrity’s Facebook or Instagram account.
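Meta hasn’t published the technical details of how these comparisons work, but face-matching systems of this kind typically reduce each detected face to a numeric embedding and compare embeddings with a similarity score. The short Python sketch below illustrates that general approach; it is a minimal illustration, not Meta’s actual pipeline, and the embedding inputs and the 0.8 threshold are assumptions for the example.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity of two face-embedding vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def looks_like_celeb_bait(ad_face: np.ndarray,
                              profile_faces: list[np.ndarray],
                              threshold: float = 0.8) -> bool:
        # Flag the ad if its face embedding closely matches any embedding
        # computed from the celebrity's official profile photos.
        # Both the embeddings and the threshold are illustrative here.
        return any(cosine_similarity(ad_face, ref) >= threshold
                   for ref in profile_faces)

In a real system the embeddings would come from a trained face-recognition model, and a confirmed match would be only one signal feeding a broader scam review, as Meta’s own description suggests.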

“If we confirm a match and that the ad is a scam, we’ll block it,” Meta wrote in a blog post. Meta did not disclose how common this type of scam is across its services.

With nearly 3.3 billion daily active users across all of its apps, Meta relies on artificial intelligence to enforce many of its content rules and guidelines. That has enabled Meta to better handle the deluge of daily reports about spam and other content that breaks the rules. It has also led to problems in the past when legitimate accounts have been unintentionally suspended or blocked due to automated errors.

Read More: The Face Is the Final Frontier of Privacy

Meta says it will also start using facial recognition technology to better assist users who get locked out of their accounts. As part of a new test, some locked-out users can submit a video selfie, which Meta will then compare to the photos on the account to see if there is a match.

Meta has previously asked locked-out users to submit other forms of identity verification, such as an ID card or official certificate, but says the video selfie option takes only about a minute to complete. Meta will “immediately delete any facial data generated after this comparison regardless of whether there’s a match or not,” the company wrote in a blog post.

The social networking giant has a complicated history with facial recognition technology. It previously used facial recognition to identify users in uploaded photos as a way to encourage people to tag their friends and increase connections. Meta was later sued by multiple U.S. states for profiting off this technology without user consent, and in 2024 was ordered to pay the state of Texas $1.4 billion as part of the claim. Several years earlier, it agreed to pay $650 million in a separate legal suit filed in Illinois.

The company will not run this video selfie test in Illinois or Texas, according to Monika Bickert, Meta’s vice president of content policy. 

October Is Cybersecurity Awareness Month. Here’s How to Stay Safe From Scams

NEW YORK — October is Cybersecurity Awareness Month, which means it’s the perfect time to learn how to protect yourself from scams.

“Scams have become so sophisticated now. Phishing emails, texts, spoofing caller ID, all of this technology gives scammers that edge,” said Eva Velasquez, president and CEO of the Identity Theft Resource Center.

As scammers find new ways to steal money and personal information, consumers should be more vigilant about who they trust, especially online. A quick way to remember what to do when you think you’re getting scammed is to think about the three S’s, said Alissa Abdullah, also known as Dr. Jay, Mastercard’s deputy chief security officer.

“Stay suspicious, stop for a second (and think about it) and stay protected,” she said.

Whether it’s romance scams or job scams, impersonators are looking for ways to trick you into giving them money or sharing your personal information. Here’s what to know:

Know scammers’ tactics

Three common tactics used by scammers are based on fear, urgency and money, said security expert Petros Efstathopoulos. Here’s how they work:

— Fear

When a scammer contacts you by phone or email, they use language that makes it seem like there is a problem you need to solve. For example, a scammer might email you claiming that your tax return has an error and that you’ll get in trouble if you don’t fix it.

— Urgency

Because scammers are good at creating a sense of urgency, people tend to rush, which makes them vulnerable. Scammers often tell people they need to act right away, which can lead to them sharing private information such as their Social Security numbers.

— Money

Scammers use money as bait, Efstathopoulos said. They might impersonate tax professionals or the IRS saying you will get a bigger tax refund than you expect if you pay them for their services or share your personal information.

Know the most common scams

Simply being aware of typical scams can help, experts say. Robocalls in particular frequently target vulnerable individuals like seniors, people with disabilities, and people with debt.

“If you get a robocall out of the blue playing a recorded message trying to get you to buy something, just hang up,” said James Lee, chief operating officer at the Identity Theft Resource Center. “Same goes for texts — anytime you get them from a number you don’t know asking you to pay, wire, or click on something suspicious.”

Lee urges consumers to hang up and call the company or institution in question at an official number.

Scammers will also often imitate someone in authority, such as a tax or debt collector. They might pretend to be a loved one calling to request immediate financial assistance for bail, legal help, or a hospital bill.

Romance scams

So-called “romance scams” often target lonely and isolated individuals, according to Will Maxson, assistant director of the Division of Marketing Practices at the Federal Trade Commission (FTC). These scams can take place over long periods of time — even years.

Kate Kleinart, 70, who lost tens of thousands of dollars to a romance scam over several months, said to be vigilant if a new Facebook friend is exceptionally good-looking, asks you to download WhatsApp to communicate, attempts to isolate you from friends and family, or gets romantic very quickly.

“If you’re seeing that picture of a very handsome person, ask someone younger in your life — a child, a grandchild, a niece or a nephew — to help you reverse-image search or identify the photo,” she said.

She said the man in the pictures she received was a plastic surgeon from Spain whose photos had been stolen and used by scammers.

Kleinart had also been living under lockdown during the early pandemic when she got the initial friend request, and the companionship and communication meant a lot to her while she was cut off from family. When the scam fell apart, she missed the relationship even more than the savings.

“Losing the love was worse than losing the money,” she said.

Job scams

Job scams involve a person pretending to be a recruiter or a company in order to steal money or information from a job seeker.

Scammers tend to use the name of an employee from a large company and craft a job posting that matches similar positions. An initial red flag is that scammers usually try to make the job very appealing, Velasquez said.

“They’re going to have very high salaries for somewhat low-skilled work,” she said. “And they’re often saying it’s a 100% remote position because that’s so appealing to people.”

Some scammers post fake jobs, but others reach out directly to job seekers through direct messages or texts. If the scammers are looking to steal your personal information, they may ask you to fill out several forms that include information like your Social Security number and driver’s license details.

The only information a legitimate employer should ask for at the beginning of the process is your skills, your work experience, and your contact information, Velasquez said.

Other details don’t generally need to be shared with an employer until after you’ve gotten an offer.

Investment scams

According to Lois Greisman, an associate director of marketing practices at the FTC, an investment scam is any get-rich-quick scheme that lures targets via social media accounts or online ads.

Investment scammers typically add “testimonials,” such as from other social media accounts, to suggest that the “investment” works. Many of these schemes also involve cryptocurrency. To avoid falling for these frauds, the FTC recommends independently researching the company, especially by searching the company’s name along with terms like “review” or “scam.”

Quiz scams

When you’re using Facebook or scrolling through Google results, be aware of quiz scams, which typically appear innocuous and ask about topics you might be interested in, such as your car or favorite TV show. They may also ask you to take a personality test.

Despite these benign-seeming questions, scammers can then use the personal information you share to respond to security questions from your accounts or hack your social media to send malware links to your contacts.

To protect your personal information, the FTC simply recommends steering clear of online quizzes. The commission also advises consumers to use random answers for security questions.

“Asked to enter your mother’s maiden name? Say it’s something else: Parmesan or another word you’ll remember,” advises Terri Miller, consumer education specialist at the FTC. “This way, scammers won’t be able to use information they find to steal your identity.”

Marketplace scams

When buying or selling products on Instagram or Facebook Marketplace, keep in mind that not everyone who reaches out to you has the best intentions.

To avoid being scammed when selling via an online platform, the FTC recommends checking buyers’ profiles, not sharing any codes sent to your phone or email, and not accepting online payments from unknown persons.

Likewise, when buying something from an online marketplace, make sure to diligently research the seller. Take a look at whether the profile is verified, what kind of reviews they have, and the terms and conditions of the purchase.

Don’t pick up if you don’t know who is calling

Scammers often reach out by phone. Ben Hoffman, head of strategy and consumer products at Fifth Third Bank, recommends not picking up unknown incoming calls.

“Banks don’t ask you for your password,” said Hoffman. If you believe your bank is trying to reach you, call them at a number listed on their website.

This makes it easier to know for sure that you’re not talking to a scammer. As a general rule, banks don’t often call unless there is suspicious activity on your account or if you previously contacted them about a problem.

If you receive many unknown calls that turn out to be scammers or robocalls, you can use your phone’s built-in tools to block spam; both iPhone and Android offer settings to silence unknown callers and filter suspected spam.

Use all of the technology at your disposal

There are many tools at your disposal that can help protect you from scammers online.

— Use a password manager to ensure you’re using complex passwords that scammers can’t guess (a short example of generating one follows this list).

— Regularly check your credit report and bank statements; this can help you spot whether someone has been using your bank account without your knowledge.

— Turn on multi-factor authentication so impersonators can’t access your social media or bank accounts.
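As a small illustration of the first tip, a strong random password can be generated in a few lines with Python’s standard-library secrets module; this is essentially what a password manager does for you, and the 20-character length is just an example.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        # Draw each character from letters, digits, and punctuation
        # using a cryptographically secure random source.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # prints a different password on every run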

When in doubt, call for help

As scams get more sophisticated, it’s difficult to know who to trust or whether a person is real or an impersonator. If you aren’t sure whether a job recruiter is legitimate or whether your bank is actually asking you for information, find organizations that can help, Velasquez recommended.

Organizations like the Identity Theft Resource Center and the AARP Fraud Watch Network offer free services to consumers who need help identifying scams or figuring out what to do after falling victim to one.

Share what you know with loved ones

If you’ve taken all the necessary steps to protect yourself, you might want to help those around you. Whether you’re helping your grandparents to block unknown callers on their phones or sharing tips with your neighbors, talking with others about how to protect themselves from scams can be very effective.

Report the scam

If you or a family member is a victim of a scam, it’s good practice to report it on the FTC’s website.

Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

Artificial General Intelligence—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it’s an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified, “The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence.” She added that leading AI companies such as OpenAI, Google, and Anthropic are “treating building AGI as an entirely serious goal.”

Toner’s co-witness William Saunders, a former OpenAI researcher who resigned after losing faith that OpenAI would act responsibly, echoed her concerns, testifying that “Companies like OpenAI are working towards building artificial general intelligence” and that “they are raising billions of dollars towards this goal.”

Read More: When Might AI Outsmart Us? It Depends Who You Ask

All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI’s mission states: “To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” aiming for “safe AGI.” Google DeepMind aspires “to solve intelligence” and then to use the resultant AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects “human-level AI will be passed in the mid-2020s.” New entrants into the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month’s hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are “folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don’t have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.”

Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now—one to three years has been the latest prediction,” he said. He didn’t mince words about where responsibility lies: “What we should learn from social media, that experience is, don’t trust Big Tech.”

The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGI’s imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed “within the next 5 years.” Some 82% of respondents also said we should “go slowly and deliberately” in AI development.

That’s because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of “novel biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario AGI “could lead to literal human extinction.”

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what’s going on with the most powerful AI systems that are being created by tech companies. Government transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone expects it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.

Read More: What an American Approach to AI Regulation Should Look Like

In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches when he or hundreds of other employees could “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI—what Senator Blumenthal referred to as “the 64 billion dollar question”—but the window for action may be rapidly closing. Some AI figures, including Saunders, think it may arrive in as little as three years.

Ignoring the potentially imminent challenges of AGI won’t make them disappear. It’s time for policymakers to begin to get their heads out of the cloud.

TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a ‘Relic of the Past’

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME’s 2023 and 2024 lists of the 100 Most Influential People in AI, all of whom are playing a role in shaping the future of the technology.

Following a discussion between TIME’s CEO Jessica Sibley and executives from the event’s sponsors—Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia’s VP of Europe, the Middle East, and Africa—and after the main course had been served, attention turned to a panel discussion.

The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the U.K.-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI’s impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists’ views on the risks posed by AI reflected their various focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models more broadly for other harmful capabilities, the focus was on the need to “get our heads around the empirical data that will tell us much more about what’s coming down the pike and what kind of risks are associated with it.”

Birhane, meanwhile, emphasized what she sees as the “massive hype” around AI’s capabilities and potential to pose existential risk. “These models don’t actually live up to their claims,” she said. Birhane argued that “AI is not just computational calculations. It’s the entire pipeline that makes it possible to build and to sustain systems,” citing the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly their energy and water use), and the underpaid labor of data-labellers. “There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline,” she said. Riparbelli suggested that both “fixing the problems already in society today” and thinking about “Terminator-style scenarios” are important and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls on issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, “best practices for how you deal with different safety challenges change very quickly,” Leung said, pointing to a “big asymmetry between what is known publicly to academics and to civil society, and what is known within these companies themselves.”

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. “There’s a huge onus on companies to continue to innovate on safety practices,” said Leung. Riparbelli agreed, suggesting companies may have a “moral imperative” to ensure their systems are safe. At the same time, “governments have to play a role here. That’s completely non-negotiable,” said Leung.

Equally, Birhane was clear that “effective regulation” based on “empirical evidence” is necessary. “A lot of governments and policy makers see AI as an opportunity, a way to develop the economy for financial gain,” she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. “Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society.”

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. “What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?” she asked. “It’s not a sufficiently-resourced question.” Still, Leung pointed to the fact that China was party to last year’s AI Safety Summit hosted by the U.K. as cause for optimism. “It’s very important to make sure that they’re around the table,” she said. 

One concrete area where we can observe the advance of AI capabilities in real time is AI-generated video. In a synthetic video created by his company’s technology, Riparbelli’s AI double declared that “text as a technology is ultimately transitory and will become a relic of the past.” Expanding on the thought, the real Riparbelli said: “We’ve always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share it across time and space. Now we live in a world where for most consumers, at least, they prefer to watch and listen to their content.”

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will “enable anyone to create a Hollywood film from their bedroom without needing more than their imagination,” he said. This technology poses obvious risks of abuse, for example in creating deepfakes or spreading misinformation, but Riparbelli emphasized that his company takes steps to prevent this, noting that “every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies.”

Riparbelli suggested that rather than a “technology-centric” approach to AI regulation, the focus should be on designing policies that reduce harmful outcomes. “Let’s focus on the things we don’t want to happen and regulate around those,” he said.

The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.
