After several highly publicised cases of people falling for scams on Meta properties such as Facebook and Instagram, the company on Tuesday unveiled anti-fraud technology measures it is testing this year.
In a refreshingly candid and fully on-the-record presentation to journalists, Meta said it’s testing facial recognition technology (FRT) to shield people from the all-too-frequent “celeb-bait” ads, in which people purporting to be Prime Minister Christopher Luxon and other well-known figures hawk scams such as cryptocurrency investments.
It’s important to remember that, as reported in the media, many people suffer heavy losses in these scams. Victims who are persuaded into consenting to payments are in a relatively weak position and very likely to find reimbursement almost impossible, particularly if payments were made in Bitcoin or other cryptocurrencies, because those transactions are by design not reversible.
Here’s what Meta says it will do to stymie the celeb-bait scammers:
“Now, we’re testing a new way of detecting celeb-bait scams. If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures. If we confirm a match and that the ad is a scam, we’ll block it.”
Meta hopes FRT will root out scams featuring deepfake “celebrities” endorsing dodgy investments better than its current automated machine-learning systems can.
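Conceptually, the matching step Meta describes resembles comparing numerical “embeddings” of faces: a face found in an ad is compared against faces from the public figure’s profile pictures, and a close match flags the ad. The sketch below illustrates that idea only; the function names, vectors, and threshold are hypothetical, and Meta has not disclosed how its system actually works.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_probable_match(ad_embedding, profile_embeddings, threshold=0.8):
    # Flag the ad if any profile-picture embedding matches closely.
    # The threshold is an illustrative assumption, not Meta's value.
    return any(cosine_similarity(ad_embedding, p) >= threshold
               for p in profile_embeddings)

# Toy vectors standing in for embeddings a face-recognition model would produce.
ad_face = [0.9, 0.1, 0.4]
profiles = [[0.88, 0.12, 0.41], [0.1, 0.9, 0.2]]
print(is_probable_match(ad_face, profiles))  # prints True
```

In a real system the embeddings would come from a trained face-recognition model, and a match alone wouldn’t block the ad; per Meta’s description, the ad must also be confirmed as a scam.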
Election and public-opinion influence operations, which are rife on Meta properties, could be harder to mitigate with FRT, and they were not covered by the announcement.
The FRT check takes place in real time, and Meta claims it’s more accurate than manual human review. It builds on Facebook’s earlier work with FRT, which was pulled in 2021 due to “complex social issues”.
Meta’s director of security policy and former United States National Security Council director David Agranovich explained that celebrities will receive in-app notifications telling them they’re enrolled in an experiment, which they can opt out of. Testing is set to start in earnest in December, with a cohort of celebrities likely to be targeted in the scam ads. For New Zealand, that might include Luxon, Clarke Gayford, Trevor Mallard, Jacinda Ardern and Hilary Barry, to name a few.
Around 50,000 users will form the first cohort in the FRT trial, Agranovich said.
A corollary to the anti-celeb-bait use of FRT is that Facebook is looking at the tech for account recovery. Like it or not, losing control of a compromised Facebook account can be devastating. Scammers can use compromised accounts to defraud your real-life friends, ruining your reputation and potentially exposing you to legal repercussions in the process.
With FRT, Meta is testing video selfies as a way for people to verify their identities so they can regain access to compromised accounts. Meta reckons verification with video selfies will be harder for hackers to abuse than the current method of uploading an ID document to prove you’re the legitimate account holder.
Meta is also looking at better detecting fake materials and goods advertised on Facebook, Agranovich said.
“We don’t want any of that revenue,” Agranovich said, but he declined to say whether the money the social network brings in through fraudulent ads that evade detection could be used to compensate scam victims.
Is this all safe?
Nothing is likely to be perfectly safe, ever, but listening to Agranovich, Meta appears to have taken very reasonable steps to ensure that FRT isn’t going to be abused. Meta promises it won’t use FRT for anything other than the stated purposes, and it’s one-time only: the data is encrypted, and deleted after use.
Meta is also working with regulators, experts and policy makers to ensure robust privacy and safeguards around the use of FRT.
Will Meta’s FRT and other anti-scam measures work?
“Scammers are relentless and continuously evolve their tactics to evade detection,” Meta said.
The company spends US$5 billion a year on trying to stay ahead of scammers and fraudsters, and to build new technology and capabilities to keep people safe on its properties.
Chances are FRT will make a dent in the number of scams people encounter on Meta’s network through ads, as the tech targets fake endorsements purportedly by well-known people who are seen as trusted. This is a good thing, as it’s just plain wrong to put celebrities in that position of being abused and exploited for fraud.
Meta is however grappling with a gigantic problem that technology alone can’t solve.
The social networks that Meta operates are designed to have very low barriers to entry so as to attract the maximum possible number of participants. Everyone’s welcome, and encouraged not to leave, and that fundamental principle means scammers continue to be served up a massive, Internet-scale audience of potential victims: a never-ending stream of people whom scammers can conveniently exploit using technology that automates very realistic-looking fraud.
Going after the miscreants is difficult, particularly scammers incentivised to continue their profitable operations. They are highly motivated to keep throwing things at the wall in the hope that some get through, and some do.
“I would say that scammers are by far both the fastest to iterate and also perhaps the most resilient in the face of enforcement,” Agranovich said.
“They don’t care about geopolitical consequences of being caught; they often operate out of jurisdictions with relatively lax law enforcement services,” he added.
“So the tools that we oftentimes use to push back at a societal level against influence operations or hackers don’t always apply in the scammers’ context,” Agranovich said.
“There isn’t a single silver bullet that will delete scammers from the Internet,” Agranovich said.
That is correct, but it’s also not an argument Meta can rely on if the technology solutions aren’t effective.