Since Saturday, I’ve been struggling to get Facebook’s Trust & Safety (T&S) team to shut down a number of scam accounts impersonating a friend. She’s older, not especially tech-savvy, and rarely uses Facebook anymore. I suspect she was targeted by scammers largely because she’s older, somewhat well-known (she runs a charity in the UK and is well-connected in the entertainment industry), and has a substantial friend network. Those factors make her an attractive target for attackers.
Now, Facebook has never been great at T&S, but I remember from my days at the company that the T&S / Content Moderation infrastructure was … decent? Even in 2018, the teams had some reasonably robust automation and flagging capabilities, supplemented by an army of people. Scam accounts and impersonations were often taken down promptly after a few reports were made. But this experience signals to me that whatever they had in 2018 has largely gone away and been replaced by … nothing?
It Shouldn’t Be This Hard
We discovered that my friend was the victim of a scam account when her brother notified her of a strange message he’d received, purporting to be from her and asking him to click on a dodgy link. I told her I’d look into it, as I remembered that Facebook has a ‘Find Support or Report’ option. This is one of the four accounts pretending to be her and sending scam messages. Notice the creation date, and the fact that there’s very little in the way of profile details. The other accounts largely looked the same.
The ‘Find Support or Report’ workflow is easy enough. I reported the account (using my mostly empty Facebook account), had her report it, and provided instructions for her friends to do the same. She had confirmation that others who received the scam message also reported the account. At this point, it should have been straightforward: a brand-new profile sending obvious scam messages, flagged for review by multiple accounts, including the person being impersonated. This should have triggered an action. It’s been 4 days, and absolutely nothing has happened. The accounts are still there, and the most I received was a notice that Facebook had closed my report with no action.
After days of nothing, I reached out to friends across other social networks, looking for an inside source who might be able to escalate this internally. I have quite a few former Facebook colleagues, and I figured one of them might know someone who’s still on the inside. I’m fortunate that this option was available to me — it’s certainly not an option for 99.5% of Facebook users, including my friend. A few gracious volunteers have kindly tried to assist. And yet, it’s been 4 days, and absolutely nothing has happened. The accounts are still there.
And of course, my impersonated friend is not alone — in May 2023, Lloyds Banking Group reported that someone in the UK falls victim to a scam on Facebook or Instagram every seven minutes, costing consumers more than £27m a year. Over on Twitter, a good colleague familiar with Facebook/Meta’s slowness suggested that T&S personnel may have been intentionally instructed not to respond to scam reports — he observed that “the sheer scale of rip-offs is growing faster than T&S can scale.” That’s probably just rumor, but it also feels intuitively true.
If Facebook Can’t Get It Right, Threads is Doomed
Threads isn’t Facebook — technically, it’s integrated with Instagram, but as I’ve observed, Instagram T&S isn’t much better. And Threads has achieved a record number of users (allegedly over 100 million in a week, though how many of them are still active is up for debate). Still, my completely unscientific Google Trends study suggests a reasonably consistent trend in the number of people searching for ‘scam’ in relation to IG & FB:
Searches related to account impersonation also seem to spike periodically, especially on Instagram in the late spring and early summer months.
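If you want to poke at the same data yourself, here’s a rough sketch using the unofficial pytrends library (`pip install pytrends`); the specific search terms and the five-year window below are illustrative assumptions, not necessarily what went into the charts above:

```python
# Rough sketch: compare relative Google search interest for scam- and
# impersonation-related queries on Facebook vs. Instagram.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

# Assumed example queries -- swap in whatever terms you want to compare.
keywords = ["facebook scam", "instagram scam", "instagram impersonation"]
pytrends.build_payload(kw_list=keywords, timeframe="today 5-y", geo="")

# interest_over_time() returns a pandas DataFrame indexed by week, with one
# column per keyword; values are relative search interest on a 0-100 scale.
trends = pytrends.interest_over_time()
print(trends.drop(columns=["isPartial"]).tail(10))
```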
This signals a problem to me: If Facebook and Instagram can’t get it right, even for the low-hanging scams, I’m not sure there’s a lot of hope for anything new out of Meta. Even pre-Musk Twitter handled T&S (especially impersonation) relatively well. Accounts would be flagged and then yeeted off the network, usually within a day or two. And while it’s all but certain that this has degraded under Emperor Paypalpatine, I see a worrying trend as we watch new Twitter replacements (including Bluesky and, to a lesser extent, Mastodon) continue to treat trust & safety issues like an afterthought.
No Solutions Here
Unfortunately, I don’t think there’s an easy solution here.
Like most things in tech, content moderation and trust & safety are wicked, fractally complex problems. It would be nice if someone could get it right for a change, though. It would be good if automated systems worked more efficiently, instead of needing humans as a backstop every time. But that, of course, creates its own pathways for abuse (brigading, false positives). So instead, we’re stuck — flailing around as users, trying to console our friends and loved ones who are victims of scams, impersonation, and abuse, impotently trying to fight back against faceless corporations and billionaires who can spend tens of billions on pet projects (like the Metaverse, buying Twitter, or going to Mars) but can’t invest even a fraction of that in good infrastructure and support.
It’s frustrating that I can’t help my friend. I suspect it’s also really frustrating for the people at Meta (or Twitter, or Bluesky) who want to help but know they’re not empowered to meaningfully do so. Is this a money problem? Can enough resources be thrown at scams and abuse (through tech, people, policy, and process), or are we humans just generally so shitty that there’s no meaningful way to whack these moles? I don’t know.
Still, I do feel bad that I can’t do anything more for my friend. If you’ve got suggestions on something I might have missed (or have a direct line to Mark Z), feel free to share in the comments, or at [email protected].