In the last few days we’ve had an outburst of painful, intelligent, useful conversation about racism and abuse in the world of Mastodon and the Fediverse. I certainly learned things I hadn’t known, and I’m going to walk you through the recent drama and toss in ideas on how to improve safety.

For me, the story started back in early 2023 when Timnit Gebru (the person fired by Google for questioning the LLM-is-great orthodoxy, co-author of Stochastic Parrots) shouted loudly and eloquently that her arrival on Mastodon was greeted by a volley of racist abuse. This shocked a lot of hyperoverprivileged people like me who don’t experience that stuff. In the months that followed, my perception was that the Mastodon community had pulled up its moderation socks and things were getting better.

July 2024 · Then, just this week, Kim Crayton issued a passionate invitation to the “White Dudes for Kamala Harris” event, followed immediately by examples of the racist trolling that she saw in response. With a content warning that this is not pretty stuff, here are two of her posts: 1, 2.

Let me quote Ms Crayton:

The racist attacks you’ve witnessed directed at me since Friday, particularly by instances with 1 or 2 individuals, SHOULD cause you to ask “why?” and here’s the part “good white folx” often miss…these attacks are about YOU…these attacks are INTENDED to keep me from putting a mirror in your faces and showing you that YOU TOO are harmed by white supremacy and anti-Blackness…these attacks are no different than banning books…they’re INTENDED to keep you IGNORANT about the fact that you’re COMPLICIT

She quite appropriately shouted at the community generally and the Mastodon developers specifically. Her voice was reinforced by many others, some of whom sharpened the criticism by calling the Mastodon team whiteness-afflicted at best and racist at worst.

People asked a lot of questions and we learned a few things. First of all, it turns out that some attackers came from instances that are known to be toxic and should long since have been defederated by Ms Crayton’s instance. Defederation is the Fediverse’s nuclear weapon, our best tool for keeping even the sloppiest admins on their toes. To the extent our tools work at all, they’re useless if they’re not applied.

But on the other hand it’s cheap and fast to spin up a single-user Mastodon instance that won’t get defederated until the slime-thrower has thrown slime.

Invisibility · What I’ve only now come to understand is that Mastodon helps griefers hide. Suppose you’re on instance A and looking at a post from instance B, which has a comment from an account on instance C. Whether or not you can see that comment… is complicated. But lots of times, you can’t. Let me excerpt a couple of remarks from someone who wishes to remain anonymous.

Thinking about how mastodon works in the context of all the poc i follow who complain constantly about racist harassment and how often i look at their mentions and how I’ve literally never seen an example of the abuse they’re experiencing despite actively looking for it.

It must be maddening to have lots of people saying horrible things to you while nobody who’d be willing to defend you can see anyone doing anything to you.

But also it really does breed suspicion in allies. I believe it when people say they’re being harassed, but when I’m looking for evidence of it on two separate instances and not ever seeing it? I have to step hard on the part of me that’s like … really?

Take-away 1 · This is a problem that the Masto/Fedi community can’t ignore. We can honestly say that up till now, we didn’t realize how serious it was. Now we know.

Take-away 2 · Let’s try to cut the Mastodon developers some slack. Here’s a quote from one, in a private chat:

I must admit that my mentions today are making me rethink my involvement in Mastodon

I am burning myself out for this project for a long time, not getting much in return, and now I am a racist because I dont fix racism.

I think it is entirely reasonable to disagree with the team, which is tiny and underfunded, on their development priorities. Especially after these last few days, it looks like a lot of people — me, for sure — failed to dive deep into the narrated experience of racist abuse. In the team’s defense, they’re getting yelled at all the time by many people, each with strong opinions about the one feature that needs to ship right now!

Conversations · One of the Black Fedi voices that most influences me is Mekka Okereke, who weighed in intelligently; from his remarks, this, on the subject of Ms Crayton:

  • She should not have to experience this

  • It should be easier for admins at DAIR, and across the whole Fediverse, to prevent this

Mekka has set up a meeting with the Mastodon team and says Ms Crayton will be coming along. I hope that turns out to be useful.

More good input · Let’s start with Marco Rogers, also known as @polotek@social.polotek.net. I followed Marco for ages on Twitter, not always agreeing with his strong opinions on Web/Cloud technology, but always enjoying them. He’s been on Mastodon in recent months and, as usual, offers long-form opinions that are worth reading.

He waded into the furore around our abuse problem, starting here, from which a few highlights.

I see a lot of the drama that is happening between people of color on the platform and the mastodon dev team. I feel like I need to help.

If people of color still find ourselves dependent on a small team of white devs to get what we want, that is a failure of the principles of the fediverse.

I want to know how I can find and support people that are aligned with my values. I want to enable those people to work on a platform that I can use. And we don't need permission from the mastodon team to do so. They're not in charge.

Mekka, previously mentioned, re-entered the fray:

If you run a Mastodon instance, and you don't block at least the minimum list of known terrible instances, and you have Black users, it's just a matter of time before your users face a hate brigade.

That's the only reason these awful instances exist. That's all they do.

Telling users "Just move to a better server!" is supremely unhelpful. It doesn't help the mods, and it doesn't help the users.

It needs to be easier. It's currently too hard to block them and keep up with the new ones.

And more; this is from Jerry Bell, one of the longest-lasting Fediverse builders (and I think the only person I’m quoting here who doesn’t present as Black). These are short excerpts from a long and excellent piece.

I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows.

… About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance.

What next? · I don’t know. But I feel a buzz of energy, and smart people getting their teeth into the meat of the problem.

Now I have thoughts to offer about moving forward.

Who are the enemy? · They fall into two baskets: professional and amateur. I think the current Mastodon attackers are mostly amateurs. These are lonely Nazis, incels, channers, your basic scummy online assholes. Their organization is loose at best (“He’s pointing at her, so I will too”), and they’re typically neither well-funded nor deep technical experts.

Then there are the pros, people doing this as their day job. I suspect most of those are working for nation states, and yes, we all know which nation states those probably are. They have sophisticated automation to help them launch armies of bots.

Here are some suggestions about potential fight-backs, mostly aimed at amateurs.

Countermeasure: Money · There’s this nonprofit called IFTAS which is working on tools and support structures for moderation. How about they start offering a curated allowlist of servers that it’s safe to federate with? How do you get on that list? Pay $50 to IFTAS, which will add you to the watchlist and also to a service that scans your members’ posts for abusive stuff during your first month or so of operation.

Cue the howls of outrage saying “Many oppressed people can’t afford $50, you’re discriminating against the victims!” I suppose, but they can still get online at any of the (many) free-to-use instances. I think it’s totally reasonable to throw a $50 roadblock in the process of setting up a server.

In this world, what happens? Joe Incel sets up an instance at ownthelibs.nazi or wherever, pays his $50, and starts throwing slime. This gets reported and pretty soon, he’s defederated. Sure, he can do it again. But how many times is this basement-dweller willing to spend $50, leaving a paper trail each time just in case he says something that’s illegal to say in the jurisdiction where he lives? Not that many, I think?
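To picture the mechanics, here’s a minimal sketch of an instance checking such a curated list before federating with a newcomer. Everything in it is hypothetical: IFTAS offers no such API, and the endpoint and field names are invented.

```python
import json
import urllib.error
import urllib.request

# Hypothetical: IFTAS offers no such API; this endpoint is invented.
ALLOWLIST_URL = "https://allowlist.example/v1/instances"

def is_vouched_for(instance_domain: str) -> bool:
    """Ask the (hypothetical) curated-list service whether an instance
    paid up, survived its probation month, and is still in good standing."""
    try:
        with urllib.request.urlopen(f"{ALLOWLIST_URL}/{instance_domain}") as resp:
            record = json.load(resp)
    except urllib.error.HTTPError:
        return False  # unknown or delisted instance: don't federate
    return record.get("status") == "listed"

# A new server that never paid its $50 simply never shows up as "listed",
# so its posts are refused before any slime gets thrown.
```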

Countermeasure: Steal from Pleroma · It turns out Mastodon isn’t the only Fediverse software. One of the competitors is Pleroma. Unfortunately, it seems to be the server of choice for our attackers, because it’s easy and cheap to set up. Having said that, its moderation facilities are generally regarded as superior to Mastodon’s, notably a subsystem called the Message Rewrite Facility (MRF), which I haven’t been near but which is frequently brought up as something that would be useful.
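For flavor, here’s a conceptual sketch of the idea. Real MRF policies are Elixir modules that get to inspect each incoming activity and accept, rewrite, or reject it; the Python below is an invented analogue of that shape, not Pleroma’s actual API, and all the names in it are made up.

```python
# Conceptual analogue of an MRF-style policy; NOT Pleroma's actual Elixir API.
# A policy inspects each incoming ActivityPub activity and can accept it,
# rewrite it, or reject it. All names below are invented for illustration.
from urllib.parse import urlparse

REJECT_INSTANCES = {"ownthelibs.example"}   # hypothetical: drop outright
DEMOTE_INSTANCES = {"sketchy.example"}      # hypothetical: strip public reach

PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def filter_activity(activity: dict) -> dict | None:
    """Return the (possibly rewritten) activity to accept it, or None to reject."""
    instance = urlparse(activity.get("actor", "")).hostname or ""

    if instance in REJECT_INSTANCES:
        return None  # rejected: the activity never reaches local users

    if instance in DEMOTE_INSTANCES:
        # Rewrite rather than reject: remove public addressing so the post
        # stays off local and federated timelines.
        activity["to"] = [t for t in activity.get("to", []) if t != PUBLIC]
        activity["cc"] = [t for t in activity.get("cc", []) if t != PUBLIC]

    return activity
```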

Countermeasure: Make reporting better · I report abusive posts sometimes, and, as a moderator for CoSocial.ca I see reports too. I think the “Report post” interface on many clients is weak, asking you unnecessary questions.

And when I get a report, it seems like half the time none of the abusive material is attached, and it takes me multiple clicks to look at the reported account’s feed, which feels like a pretty essential step.

Here’s how I’d like reporting to work.

  1. There’s a single button labeled “Report this post”. When you click it, a popup says “Reported, thanks” and you’re done. Maybe it could query whether you want to block the user or instance, but it’s super important that the process be lightweight.

  2. The software should pull together a report package including the reported post’s text and graphics. (Not just the URLs, because the attackers like to cover their tracks.) Also the attacker’s profile page. No report should ever be filed without evidence; see the sketch below.
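To be concrete about what a report package might contain, here’s a sketch. The field names are mine, not Mastodon’s actual report schema; the point is that everything is snapshotted at report time rather than linked.

```python
# Sketch of a report package; field names are illustrative, not Mastodon's
# actual report schema. Everything is captured at report time, because
# attackers delete evidence and cover their tracks.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReportPackage:
    reported_post_url: str
    post_text: str                      # verbatim copy, not a link
    post_media: list[bytes] = field(default_factory=list)  # image/video bytes
    reported_account: str = ""          # e.g. @troll@ownthelibs.example
    profile_snapshot: str = ""          # the attacker's profile page, as seen now
    reporter_account: str = ""
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```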

Countermeasure: Rules of thumb · Lauren offered this: Suppose a reply or mention comes in for someone on Instance A from someone on Instance B. Suppose Instance A could check whether anyone else on A follows anyone on B. If not, reject the incoming message. This would have to be a per-user setting, not a global one, and I see it as a placeholder for a whole class of heuristics that could usefully get in the attackers’ way.
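Here’s a minimal sketch of that heuristic, assuming the instance can cheaply answer “does anyone here follow anyone on Instance B?” The per-user flag and the precomputed set are invented names; real Mastodon (which is Ruby) would look nothing like this.

```python
# Minimal sketch of Lauren's heuristic. The per-user flag and the
# precomputed set are invented; real Mastodon (Ruby) looks nothing like this.

def should_accept(recipient, sender_instance: str,
                  followed_instances: set[str]) -> bool:
    """Accept a reply or mention only if someone local follows someone
    on the sender's instance.

    followed_instances: every remote instance hosting at least one
    account that anyone on this instance follows (kept up to date as
    follows change).
    """
    # Per-user opt-in, never a global default.
    if not getattr(recipient, "reject_unconnected_instances", False):
        return True

    return sender_instance in followed_instances
```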

Wish us luck · Obviously I’m not claiming that any of these ideas are the magic bullet that’s going to slay the online-abuse monster. But we do need ideas to work with, because it’s not a monster that we can afford to ignore.

I care intensely about this, because I think decentralization is an essential ingredient of online conversation, and online conversation is valuable, and if we can’t make it safe we won’t have it.



Contributions


From: Matěj Cepl (Aug 01 2024, at 23:08)

> How about they start offering a curated whitelist of servers that it’s safe to federate with?

This sounds awfully like Spamhaus, which is now more a conspiracy protecting an oligopoly in email service against independent providers than anything else. Your idea is even worse, because now the independent host owner would have to pay for the privilege of being unblacklisted. “It would be unfortunate if you happened to be blacklisted again; you should really purchase our Prime Membership before it happens.”


From: Nik Clayton (Aug 02 2024, at 06:29)

I'm going to start fixing this on the client side.

https://pachli.app/pachli/2024/08/02/harassment-controls.html


From: Jarek (Aug 02 2024, at 09:41)

Is there any future in which your suggested implementation of "Countermeasure: Money" doesn't boil down to also not-federating with each instance that has open sign-up? That's not necessarily a bad thing, but let's be upfront about it. I don't see a way into that future that doesn't involve manual approval of all free accounts to prevent drive-by griefing, and an allowlist of known good/trusted instances. But then, if you've created a system of allowlisting instances that manually approve sign-ups and do good moderation, do you really need the instance to pay $50 to be listed?


From: Fazal Majid (Aug 02 2024, at 10:09)

I feel for the Mastodon devs, but we've known since at least 1992 that any new social platform has to have robust protections from spam and abuse. The dramatic explosion of open racism and its enablers like Trump was less easy to predict, but fundamentally it's the same problem as spam. Just because doing it in a distributed environment is hard doesn't give you a pass on dealing with the problem as priority one in your design.

Most people don't run their own instance, and as the Fediverse grows more popular the cost of hosting an instance will only grow. So people will use public instances, whether free or paid in some form. Today the responsibilities of hosting your instance and managing your blocklist are conflated, but that need not be the case.

I can easily imagine a rating service that rates individual users, or entire instances if too high a proportion of users on an instance is abusive. But moderation doesn't scale (except if LLMs actually start delivering on their promises) so this needs to be either a paid service or a cooperative crowdsourced effort of like-minded people.

Say a user is blocked if reported at least 2 or 3 times by separate members of the reputation service. Some paid appeals procedure needs to be available to prevent censorship, because the first thing abusers will do is try DARVO to silence their victims. When a user is blocked, their posts or comments become invisible, along with the transitive closure of replies or comments to those posts (not sure if ActivityPub even has that level of metadata).


From: Len (Aug 08 2024, at 18:04)

The technology approach reeks of an arms race. I wonder if this problem is rooted in the culture itself.

You can’t cure it because you are it?

Just a thought from the heart of darkness, aka, the swamp.

