Roost Must Prove Itself Good. That's a Reasonable Request
posted in: ai, tech, social media, media and roost.
~2,121 words, about an 11 min read.

A new non-profit launched by a coalition of tech, public policy and media entities has entered the ring. Roost announced its launch on Bluesky, along with slick press releases in over a dozen languages. Their aim? "Free, open-source safety tools."
The Backlash
My feed instantly split. On one side were people saying 'uhh this is good, we need machine learning, which is not AI, to do better moderation', many of whom managed to construct straw men out of the objections, painting critics as just too uninformed about how moderation works. Not all defenses were so smarmy or bad faith, but too many were.
On the other side were people like the highly respected Timnit Gebru, who summed up the concerns aptly with "Satire is dead. Truly."
A lot of the responses make me feel stuck in a flashback to the unmoderated tech boosterism of earlier media cycles. Roost, putting aside what appears to be a quite expensive purchase of a whole tech press cycle, is notable in its founding partners: Omidyar's AI Collaborative, Discord, Google, Institute of Global Politics, Knight Foundation, OpenAI, Patrick J. McGovern Foundation, Roblox, and the Special Competitive Studies Project.
Other partners include Hugging Face, Microsoft, Match Group and GitHub (wait... isn't that just Microsoft again?).
Other partners fall on the unalloyed-positive side of the tech world: CDT, Mozilla and Wikimedia for sure. But it is hard not to focus on the big names.
Bluesky's Technical Advisor @why summed up the dismissive response to the blowback on Roost:
I love people getting mad about this. It's like they don't realize this website (and every online social space with more than like ten people) would not be possible to moderate without heavy use of AI tools.
The Biased Players
Allow me to explain what's going on. People are getting mad because a lot of the companies involved are actively opposed to good moderation. Google, OpenAI, Match, Roblox and Discord have all suffered press cycles in the last few years over their willingness to trade away moderation for profit.
Google has a whole project of its own, one that notably didn't require the funding of non-profits like Knight or Mozilla, called Jigsaw, which has been running since 2016. Part of its mission was specifically moderation. In fact, Jigsaw was in the news recently when Google laid off around half of its employees because moderation and safety tools weren't profitable.
One might ask: if Google thought this was important, why not... keep running Jigsaw at full staff? Or do any moderation of the output of its own AI, which was famously reported last year to have told people to eat glue?
If you know anything about the children's video game Roblox, then you might know about the accusations of exploiting children leveled at it by games journalists three years ago. It addressed those concerns by attempting to suppress that reporting. It was also the subject of a class action lawsuit opened last year alleging the platform was "built on the exploitation of children"; its primary defense appears to be 'child labor is good, actually'. There's also its child predator problem.
OpenAI has massive environmental, ethical, business and leadership issues, so extensive I won't go into them here, but it is indubitably the beachhead of a massive social and labor project to alter the fabric of our world in ways we have not agreed to and that it has not been particularly willing to explain. OpenAI has not proven particularly open to moderating its own platform, or to working in open source. Notably, it already sells its own moderation tools.
Discord has repeatedly gotten flak for hosting groups that use it as a jumping-off point for harassment. It has attempted to make itself entirely non-responsible for what happens on its platform, and only hired trust and safety staff after it was used as the organizing platform for the lethal Unite the Right rally.
Project Liberty and the AI Collaborative both, in my experience of them thus far, seem disconnected from broad open-community efforts around building solutions and trustworthy AI. They feel like rich men's playthings in an era when I am very tired of rich men. I'm willing to be proven or told I'm wrong on both fronts, but neither has yet proven itself an effective organization, in my personal opinion.
The Special Competitive Studies Project, which I encountered here for the first time, appears to be a front for a lobbying effort toward less AI regulation. Not a great sign.
I do love the Knight Foundation, but sadly its inclusion in projects of this type is not particularly meaningful to me. While it has great people and a great mission, I have found in the past that it is always ready to have its metaphorical chain yanked by tech companies with deep pockets.
These, and others in the partner category, are concerning participants. The tech companies are not exactly exemplars of good-faith engagement on the subject of online safety or moderation. The smell here is bad.
The full-court press of PR around the launch feels a lot like a reputation-washing exercise: a rare batch of great press, and immediate credulity, for these companies.
In the long term, liability cover seems an equally likely motivation. It wouldn't be out of character for OpenAI, Discord, Google or Roblox to use a project like this as a shield against liability and an excuse for failures they could absolutely afford to fix in-house.
The Right Problem?
There are other reasons people are reacting negatively. The implied position of the Roost project is that large community platforms should exist and be moderated algorithmically. I don't think that's necessarily what we should be aiming for in general. Big platforms can have successful moderation actions, but the past year has proven that even where such actions are technically possible we can't rely on big tech to defend us where it counts.
It is reasonable to ask if this is the right approach at all. Should we be building tools to try to moderate at huge scale, even though moderating at scale has generally proven impossible to get fully right, and training models is expensive and bad for the climate? Hasn't the last decade proven that moderating at massive scale isn't just a technical problem but a market-capture one? Once we have big scale and standard tools, these platforms are no longer reliably on the side of the people subject to them. The incentives for both the users and the owners no longer align with good moderation.
I suspect that others, like me, can't help but imagine a better use for the money standing up Roost. Especially when money for media and community is tighter than it has ever been and getting scarcer with each executive order.
Would it be a better use of this money to support and fund smaller online communities? Ones who might not even need these types of tools?
Perhaps it might be a better use of this money to run a non-profit employing actual human moderators whom qualified projects could call on? Moderators who can have good training, fair pay, and hours that don't burn them out?
Why is the technological solution here assumed to be the best one? The many press releases don't really answer that, nor does the site.
The Past Haunts Us
There's also the inescapable feeling that this is just one more big tech project to directly (like Google) or indirectly (like OpenAI) rob media companies of even more revenue, by handing them open-source tools that still require paying for infrastructure, which one of these companies will run for them.
It is exhausting to see this pattern repeat over and over while they invest a tiny fraction of their profits into reputation-washing that claims to help the news media but just kicks an industry they were the ones to push down.
This is why I worry about Knight's involvement specifically: it feels like a signal that the target audience for this effort is mostly media companies that can't afford any more "help" from big tech.
Now what?
There are some good organizations involved here. Rescue Lab, Wikimedia and CDT are all organizations that I have rarely, if ever, had reason to criticize. As Erin points out, the leadership involved seems to be good people with solid backgrounds in doing the right thing, effectively, in the right way.
I don't think it is impossible for Roost to produce good work or effective tools, even if it seems unlikely to be the best use of funds. Google and OpenAI at this point have basically unlimited money compared to Roost's budget thus far; why not fund this work out of a fraction of a fraction of their profits?
I plan to keep an eye on Roost and encourage others to do the same. It may yet prove itself.
We shouldn't reward them ahead of time
I don't think it is wild, however, to approach this announcement from a negative stance. The vibes are bad, and people are entirely justified in looking at this project and reacting negatively.
I had sort of thought we were past knee-jerk defenses of big tech projects at this point, but apparently not. They don't deserve your time to defend them. They don't deserve your assumption of good faith. Don't give it to them.
As for the people who are taking this from the perspective of 'AI bad'... well, they're sort of right too. When big tech is talking about AI, they're usually not talking about good products. Certainly the kind of machine learning already in use for moderation is a very different animal from OpenAI's project: it is cheaper, less resource-intensive, and more focused in its purpose and design. The last two years, however, have shown that companies, especially OpenAI and Google, are very willing to use the wrong type of machine learning for the right job. OpenAI and its boosters regularly employ its more expensive and less specialized generative AI on tasks that specialized machine learning tools previously handled far more effectively.
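To make that distinction concrete, here's a minimal sketch of the kind of lightweight, purpose-built classifier moderation pipelines have long relied on. This is Python with scikit-learn, the training examples are toy data of my own invention, and nothing here is Roost's actual code; it's only meant to show how small and cheap this class of tool can be compared to running every post through a generative model.

```python
# A minimal sketch of a "classic" ML moderation classifier:
# a TF-IDF bag-of-words model feeding a logistic regression.
# Toy data only -- a real pipeline would train on a large,
# carefully labeled corpus and tune thresholds per policy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = flag for human review, 0 = fine.
texts = [
    "you are a worthless idiot and everyone hates you",
    "kill yourself nobody wants you here",
    "thanks for the help, this fixed my problem",
    "great stream last night, see you next week",
]
labels = [1, 1, 0, 0]

# Train a small, cheap, CPU-friendly classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; route high-probability items to human moderators.
score = model.predict_proba(["you people are idiots"])[0][1]
print(f"flag probability: {score:.2f}")
```

The point isn't that this toy model is any good; it's that tools in this family run in milliseconds on a CPU, which is a very different cost and climate profile from calling a generative model on every post.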
OpenAI's involvement in particular does not inspire confidence that limited, specialized machine learning is what will be employed here. People whose reaction is to criticize Roost's use of AI, because OpenAI projects are usually extremely worthy of criticism, stand a fair chance of being proven right.
If you find yourself online defending a Google/OpenAI/Discord/Roblox-funded project of any type, I'd take a step back. Don't attack people for having a very reasonable reaction to those companies and to any project they label AI.
Also: Don't Harass or Mock People
As a PS, I want to note that I'm seeing a lot of unfortunate 'savvy' reactions about Roost. There's a lot of 'these people don't understand what ML is' and a lot of 'don't you know how moderation works' going around and uhhhh
don't.
Don't do that.
If you want to explain the differences or talk about how moderation via ML works, then do so. But if your reaction boils down to 'I'm smarter than you so listen to me, dummy', that's bad. You're doing a bad thing. These people have legitimate concerns, and 'you just don't understand moderation machine learning' isn't helping to address them.
On the other front, while you absolutely should mock large tech companies and call them out, don't harass individual people over conversations about moderation.
I am Ready to be Impressed
I'm not here to write off Roost or its contributors. I'm interested in seeing what they produce and in evaluating the actual software and what it does.
They are, however, starting in the negative. This isn't a good-looking launch, these aren't good partners, and the pitch is, to be blunt, rancid. That's where they are starting from as far as I'm concerned. Show me that there's good here and I'll gladly celebrate it, but I'm not giving Roost any benefit of the doubt.