Hello everyone,
We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do: since we changed our registration policy, they simply post from another instance instead.
We keep working on a solution; we have a few things in the works, but they won't help us right now.
Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.
Edit: @Striker@lemmy.world, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with ten moderators, and if it hadn't been his community, it would have been another one. It is clear this could happen on any instance.
But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.
Edit 2: removed the bit about the moderator tools. That came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff has made some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we've felt helpless. Anyway, I hope we can announce something more positive soon.
What if we used deep-learning-based automoderators to instantly nuke these posts as they appear? For privacy and efficiency, let's make the model open source and community maintained… maybe even start a separate donation for maintaining it, and maybe even make it a public API!
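A minimal sketch of what that hook could look like, assuming an off-the-shelf NSFW image classifier from the Hugging Face hub (the model name is just one public example, and `nuke_post` is a hypothetical stand-in for a real admin action, not an existing Lemmy API):

```python
# Sketch: score incoming image posts with a pretrained classifier and
# act on anything above a confidence threshold.
from transformers import pipeline

# One example of a publicly available NSFW detector; any image
# classifier that emits an "nsfw" label would slot in the same way.
classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

NUKE_THRESHOLD = 0.95  # assumed value; would need tuning on real traffic


def nsfw_score(image_url: str) -> float:
    """Return the classifier's confidence that the image is NSFW."""
    results = classifier(image_url)  # [{'label': ..., 'score': ...}, ...]
    return next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)


def on_new_post(post_id: int, image_url: str) -> None:
    if nsfw_score(image_url) >= NUKE_THRESHOLD:
        nuke_post(post_id)


def nuke_post(post_id: int) -> None:
    # Hypothetical placeholder for a real moderation call.
    print(f"would remove post {post_id}")
```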
How would these models be trained? Surely they would need a huge amount of such data to be trained on.
Yeah, that can't be a community-run or open-source project. Maybe the resulting model can be open source and free to implement, but even if you could generate the training data artificially, the whole process would have to be strictly controlled.
Maybe we could make an SFW bot and train it on general porn… most of the communities don't allow porn anyway, right?
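If someone did try that, the usual recipe would be fine-tuning a pretrained vision model on SFW/NSFW labels. A rough sketch with PyTorch, assuming a dataset laid out the way torchvision's `ImageFolder` expects (the `data/train` path and class folders are hypothetical):

```python
# Sketch: fine-tune a pretrained ResNet as a binary SFW/NSFW classifier.
# Assumes images sorted into data/train/sfw/... and data/train/nsfw/...
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: sfw / nsfw

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # small epoch count, just for the sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```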
For sure, let us know when it's done.
I agree, but it's also something to be careful of.
It sounds like a lot of work for an uncertain outcome. Please volunteer to head up the project!
OK, but how can that AI distinguish those posts in a way that doesn't produce false detections? That's the main issue with handing the job of mods over to an AI.
Instead of removing, maybe hide the post and let a mod review it? I say let it report all pornographic posts and let the mods choose what should be allowed.
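One way to keep false positives manageable is two thresholds: auto-hide only at very high confidence, and merely report anything above a lower bar. A sketch of that triage logic (`hide_post` and `file_report` are hypothetical stand-ins for whatever moderation calls an instance exposes, and both thresholds are assumptions to tune):

```python
# Sketch: route classifier scores into a mod review queue instead of
# deleting outright. Nothing is permanently removed without a human.
from dataclasses import dataclass

HIDE_THRESHOLD = 0.98    # hide immediately, pending mod review
REPORT_THRESHOLD = 0.70  # leave visible but report to the mod queue


@dataclass
class Decision:
    post_id: int
    action: str  # "hide", "report", or "ignore"


def triage(post_id: int, nsfw_score: float) -> Decision:
    if nsfw_score >= HIDE_THRESHOLD:
        hide_post(post_id)               # hidden until a mod approves it
        file_report(post_id, nsfw_score)
        return Decision(post_id, "hide")
    if nsfw_score >= REPORT_THRESHOLD:
        file_report(post_id, nsfw_score)  # mods choose what's allowed
        return Decision(post_id, "report")
    return Decision(post_id, "ignore")


def hide_post(post_id: int) -> None:
    print(f"would hide post {post_id} pending review")


def file_report(post_id: int, score: float) -> None:
    print(f"would report post {post_id} (score={score:.2f}) to the mod queue")
```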
That's great for detecting possible raids, but once they're detected, I see it as over-bureaucratising the process.
I think it's a wrong assumption that a human moderator would be any more reliable than a sufficiently sophisticated, well-trained machine-learning system.