We have been informed of another potential CSAM attack on lemmy.ml, an instance we federate with.
After the events of the last time, I have preemptively and temporarily defederated us from lemmy.ml until the situation can be assessed with more clarity.
I have already deleted the suspicious posts (without looking at them myself, all from the database's command line) and banned the author. To the best of our knowledge, at no point was any CSAM content saved on our server.
EDIT: 2023-09-03 8:40 UTC
There have been no further reports of similar problems arising from lemmy.ml or other instances, so I am re-enabling federation. Thank you for your patience.
If only there were a way to have MS, AWS, or Google's vision AI scan all the images and automatically remove them when they're determined to be inappropriate.
Those tools are targeted towards large customers and you need a special relationship to get access.
They have a free tier that does 5000 images a month.
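For illustration, here is a minimal sketch of what such an automated scan could look like, assuming AWS Rekognition's moderation API (the file name and confidence threshold are placeholders, and note these general-purpose moderation APIs flag explicit content broadly rather than CSAM specifically):

```python
# Rough sketch: scan an uploaded image with AWS Rekognition's moderation API
# and flag it if any moderation label comes back above a confidence threshold.
# Assumes boto3 is installed and AWS credentials are configured; the file path
# and threshold below are placeholders for illustration only.
import boto3


def is_flagged(image_path: str, min_confidence: float = 80.0) -> bool:
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_moderation_labels(
            Image={"Bytes": f.read()},
            MinConfidence=min_confidence,
        )
    # Any returned label (e.g. "Explicit Nudity") means the image was flagged.
    return len(response["ModerationLabels"]) > 0


if __name__ == "__main__":
    if is_flagged("uploaded_image.jpg"):
        print("Image flagged by moderation scan; withhold it from publishing.")
```

Something like this could run as a hook on image uploads before they are made public, but keeping within the free tier would require rate limiting or sampling.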
As Atalocke said, there's a Cloudflare tool that automatically scans every image on your site and blocks them if it finds any flagged content. That would solve most of our issues. Access to it requires approval from the US agency that handles this type of problem, though, so it might take a while.