Why do game makers need to be the responsible party? I’ve never played a game that didn’t let you block and/or mute people you’re playing with. That doesn’t make assholes disappear but it stops the problem from impacting you. Why add a middleman to the equation?
Because the devs/mods have the power to at least attempt to remove the person from the game before anyone else has to suffer their comments.
It’s much simpler to let players decide what they will tolerate on their own.
It’s pretty simple to enable mod actions, too. Game devs make a list of rules about what you can and can’t say. You agree to those rules when you start playing the game. Breaking the rules earns you a punishment. If you don’t like it, you don’t play the game. If the rules are unfairly restrictive then people won’t play the game and it will fail. This is how internet moderation has worked since forever.
Yes, that is how moderation has worked in some places in the past. It’s also historically been unpaid volunteer work, and not particularly effective, especially at scale. Most of the people here have at least one story about bad moderation on reddit, precisely because that kind of moderation is inefficient and heavily influenced by the personal bias of whichever moderator reviews a report. You still needed to block people on a regular basis if you wanted to both participate and avoid harassment from a subset of users. That’s how it is all over the internet, and there’s nothing that can completely remove that element of online activity. Hence the need for thicker skin.
Well yeah, that’s why part of Riot’s solution seems to be adding more mods. I’d be more understanding if Riot didn’t have the resources to add more paid mod support, but I truly don’t think that’s the case. So yeah, pay more mods and use more advanced technology to flag communication, I think that’s an attainable goal.
I’m not saying that people shouldn’t still protect themselves by blocking harassment, but I believe it’s perfectly within devs’ abilities to at least attempt to remove the most heinous bullies from the game.
While that is true in many respects, voice chat is quite difficult to police compared to text chat. I’m not sure how you go about automating or even monitoring it without recording everything people say on your service, which brings up a whole host of issues, from data storage costs to privacy concerns to consent-to-record laws. You pretty much have to rely on users to submit evidence of their claims, and that leads us back to the idea that users need to expect to play an active role in enforcing any moderation policy.
It doesn’t bring up any issues to record people for moderation purposes if it’s spelled out in the Terms of Service of whatever service or game you’re using. Agreeing to the ToS is a form of contract. CoD’s voice chat, for example, is already monitored and recorded.
Also, as AI voice recognition gets better, so will the effectiveness of those moderation tools, not just in terms of speed but also in terms of cost.