Every political thread is chock full of people being angry and unreasonable. I did some data mining, and most of the hate is coming from a very small percentage of the community, and the rest of the community is very consistent in downvoting them.

The problem is that even with human moderators enforcing a series of rules, most of those people are still in the comments making things miserable. So I made a bot to do it instead.

!santabot@slrpnk.net is a bot that uses an algorithm similar to PageRank to analyze the Lemmy community, and preemptively bans the roughly 1-2% of posters who consistently get a negative reaction. Take a look at an example of the early results. See how nice that is? It’s just people talking, and when they disagree, they say things like “clearly that part is wrong” and “your additions are good information though.”
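
If you’re curious about the mechanics, here is a toy sketch of what PageRank-style scoring over a vote graph can look like. To be clear, this is an illustration of the general idea only, not the bot’s actual code: the graph construction, damping factor, and the ~1.5% cutoff below are placeholder assumptions.

```python
# Toy sketch of PageRank-style reputation scoring over a vote graph.
# The graph construction, damping factor, and cutoff are illustrative
# assumptions, not santabot's actual implementation.
from collections import defaultdict

def trust_rank(upvotes, damping=0.85, iterations=50):
    """upvotes: iterable of (voter, author) pairs.
    Returns a PageRank-like trust score per user: users who are upvoted
    by trusted users become trusted themselves."""
    out_weight = defaultdict(int)   # upvotes each user has cast
    incoming = defaultdict(list)    # who upvoted each user
    users = set()
    for voter, author in upvotes:
        out_weight[voter] += 1
        incoming[author].append(voter)
        users.update((voter, author))

    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        rank = {
            u: (1 - damping) / len(users)
               + damping * sum(rank[v] / out_weight[v] for v in incoming[u])
            for u in users
        }
    return rank

def flag_worst_received(upvotes, downvotes, fraction=0.015):
    """Weight every vote by the voter's trust, total it per author, and
    flag the bottom ~1.5% of users, but only if their total is negative."""
    trust = trust_rank(upvotes)
    score = defaultdict(float)
    for voter, author in upvotes:
        score[author] += trust.get(voter, 0.0)
    for voter, author in downvotes:
        score[author] -= trust.get(voter, 0.0)
    worst_first = sorted(score, key=score.get)
    n = max(1, int(len(worst_first) * fraction))
    return [u for u in worst_first[:n] if score[u] < 0]
```

The intuition behind weighting votes this way is that a downvote from someone the community consistently upvotes counts for more than one from a throwaway account, so a handful of grudge-voters can’t get anyone flagged on their own.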

It’s too early to tell how well it will work on a larger scale, but I’m hopeful. So, welcome to my experiment. Let’s talk politics without all the abusive people in the picture. Please come in and help test whether this thing can work in the long run.

Pleasant Politics

!pleasantpolitics@slrpnk.net

  • archomrade [he/him]@midwest.social · 4 months ago

    I know this will ring hollow, considering I am (predictably) on the autoban list, but:

    I don’t know how this isn’t an any% speedrun to a political echo chamber. People downvote posts and comments for a lot of reasons, and a big one (maybe the biggest in a political community) is general disagreement/dislike, or even simply extreme, abstract mistrust. This is basically just crowdsourced, vibes-based moderation.

    Then again, I think communities are allowed to moderate/gatekeep their own spaces however they like. I see little difference between this practice and .ml or lemmygrad preemptively banning users based on comments made in other communities. In fact, I expect the same bot deployed on .ml or hexbear would end up banning the most impassioned centrist users from .world and kbin, and it would result in an accelerated siloing of the fediverse if it were applied at scale. Each community has a type of user it finds the most disagreeable, and the more this automod is allowed to run, the more each space will end up being defined by that perceived opposition.

    Little doubt I would find the consensus-view unpalatable in a space like that, so no skin off my nose.

    • auk@slrpnk.netOP · 4 months ago

      I looked at the bot’s judgements about your user. The issue isn’t your politics. Anti-center or anti-Western politics are the majority view on Lemmy, and your posts about your political views get ranked positively. The problem is that somehow you wind up in long heated arguments with “centrists” which wander away from the topic and get personal, where you double down on bad behavior because you say that’s the tactic you want to employ to get your point across. That’s the content that’s getting ranked negatively, and often enough to overcome the weight of the positive content.
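
      To put made-up numbers on that (illustrative only, not your actual scores): if 20 political comments each earn +0.01 of trust-weighted rank while 60 argumentative comments each cost 0.005, the total is 20 × 0.01 - 60 × 0.005 = -0.1, so the account nets out negative even though the political content on its own is received fine.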

      If Lemmy split into a silo that was the 98.6% of users that didn’t do that, and a silo of 1.4% of users that wanted to do that, I would be okay with that outcome. I completely agree with your concern in the abstract, but that’s not what’s happening here.

      • archomrade [he/him]@midwest.social · 4 months ago

        The problem is that somehow you wind up in long heated arguments with “centrists” which wander away from the topic and get personal

        I’m not surprised I was identified by the bot, but it’s worth pointing out that ending up in heated arguments happens because people disagree. Those things are related. If someone is getting into lots of lengthy disagreements that are largely positive but devolve into the unwanted behavior, doesn’t that at least give legitimacy to the concern that dissenting opinions are being penalized simply because they attract a lot of impassioned disagreement? Even if both participants in that disagreement are penalized, that just means any disagreement that may already be present isn’t given the opportunity to play out. Your community would just be lots of people politely agreeing not to disagree.

        I have no problem with wanting to build a community around a particular set of acceptable behaviors; I don’t even take issue with trying to quantify that behavior and automate it. But we shouldn’t pretend as if doing so doesn’t have unintended polarizing consequences.

        A community that allows for disagreement but limits argumentation isn’t neutral; it gives preference to status-quo and consensus positions by limiting the types of dissent allowed. If users aren’t able to resolve conflicting perspectives through argumentation, then the consensus view is left uncontested (at least not meaningfully contested). That isn’t a problem if the intent of the community is to enforce decorum so that contentious argumentation happens elsewhere, but if a majority of communities adopt a similar moderation policy, then of course it is going to result in siloing.

        I might also point out that an argument that is drawn out over dozens of comments and ends in that ‘unwanted’ behavior you’re looking for isn’t all that visible to most users; if you’re someone who is trying to avoid ‘jerks’, then I would think the relative nested position/visibility of that activity should be important. I’m not sure how your bot weighs activity against that visibility, but I think even that doubt calls into question the effectiveness of this as a strategy.

        Again, not challenging the specific moderation choices the bot has made, just pointing out the problem of employing this type of moderation on a large scale. How it plays out in this particular community is interesting, though.

        • auk@slrpnk.netOP · 4 months ago

          Do you mind if I give some examples? What you’re saying is valid in the abstract, but I think pointing out concrete examples of what the bot is reacting to will shed some light on what I’m talking about.

          • archomrade [he/him]@midwest.social · 4 months ago

            You’re free to provide examples, but like I said, it’s not the specific moderation choices that are the problem; it’s using public sentiment as a core part of that determination.

            • auk@slrpnk.netOP · 4 months ago

              Here are examples of things you got positive rank for (politics and argumentation):

              Here are examples of things you got negative rank for (interpersonal squabbling, not directly political):

              Maybe this is harsh, but I think this is a good decision by the bot. The first list is fine. Most of your political views are far from unpopular on Lemmy. The thing is that you post a lot more of the squabbling content than the political content. You said you’re being unpleasant on purpose, don’t plan to stop, and that people should probably block you. I feel okay about excluding that from this community.

              If in the future you change your mind about how you want to converse, you can send a comment or DM. We can talk about it, make sure you’re not being targeted unfairly, but in the meantime this is completely fair.

                • auk@slrpnk.netOP · 4 months ago

                  I made this system because I, too, was concerned about the macro social implications.

                  Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good. Furthermore, when certain things get too unpleasant to deal with on any level anymore, big instances will defederate from each other completely. The macro social implications of that are exactly why I want to try a different model, because the current one doesn’t seem very good.

                  You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address that concern and acknowledge that it’s valid. Your concern is noted. If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.

                  • Madison420@lemmy.world · 4 months ago

                    You’ve created the lizard lounge from reddit, dude; you’re basically limiting a sub to power users and saying it’s a good thing. It’s not.

                  • archomrade [he/him]@midwest.social · 4 months ago

                    Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good.

                    You don’t need a social credit tracking system to auto-ban users if there’s a big majority of the community that recognizes the user as problematic: you could manually ban them, or use a ban voting system, or use the bot to flag potentially problematic users to assist with manual-ban determinations, or hand out automated warnings… Especially if you’re only looking at 1-2% of problematic users, is that really so many that you can’t review them independently?

                    Users behave differently in different communities… Preemptively banning someone for activity in another community is already problematic because it assumes they’d behave the same way in the new one, but now it’s for activity that is ill-defined and aggregated over many hundreds or thousands of comments. There’s a reason each community has its rules clearly spelled out in the sidebar: they each have different expectations, and users need those expectations spelled out if they’re to have any chance of following them.

                    I’m sure your ranking system is genius and perfectly tuned to the type of user you find the most problematic; your data-analysis genius is noted. The problem with automated ranking systems isn’t that they’re bad at what they claim to be doing; it’s that they’re undemocratic and dehumanizing and provide little recourse for error, and when applied at large scales those problems become amplified and systemic.

                    You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address that concern and acknowledge that it’s valid.

                    That isn’t my concern with your implementation; it’s that it limits the ability to defend opposing views when they occur. Consensus views don’t need to be defended against aggressive opposition, because they’re already presumed to be true; a dissenting view will nearly always be met with hostile opposition (especially when it regards a charged political topic), and by penalizing defenses of those positions you allow consensus views to remain unopposed. I don’t particularly care to defend my own record, but since you provided the examples, it’s worth pointing out that all of the penalized ones from my account were in response to hostile opposition and character accusations. The positively ranked comments were within the consensus view (like you said), so of course they rank positively. I’m also tickled that one of them was a comment critiquing exactly the kind of arbitrary moderation policy you’re defending now.

                    If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.

                    Even if I weren’t on the ban list and could see it, I wouldn’t have any interest in critiquing its ban choices, because that isn’t the problem I have with it.