The closer I look, the more depressed I get.

First of all, the entire thing feels off. Quoting one commenter:

So this seems to be some kind of universal package manager where most of the content is AI generated and it’s all tied into some kind of reverse bug bounty thing that also has crypto built in for some reason? I feel like we need a new OSS license that excludes stuff like this. Imagine AI-generated curl | bash installers 🤮

The bug bounty thing in question is apparently tea.xyz. From what I can tell, the only things actually being AI-generated are package descriptions and logos for an experimental web frontend to the registry, not package contents or build/distribution instructions (thank god).

Apparently pkgx (the package manager in question) is being built by the person who created brew. I leave it up to the reader’s sensibilities to decide whether this is a good or bad omen for the project itself.

Now we get to the actual sneer-worthy content (in my view): the comments from a certain user for whom PKGX seems to be the best thing since sliced bread, who brushes off any criticism of using AI for the project’s hosted content, and who thinks we should all change our preferences and habits to accommodate it:

PKGX didn’t (and still doesn’t) have a description and icon/logo field. However, from beginning (since when it was tea), it had a large number of packages (more than 1200 now). So, it would have been hard to write descriptions and add images to every single package. There’s more than just adding packages to the pantry. PKGX Pantry is, unlike most registries, a fully-automated one. But upstreams often change their build methods, or do things that break packaging. So, some areas like a webpage for all packages get left out (it was added a lot later). Now, it needed images and descriptions. Updating descriptions and images for every single package wouldn’t be that good. So, AI-based image and description generation might be the easiest and probably also the best for everyone approach. Additionally, the hardwork of developers working on this project and every Open-Source project should be appreciated.

I got whiplash from the speed at which they pivot from arguing “it would have been hard for a human to write all these descriptions” to “the hardwork of developers working on this project […] should be appreciated”. So the work is “hard” enough to justify letting people deal with spicy autocomplete in the product itself, but apparently copying the descriptions that many of these projects make publicly available anyway was too hard??? Not to mention the packaged software probably already ships descriptions that took time and effort to write, which this thing just disregards in favor of having Stochastic Polly guess what flavor of cracker it’s about to feed you.

When others push back against AI-anything being so heavily involved in this package registry project, we get the next pearl of wisdom (emphasis mine):

But personally I think, a combination of both AI and human would be the best. Instead of AI directly writing, we can maybe make it do PR (for which, we’ll need to add a description field). The PR can be reviewed. And if it’s not correct, can also be corrected. That’s just my opinion.

Surely the task of reviewing something written by an AI that can’t be blindly trusted, a task that basically requires you to know what said AI is “supposed” to write in the first place to be able to trust its output, is bound to always be simpler and result in better work than if you sat down and wrote the thing yourself.

Icing on the cake, the displayed profile name for the above comment’s author is rustdevbtw. Truly hitting as many of the “tech shitshow” bingo squares as we can! (no shade intended towards rust itself, I really like the language, I just think playing into cliques like this is not great).

My original post title was going to be something a bit more sensational like “Bored of dealing with actual human package maintainers? Want to get in on that AI craze? Use an LLM to generate descriptions for curl-piped-to-bash installations scraped from the web!”, but in doing my due diligence I see that the actual repo owner/maintainer shows up and is infinitely more reassuring in their comments, and imo shows a good level of responsibility in cleaning up the mess that spawned in the comments section of that GitHub issue.

  • V0ldek@awful.systems · 7 months ago

    Surely the task of reviewing something written by an AI that can’t be blindly trusted, a task that basically requires you to know what said AI is “supposed” to write in the first place to be able to trust its output, is bound to always be simpler and result in better work than if you sat down and wrote the thing yourself.

    This is only semi-related but.

    When I quit Microsoft last year they were heavily pushing AI into everything. At some point they added an automated ChatGPT nonsense “summary” to every PR you opened. First it’d edit the description to add its own take on the contents, and then it’d add a review comment.

    Anyone who has had to deal with PR review knows it can be frustrating. This made it so that right off the bat you would have to deal with a lengthy, completely nonsensical review that missed the point of the code, asked for terrible “improvements”, or straight up proposed incorrect code.

    In effect it made the process much more frustrating and time-consuming. The same workload was there, plus you had to read the equivalent of a 16-year-old who thinks he knows how software works explaining your work to you, badly. And since it’s a bona fide review comment, you have to address it and close it. Absolutely fucking kafkaesque.

    Forcing humans to read and clean up AI-regurgitated nonsense should be a felony.

    • F4GRX Sébastien@chaos.social · 7 months ago

      @V0ldek @Jayjader omg I so wholeheartedly agree. Why do we have to review shit we could write in the first place?

      The same goes for music and visual arts.

      Some people say that it will be *more productive* to have art created by generative models and then have a human fix it. But why? Why leave the crap jobs to humans and have the good work done by a crap machine? This is complete nonsense!

    • Mii@awful.systems · 7 months ago

      Microsoft is trying so fucking hard to push AI that it’s become pathetic at this point.

      For some reason I still get those stupid GitHub emails even though I think I’ve unsubscribed from them ten times already, and it’s always the same crap about “93% of developers agree that AI is making them 67% more efficient”, or “86% of code is now written with the help of Copilot”, or “420% of our users generate sexy catgirl porn on company time”. And there’s never a source for these numbers.

      Meanwhile, even the most AI-hyped marketing bozos in our company agree that Copilot is fucking useless. Even asking it to sum some Excel rows takes more time than just writing the stupid macro yourself, and you have to double-check the macro it writes anyway.

      This really has shitty Kickstarter vibes, where they make up even more insane crap every update just to get people to pledge more while they’re burning through their funds without delivering anything they promised.

      It annoyed me enough to move my own repos all over to Codeberg, so I guess that’s something at least.

      • V0ldek@awful.systems · 7 months ago

        At least 1% of their users generate sexy catboy porn on company time. Source: I am that user.

      • Neo-Luddite Gregly@retro.pizza · 7 months ago

        @mii Look, if big corporations want to pay everyone six figures to generate sexy catgirl (or catboy) porn on company time, I’m not gonna stop them. /s

    • self@awful.systems · 7 months ago

      When I quit Microsoft last year they were heavily pushing AI into everything. At some point they added an automated ChatGPT nonsense “summary” to every PR you opened. First it’d edit the description to add its own take on the contents, and then it’d add a review comment.

      dear fuck I’m glad I dropped GitHub for anything important, cause this is almost certainly internal dogfooding for the next godawful shit they’re adding to it

        • @self you’d be surprised how often being 100% honest and unreserved about your opinion on processes that are complete bullshit actually works in your favour. especially if you don’t filter it through some veneer of corporate waffle. in the worst case they make it clear that they’re not going to fix it, so you quit and save your sanity. but I’ve had quite a bit of success with just saying “this is utterly miserable to deal with, why are we subjecting everyone to this?”

          • @self it certainly doesn’t work everywhere but for me it’s a good yardstick of how good the cultural fit is. if I can’t get people to talk to me like a human instead of blathering on like they’re dictating a linkedin post then I’m out the door.

            • froztbyte@awful.systems · 7 months ago

            Depends on the dev team culture tbh

            Ime US teams are far more likely in general to take feedback like that negatively (sometimes even personally). Seems to be because there’s some part of identity / self-value tied up in what was already created. The “this is miserable and we’re gonna $x” is usually addressed by setting $x to “throw it out and replace it” rather than “we’re going to prioritise trying to fix this”.

            • David Gerard@awful.systems · 7 months ago

              yeah, speaking British in a US workplace may be hazardous (an accent only hacks 'merkin brains so far)

              (I would mention speaking 'Strayan but even Australians know better than to use that in Australian workplaces, the adjective “cunting” is apparently not considered professional [by the people it totally applies to])

              • froztbyte@awful.systems · 7 months ago

                The boot never likes hearing anything but “argggggh” or “ow that hurts”. Really weird, no-one has been able to figure it out!

      • V0ldek@awful.systems · 7 months ago

        Not sure which community this fits in. Is it a tech take? I guess it’s largely Satya’s fault…

        • self@awful.systems · 7 months ago

          I’d say TechTakes is the best fit since Microsoft has always come off as less motivated by Rationalist ideals and more by fuck-you-got-mine opportunist objectivism

      • V0ldek@awful.systems · 7 months ago

        Microsoft does dogfood all the things actually, which I think is one of the good aspects of the company.

        We used the experimental versions of Teams, all Azure changes were first deployed to the part of the cloud used by MSFT, etc. Even new C#/.NET versions are first run through internal projects.

      • corbin@awful.systems · 7 months ago

        Microsoft is legendary for this. In fact, I’ll give you Microsoft’s entire business recipe; it’s not secret:

        • Dogfood all products
        • Maintain backwards compatibility at all costs
        • Have at least a decade’s worth of liquid operating funds in the bank at all times
    • @V0ldek @Jayjader At some point, I worked on a real estate website. We wanted to add little pages describing all the various neighborhoods in major cities, and contracted actual humans to do so, people who lived in said neighborhoods.

      Reviewing their work, rewriting parts of it, and fixing mistakes was excruciating, I can’t even imagine trying to do that with the insipid writings of an AI that doesn’t understand the context or the purpose of what is asked of it…

    • froztbyte@awful.systems · 7 months ago

      Holy shit that sounds infuriating

      It’s already immensely frustrating how many of these things operate in such a completely non-dwim way (especially when they’re interjected in workflows that don’t need them), but overtly forcing modifications like that into one’s work is invasive and abusive as hell

      I’ve been meaning to write about this for a while, think that’ll be in this week’s todo list…

    • Jon A. Cruz@mstdn.social · 7 months ago

      @V0ldek @Jayjader jeez. Reminds me of what they did with their whole TDD “no, don’t do it! Oh ok, now call whatever you did do ‘TDD’ even though we didn’t allow you to do TDD” thing.

    • stony kark@mastodon.world · 7 months ago

      @V0ldek @Jayjader damn I’d resolve that conversation so fast the bot wouldn’t know what to think. If I wanted someone to propose nonsensical changes I’d ask someone not involved in the project to review it?