• 5C5C5C@programming.dev

    Sure, but this outcome is not at all surprising. There are plenty of smart AI people who have nuanced views of what kind of threat could be posed by recklessly unleashing tools that we don’t fully understand into the hands of people who are likely to do harmful things with them.

    It’s not surprising that those valid, nuanced concerns get translated into overly simplistic misrepresentations, entangled with pop sci-fi panic about rogue AI, as they move into public discourse.

    • Sgagvefey@lemmynsfw.com

      We do fully understand them. Not knowing exactly why a model arrives at a particular output doesn’t mean the algorithm has a shred of mystery involved. It’s like saying we don’t understand fluid dynamics because it’s computationally heavy.

      It’s autocomplete with a really big training set and a really big model. It cannot possibly develop agency. It’s hundreds of orders of magnitude of complexity short of a human.
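
      To make the “autocomplete” framing concrete, here’s a toy sketch of that loop. Everything in it is an invented stand-in: the bigram table plays the role of the “really big model”, and a real LLM would replace the table lookup with a giant learned network. The outer loop is the same either way: predict one token, append it, repeat.

      ```python
      # Toy "autocomplete": the autoregressive loop at the heart of an LLM.
      # The "model" is a hand-written bigram table standing in for learned weights.
      import random

      BIGRAMS = {
          "the": {"cat": 3, "dog": 1},
          "cat": {"sat": 2, "ran": 1},
          "sat": {"down": 1},
      }

      def next_token_distribution(token):
          counts = BIGRAMS.get(token, {})
          total = sum(counts.values())
          return {t: c / total for t, c in counts.items()} if total else {}

      def autocomplete(prompt, max_tokens=5):
          tokens = prompt.split()
          for _ in range(max_tokens):
              dist = next_token_distribution(tokens[-1])
              if not dist:
                  break  # dead end: no continuation was ever "trained"
              # Sample the next token in proportion to its probability.
              tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
          return " ".join(tokens)

      print(autocomplete("the cat"))  # e.g. "the cat sat down"
      ```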

      • 5C5C5C@programming.dev

        That’s not what algorithms researchers mean when we talk about “understanding”. Obviously we know the mechanism by which it operates; it’s not an unknown alien technology that dropped into our laps.

        Understanding an algorithm means being able to predict the characteristics of its outputs based on the characteristics of its inputs. E.g. will it give an optimal solution to a problem that we pose? Will its response satisfy certain constraints or fall within certain bounds?
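
        To illustrate the distinction: for a classical algorithm those properties can often be proven outright, while for a black-box generator the best we can usually do is sample outputs and measure how often a constraint holds. A minimal sketch of that sampling approach, with black_box and the bound both invented for illustration:

        ```python
        # Estimating, rather than proving, whether outputs satisfy a constraint.
        import random

        def black_box(x):
            # Placeholder for a model call we cannot analyze in closed form.
            return x + random.gauss(0, 1)

        def satisfaction_rate(constraint, inputs, trials=1000):
            """Empirically estimate how often outputs satisfy a constraint."""
            hits = sum(constraint(black_box(random.choice(inputs))) for _ in range(trials))
            return hits / trials

        # "Will its response fall within certain bounds?" Here we can only estimate.
        rate = satisfaction_rate(lambda y: abs(y) < 3.0, inputs=[0.0, 1.0, -1.0])
        print(f"constraint satisfied in ~{rate:.1%} of samples")
        ```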

        Figuring this stuff out for foundation models is an active area of research, and the absence of this predictability is an enormous safety concern for any use cases where the output can be consequential.

        > It cannot possibly develop agency.

        I don’t believe I’ve suggested anywhere that I think it will, but I’ll play around with this concern anyway… There’s a lot of discussion going on about having models feed back on themselves to learn from their own output. I don’t find it all that hard to imagine that something we could reasonably consider self-awareness could form in a very complex neural network that is able to consume and process its own outputs. And once self-awareness starts to form, it’s not that hard for me to imagine a sense of agency following. I have no idea what the model might use that agency for, but I don’t think it’s all that far-fetched to consider the possibility of it happening.
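
        For what it’s worth, the feedback loop itself is trivial to wire up. This is purely a sketch: generate is a hypothetical stand-in for any text-in/text-out model call, and nothing here implies anything about awareness; it only shows how directly output can be routed back to input.

        ```python
        # Sketch of a model repeatedly consuming its own output.
        def generate(text):
            # Hypothetical stand-in for a model call (e.g. an API request).
            return text + " ..."

        def self_feedback(seed, rounds=3):
            state = seed
            for _ in range(rounds):
                state = generate(state)  # the output becomes the next input
            return state

        print(self_feedback("initial thought"))
        ```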

        • Sgagvefey@lemmynsfw.com

          There are plenty of nondeterministic algorithms; nondeterminism is not a special trait. There are plenty of algorithms with actual emergent behavior, which LLMs don’t have to any meaningful extent. We absolutely understand how LLMs work.
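
          Monte Carlo estimation is the textbook example: the algorithm below is nondeterministic, yet its output characteristics are completely understood, since the error shrinks roughly as 1/sqrt(n). A toy version just to make the point:

          ```python
          # A nondeterministic algorithm with fully understood output behavior:
          # the estimate converges on pi, with error shrinking ~ 1/sqrt(n).
          import random

          def monte_carlo_pi(n=1_000_000):
              inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                           for _ in range(n))
              return 4 * inside / n

          print(monte_carlo_pi())  # ~3.14, with a predictable error bound
          ```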

          The answer to both of your questions is not some unsolved mystery. It’s “of course not”. That’s not what they do, and doing it would fundamentally require a much more complex architecture to even approach.

          • 5C5C5C@programming.dev

            Non-deterministic algorithms such as Monte Carlo methods or simulated annealing can still be constrained to an acceptable state space. How to do this effectively for LLMs is a very open question, largely because the state space of the problems that they are applied to is incomprehensibly huge.
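
            As a minimal sketch of what “constrained to an acceptable state space” can mean (objective and feasible region invented for illustration): simulated annealing can simply reject any proposed move that leaves the feasible set, so the constraint holds by construction.

            ```python
            # Simulated annealing that never leaves the feasible region,
            # because infeasible proposals are rejected outright.
            import math
            import random

            def feasible(x):
                return 0.0 <= x <= 10.0  # the "acceptable state space"

            def energy(x):
                return (x - 7.3) ** 2  # toy objective to minimize

            def anneal(x=5.0, temp=1.0, cooling=0.995, steps=10_000):
                for _ in range(steps):
                    candidate = x + random.uniform(-0.5, 0.5)
                    if not feasible(candidate):
                        continue  # constraint enforced before anything else
                    delta = energy(candidate) - energy(x)
                    if delta < 0 or random.random() < math.exp(-delta / temp):
                        x = candidate
                    temp *= cooling
                return x

            print(anneal())  # converges near 7.3 and never steps outside [0, 10]
            ```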