• mindlesscrollyparrot@discuss.tchncs.de
    6 months ago

    This seems to be a really long way of saying that you agree that current LLMs hallucinate all the time.

    I’m not sure that the ability to change in response to new data would necessarily be enough. They cannot form hypotheses and, even if they could, they have no way to test them.

    • UnpluggedFridge@lemmy.world
      6 months ago

      My thesis is that we are asserting that AIs lack human-like qualities which we ourselves cannot define or measure. Assertions should be made on data, not on the uneasy feeling that arises when an LLM falls into the uncanny valley.

      • mindlesscrollyparrot@discuss.tchncs.de
        6 months ago

        But we do know how they operate. I saw a post a while back where somebody asked an LLM how it was calculating (incorrectly) the date of Easter. It replied with the formula for the date of Easter. The only problem is that this answer was a lie: the model doesn’t calculate at all. You or I can perform long multiplication if asked to, but the LLM can’t (ironically, since the hardware it runs on is far better at multiplication than we are).
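        For reference, the formula the model quoted is a genuinely mechanical procedure. A minimal sketch in Python of one standard version (the anonymous Gregorian computus, often attributed to Meeus/Jones/Butcher; the function name is mine) shows the kind of deterministic integer arithmetic that an actual calculation involves:

```python
def gregorian_easter(year: int) -> tuple[int, int]:
    """Anonymous Gregorian computus (Meeus/Jones/Butcher).

    Returns (month, day) of Easter Sunday for the given year,
    using nothing but integer arithmetic -- the deterministic
    calculation an LLM can describe but does not perform.
    """
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century and year-within-century
    d, e = divmod(b, 4)                  # leap-century corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-based offset of the Paschal full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451     # correction for late full moons
    month = (h + l - 7 * m + 114) // 31
    day = (h + l - 7 * m + 114) % 31 + 1
    return month, day

print(gregorian_easter(2024))  # (3, 31) -- Easter fell on 31 March 2024
```

        An LLM can reproduce this text, but reproducing the description of an algorithm is not the same as executing it.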

        • UnpluggedFridge@lemmy.world
          6 months ago

          We do not know how LLMs operate. As with our own minds, we understand some of the primitives, but we have no idea how certain phenomena emerge from those primitives. Your assertion would be like saying we understand consciousness because we know the structure of a neuron.