Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Michal@programming.dev · 2 months ago

    AGI could be possible if a new breakthrough is made. Currently, LLMs are just pretty good text predictors, and any intelligence they exhibit is there because they were trained on texts exhibiting intelligence (written by humans). Make a large enough model and it will seem like an intelligent being.
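
    To make the "text predictor" point concrete, here is a minimal sketch of next-word prediction in Python. The toy bigram model is purely illustrative (my own stand-in, not how GPT-4 or Llama 3 work internally), but the generate-one-word-at-a-time loop is the same idea scaled way down:

    ```python
    # Toy illustration of next-word prediction (a bigram model standing in for
    # an LLM; real models condition on long contexts with billions of weights).
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count which word follows which one-word context.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Greedy decoding: always pick the most frequent continuation.
        return follows[word].most_common(1)[0][0]

    # Autoregressive generation: feed each prediction back in as context.
    word, out = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        out.append(word)
    print(" ".join(out))  # -> "the cat sat on the cat"
    ```

    Scale that counting table up to billions of learned weights over a huge training corpus, and the outputs start to look intelligent, because the training text was written by intelligent beings.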

    • lunarul@lemmy.world · 2 months ago

      > Make a large enough model, and it will seem like an intelligent being.

      That was already true in previous paradigms. A conventional, non-fuzzy, non-neural-network algorithm, if large and complex enough, will also seem like an intelligent being. But "large enough" is beyond our resources, and the processing time for each response would be too long.

      And then you get into the Chinese room problem: is there a difference between seeming intelligent and being intelligent?

      But the main difference between an actual intelligence and various algorithms, LLMs included, is that intelligence works on its own: it's always thinking, and it doesn't only react to external prompts. You ask a question and you get an answer, but the question remains at the back of its mind, and it might come back to you ten minutes later and say, "You know, I've given it some more thought, and I think it's actually like this."
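
      A rough sketch of that contrast, in Python. Everything here is hypothetical; keep_thinking() is just a stand-in for a continued-deliberation process that no current LLM actually runs:

      ```python
      # Hypothetical sketch: a reactive system vs. one that keeps deliberating
      # after it answers. Nothing here is a real intelligence; the background
      # thread merely illustrates "the question stays at the back of its mind".
      import threading
      import time

      def reactive_answer(question: str) -> str:
          # LLM-style behavior: one prompt in, one answer out, then nothing.
          return f"Initial answer to {question!r}"

      def keep_thinking(question: str, follow_up) -> None:
          # Stand-in for continued deliberation after the exchange "ended".
          time.sleep(1.0)  # pretend to mull it over
          follow_up(f"You know, I've given {question!r} some more thought...")

      def ask(question: str) -> str:
          answer = reactive_answer(question)
          # Unlike a prompt-driven model, spawn ongoing thought that calls back.
          threading.Thread(target=keep_thinking, args=(question, print),
                           daemon=True).start()
          return answer

      print(ask("Are LLMs intelligent?"))
      time.sleep(2.0)  # keep the process alive long enough for the follow-up
      ```

      An LLM only ever runs the first function; the second one, unprompted follow-up, is the part no amount of scaling the text predictor gives you for free.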