• xptiger@lemmy.world
    1 year ago

    Not yet, not right now; those outputs are hilarious.

    But still, as long as AI keeps learning, improving, and advancing, it may sadly come true sooner or later.

    • jj4211@lemmy.world
      1 year ago

      There are certain ways in which AI seems to plateau short of the desired usefulness. If the output is supposed to be verbatim and specific to the point that a machine could consume it directly, it tends to make these sorts of mistakes. Where it shines is ingesting ‘messy’ direction and data, processing it, and spitting it back to a human, who similarly has to be capable of correcting ‘messy’ output, though with the AI doing a lot of the heavy lifting and drawing on far more data.

      That is a huge amount of utility, but for very specific things, and things that really require expertise, there seems to be an enduring need for human review. However, the number of people needed may be dramatically reduced.

      It’s similar to other areas of computing. Back in the 1930s, an animated film might take 1,500 people to make. Now fewer than a tenth of that number could easily turn out better quality with all the computer assistance. We’ve grown our ambitions, and to make a similarly ‘blockbuster’-scale production, insanely more detailed and thoroughly animated, we’re still talking about less than half the staff.

      It seems that AI will be another step in that direction: reducing the staff required to exceed current expectations, but not entirely displacing them. It becomes extra challenging, though, since the impacts may be felt across so many industries at once, with no clear area to soak up the reduced labor demand.

    • assassin_aragorn@lemmy.world
      1 year ago

      I think LLMs will be fundamentally unable to do certain things because of how they function. They’re only as good as what they’re trained on, and given how fast and loose companies have been with training data, the models will have formed patterns based on incorrect information.

      Fundamentally, an LLM doesn’t know what it generates. If I ask for a citation, it’ll likely give me a result that seems legit but doesn’t actually exist.