• conciselyverbose@sh.itjust.works
    20 days ago

    My theory about what happened next — which is supported by conversations I’ve had with researchers in artificial intelligence, some of whom worked on Bing — is that many of the stories about my experience with Sydney were scraped from the web and fed into other A.I. systems.

    These systems, then, learned to associate my name with the demise of a prominent chatbot. In other words, they saw me as a threat.

    🤦‍♀️

    • LostXOR@fedia.io
      20 days ago

      I’m tired of people ascribing any sort of intelligence to AI. It’s not thinking, it’s not seeing you as a threat, it’s just predicting a probable response based on its training data.

  • Deceptichum@quokk.au
    20 days ago

    This guy is a moron.

    If the bots are saying they hate him and that he sucks, it’s because that’s what the general consensus was in all the data they scraped, not because the bot is scared of him as an AI killer.

    • silence7@slrpnk.netOP
      20 days ago

      The bots are not reliable summarizers like that. They often can’t tell the difference between the author and the subject of a piece of writing.

  • BelatedPeacock@lemmy.world
    20 days ago

    My theory about what happened next — which is supported by conversations I’ve had with researchers in artificial intelligence, some of whom worked on Bing — is that many of the stories about my experience with Sydney were scraped from the web and fed into other A.I. systems.

    These systems, then, learned to associate my name with the demise of a prominent chatbot. In other words, they saw me as a threat.

    LLMs predict text; they don’t have feelings or awareness. Even if a researcher did say that, I’d call to attention the Google chatbot programmer who thought an LLM had become sentient because it said so when generating text.

    Guys, my paper is sentient, it says so.

    If the AI says he’s dishonest and sensational, that’s because enough people on the internet have said so that the AI considers it to be true.