cross-posted from: https://lemmy.world/post/2312869

AI researchers say they’ve found ‘virtually unlimited’ ways to bypass Bard and ChatGPT’s safety rules::The researchers found they could use jailbreaks they’d developed for open-source systems to target mainstream and closed AI systems.

  • kromem@lemmy.world
    link
    fedilink
    arrow-up
    9
    ·
    1 year ago

    These kinds of attacks are trivially preventable, it just requires making requests 2-3x as expensive, and literally no one cares enough about jailbreaking to do that other than the media acting like jailbreaking is such an issue.

    If you use a Nike shoe to smack yourself in the head, yes, that could be pretty surprising and upsetting compared to the intended uses. But Nike isn’t exactly going to charge their entire userbase more in order to safety-proof the product from you smashing it into your face.

    The jailbreaking issue is only going to matter when you have shared persistence resulting from requests, and at that point in time, you’ll simply see a secondary ‘firewall’ LLM discriminator explicitly checking request and response for rule-breaking content or jailbreaking attempts before writing to a persistent layer.

    As long as responses are only user-specific, this is going to remain a non-issue with unusually excessive news coverage as it’s headline grabbing and not as nuanced as real issues like biases or hallucinations.

    • LoafyLemon@kbin.social
      link
      fedilink
      arrow-up
      2
      arrow-down
      1
      ·
      1 year ago

      Not really. This isn’t AGI but a text transformer. They trained it so the most probable answer to unwanted questions is ‘I’m sorry but as an AI…’.

      However, if you phrase your question in a way researchers haven’t thought about, you will bypass the filter.

      There’s not an ounce of intelligence in LLMs, it’s all statistics.