shish_mish@lemmy.world to Technology@lemmy.world · English · 9 months ago

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)

cross-posted to: technology@lemm.ee
Mastengwe@lemm.ee · English · 9 months ago

Safe AI cannot exist in the same world as hackers.