Kid@sh.itjust.works to Cybersecurity@sh.itjust.works · English · 20 days ago
Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models (thehackernews.com)
cross-posted to: infosec_news@infosec.pub
remi_pan@sh.itjust.works · English · 20 days ago
If the jailbreak is about enabling the LLM to tell you how to make explosives or drugs, this seems pointless, because I would never trust an AI so prone to hallucinations (and basically bad at science) in such a dangerous process.