remi_pan@sh.itjust.works to Cybersecurity@sh.itjust.works • Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models (English)
17 days ago
If the jailbreak is about getting the LLM to tell you how to make explosives or drugs, this seems pointless, because I would never trust an AI so prone to hallucinations (and basically bad at science) with such a dangerous process.
My precious bodily fluids! https://m.youtube.com/watch?v=xQyf3QgRP-c