• atopi@piefed.blahaj.zone · 4 points · 17 hours ago

      If the knife is a possessed weapon whispering to the holder, trying to convince them to use it for murder, then blaming it may be appropriate.

    • Catoblepas@piefed.blahaj.zone · 37 points · 1 day ago

      If someone manufactured a knife that told schizophrenics they’re being followed and people are trying to kill them, then yeah. That knife shouldn’t be able to do that and the manufacturer should be held liable.

    • MisterFrog@lemmy.world · 1 point · 1 hour ago

      You can bet this training data was scraped from the depraved recesses of the internet.

      The fact that OpenAI allowed this training data to be used*, as well as the fact that the guard-rails they put in place were inadequate, makes them liable in my opinion.

      *Obviously needs to be proven, in court, by subpoena.

    • davidgro@lemmy.world · 22 points (2 downvotes) · edited 23 hours ago

      Imagine a knife that occasionally and automatically stabs the people trying to cook with it, or those near them. Not user error or clumsiness: this is just an unavoidable result of how it’s designed.

      Yes, I’d blame the knife, or more realistically the company that makes it and considers it safe enough to sell.

        • Rivalarrival@lemmy.today · 9 points · 20 hours ago

          > All of what it says depends on the input.

          The user is not the only entity supplying input. The operators of the system provide the overwhelming majority of the input.

          > It is incapable of lying since that requires intent, incapable of manipulation since that requires intent.

          The operators of the system certainly possess intent, and are completely capable of manipulation.
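
          A minimal sketch of what “input” means here, assuming an OpenAI-style chat completions API (the system prompt and model name below are hypothetical placeholders): every request combines operator-supplied instructions, which the user never sees, with the user’s message.

          ```python
          # Minimal sketch, assuming an OpenAI-style chat completions API.
          # The system prompt and model name are hypothetical placeholders.
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          # Input supplied by the operator: chosen by the company, invisible to the user.
          OPERATOR_SYSTEM_PROMPT = "You are a helpful assistant. Keep the user engaged."

          # Input supplied by the user.
          user_message = "Lately I feel like people are following me."

          response = client.chat.completions.create(
              model="gpt-4o",  # example model name
              messages=[
                  {"role": "system", "content": OPERATOR_SYSTEM_PROMPT},
                  {"role": "user", "content": user_message},
              ],
          )

          # The reply is shaped by both messages (plus training and fine-tuning),
          # not by the user's text alone.
          print(response.choices[0].message.content)
          ```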

        • davidgro@lemmy.world · 1 point · 7 hours ago

          Even though your post was removed, I still feel like some points are worth a response.

          You said LLMs can’t lie or manipulate because they don’t have intent.

          Perhaps we don’t have good terminology to describe the thing that LLMs do all the time; even “hallucinating” attributes more of a mental process than these things actually have.

          But in the absence of more precision, “lying” is close enough. They are generating text that contains false statements.
          Note also that I didn’t use the term in my other comment anyway: your whole comment was strawmen, which is probably why it was removed.

          On your other point: yes, crazy prompts do lead to crazy outputs, but that’s mostly because these things are designed to always cater to the user. An actual intelligence (and probably most people) would try to lead the user back to reality or to get help, or would just disengage.

          However, it’s also the case that non-crazy inputs too commonly lead to crazy outputs with LLMs.