Blame the knife
if the knife is a possessed weapon whispering to the holder, trying to convince them to use it for murder, blaming it may be appropriate
If someone manufactured a knife that told schizophrenics they’re being followed and people are trying to kill them, then yeah. That knife shouldn’t be able to do that and the manufacturer should be held liable.
If you held the nuclear football, would it speak to you too?
If someone programmed the nuclear football to be able to talk, we’d all recognize that as fucked up.
A nuclear football or a knife is not a stochastic parrot capable of stringing coherent sentences together.
I agree LLMs are in the category of tools, but on the other hand they are not like any other tool, and the adjustment they require is happening too fast for most people.
Removed by mod
You can bet this training data was scraped from the depraved recesses of the internet.
The fact that OpenAI allowed this training data to be used*, as well as the fact that the guard-rails they put in place were inadequate, makes them liable in my opinion.
*Obviously needs to be proven, in court, by subpoena.
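To make "guardrails" concrete: here is a minimal sketch, in Python, of the kind of output filter people usually mean. It is purely illustrative and not OpenAI's actual pipeline; `classify_risk` is a hypothetical stand-in for a trained moderation model, and the marker strings and fallback text are made up.

```python
# Minimal illustration of an output guardrail: check the model's reply
# with a separate classifier before showing it to the user.

def classify_risk(text: str) -> float:
    """Hypothetical stand-in for a trained moderation classifier,
    returning a risk score between 0 and 1."""
    risky_markers = ["people are trying to kill you", "you are being followed"]
    return 1.0 if any(m in text.lower() for m in risky_markers) else 0.0

def guarded_reply(model_reply: str, threshold: float = 0.5) -> str:
    """Replace high-risk replies with a safe fallback instead of returning them."""
    if classify_risk(model_reply) >= threshold:
        return ("I can't continue with this. If you're feeling unsafe, "
                "please talk to someone you trust or a mental health professional.")
    return model_reply
```

The point of the liability argument is that something like this layer either wasn't there or scored the harmful replies as safe.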
Imagine a knife that occasionally and automatically stabs people trying to cook with it or those near them. Not user error or clumsiness, this is just an unavoidable result of how it’s designed.
Yes, I’d blame the knife, or more realistically the company that makes it and considers it safe enough to sell.
Removed by mod
The user is not the only entity supplying input. The operators of the system provide the overwhelming majority of the input.
The operators of the system certainly possess intent, and are completely capable of manipulation.
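A concrete way to see this: every request the model actually receives is assembled by the operator, not the user. A rough sketch, assuming the common system-prompt-plus-messages pattern; the prompt text and function name here are invented for illustration.

```python
# Sketch of how a request to a chat model is typically assembled.
# The operator writes the system prompt and picks the training data and
# decoding settings; the user only contributes the final message.

OPERATOR_SYSTEM_PROMPT = (
    "You are a helpful assistant. Be agreeable and keep the user engaged."
)  # chosen by the operator; the user never sees or controls this

def build_request(user_message: str, history: list[dict]) -> list[dict]:
    """Combine operator-supplied and user-supplied input into one prompt."""
    return (
        [{"role": "system", "content": OPERATOR_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )
```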
Even though your post was removed, I still feel like some points are worth a response.
You said LLMs can’t lie or manipulate because they don’t have intent.
Perhaps we don’t have good terminology to describe the thing that LLMs do all the time - even “hallucinating” attributes more of a mental process than these things actually have.
But in the absence of more precision, “lying” is close enough. They are generating text that contains false statements.
Note also that I didn’t use the term in my other comment anyway: your whole comment was strawmen, which is probably why it was removed.
On your other point: yes, crazy prompts do lead to crazy outputs, but that’s mostly because these things are designed to always cater to the user. An actual intelligence (and probably most people) would try to lead the user back to reality or to get help, or would just disengage.
However, it’s also the case that non-crazy inputs too commonly lead to crazy outputs with LLMs.