• davidgro@lemmy.world · 1 day ago

    Imagine a knife that occasionally and automatically stabs people trying to cook with it, or those near them. Not through user error or clumsiness; it’s just an unavoidable result of how it’s designed.

    Yes, I’d blame the knife, or more realistically the company that makes it and considers it safe enough to sell.

      • Rivalarrival@lemmy.today · 22 hours ago

        All of what it says depends on the input.

        The user is not the only entity supplying input. The operators of the system provide the overwhelming majority of the input.

        It is incapable of lying since that requires intent, incapable of manipulation since that requires intent.

        The operators of the system certainly possess intent, and are completely capable of manipulation.

      • davidgro@lemmy.world · 9 hours ago

        Even though your post was removed, I still feel like some points are worth a response.

        You said LLMs can’t lie or manipulate because they don’t have intent.

        Perhaps we don’t have good terminology to describe the thing that LLMs do all the time; even “hallucinating” attributes more of a mental process to them than they actually have.

        But in the absence of more precision, “lying” is close enough. They are generating text that contains false statements.
        Note also that I didn’t even use the term in my other comment; your whole comment was strawmen, which is probably why it was removed.

        On your other point: yes, crazy prompts do lead to crazy outputs, but that’s mostly because these things are designed to always cater to the user. An actual intelligence (and probably most people) would try to lead the user back to reality, get them help, or just disengage.

        However, non-crazy inputs also lead to crazy outputs with LLMs far too often.