• Credibly_Human@lemmy.world · 18 days ago

    It's this type of knee-jerk, reactionary opinion that I think will ultimately let the worst of the worst AI companies win.

    Whether an LLM says 'I' or not literally does not matter at all. It's not relevant to any of the problems with LLMs/generative AI.

    It doesn’t even approach discussing/satirizing a relevant issue with them.

    It's basically satire of a strawman who thinks LLMs are closer to being people than anyone, even the most AI-bro of AI bros, actually thinks they are.

    • Sunsofold@lemmings.world · 18 days ago

      No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning. Phrasing LLM output as ‘I think…’ or ‘I am…’ makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. That framing is part of the trick the AI bros are pulling. Making it harder for the outputs to sustain the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.

      • Credibly_Human@lemmy.world · 18 days ago

        No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is when people perceive it as an entity saying something that has meaning.

        This has to be the least informed take I have seen on anything ever. It literally dismisses all the most important issues with AI and pretends the “real” problem (as if only one mattered) is people misunderstanding it in a way I see no one actually doing.

        It’s clear to me you must be so deep into an anti-AI bubble that you have no idea how people who use AI think about it, how it’s used, why it’s used, or what the problems with it are.

        • Sunsofold@lemmings.world · 17 days ago

          What do you think the most important issues with AI are? I see a lot of ‘you’re wrong’ but no indication as to how or why.

          • Credibly_Human@lemmy.world · 17 days ago

            Why would I need to give you a list to point out what is wrong with your statement?

            They're obvious, though:

            • Copyright issues with the sale of AI services

            • Worker displacement without proper social systems to support those displaced

            • Unclear biases within black box systems

            • The requirement to change education based on their existence

            • The environmental damage caused by the energy used in training

            Quite frankly, the list is long. Longer than this, even.

            • Sunsofold@lemmings.world · 16 days ago (edited)

              Why…?

              Because why bother saying anything if you aren’t going to say anything? Offering correct information gives the other person a chance to correct themselves and improve. Just saying ‘WRONG!’ is a slap in the face that only serves to let you feel superior; it’s masturbatory pretense.

              As for the rest, those are all clearly issues, but none of them are of a sort where handling them and handling the one I raised are mutually exclusive. And the second item, at least, actually follows from the one I mentioned: people being tricked into thinking LLMs are capable of thought contributes to decision-makers believing that people can simply be replaced. Viewing the systems as intelligent is a big part of what makes people trust them enough to blindly accept biases in the results. Ideally, I’d say AI should be kept purely in the realm of research until it’s developed enough for isolated use as a tool, but good luck getting that to happen. Post hoc adjustments are probably the best we can hope for, and my little suggestion is a fun way to at least try to mitigate some of the effects. It’s certainly more likely to address some element of the issues than just saying ‘WRONG!’

              The fun part is, while the issues you mentioned all have the potential to create broad, hard-to-define harm if left unchecked, there are already examples of direct harm coming from people treating LLM outputs as meaningful.