Or my favorite quote from the article:

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • btaf45@lemmy.world · 4 months ago

    “I am a disgrace to my profession,” Gemini continued. “I am a disgrace to my family. I am a disgrace to my species.”

    This should tell us that an AI “thinks” like a human because it is trained on human words, but it doesn’t have the self-awareness to understand that it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn’t have any actual emotions, and there will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won’t always provoke that mimicry in an AI, because when humans feel emotions they don’t always write words to show it, and an AI only knows what humans write, which is not always what humans say or think.

    We all know the AI doesn’t have a family and is not a human species, but it talks about having a family because it is mimicking what it thinks a human might say. Part of the reason an AI will lie is that lying is something humans do, and it is trying to closely mimic human behavior. But an AI will also lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from an AI than from a human.

    • Harbinger01173430@lemmy.world · 4 months ago

      AI from the biggo cyberpunk companies that rule us sounds like a human most of the time because it’s An Indian (AI), not Artificial Intelligence.

    • ganryuu@lemmy.ca · 4 months ago

      You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what it is they are writing.

      • btaf45@lemmy.world · 4 months ago

        Can you explain why AIs always have a “confidently incorrect” stance instead of admitting they don’t know the answer to something?

        • Matty_r@programming.dev · 4 months ago

          Because it’s an autocomplete trained on typical responses to things. It doesn’t know right from wrong, just the next word based on statistical likelihood.
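
          A toy sketch of what “next word by statistical likelihood” means (the numbers are made up; a real model scores every token in its vocabulary with a neural network):

          import random

          # Each continuation competes only on probability. "I do not know" is just
          # another string, and it is rarely the highest-scoring one.
          next_word_probs = {
              "the answer is 4": 0.62,
              "the answer is 5": 0.21,
              "I do not know": 0.17,
          }

          choice = random.choices(list(next_word_probs),
                                  weights=list(next_word_probs.values()), k=1)[0]
          print(choice)  # usually an answer, sometimes a confidently wrong one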

            • Matty_r@programming.dev · 4 months ago

              Exactly. I’m oversimplifying it of course, but that’s generally how it works. It’s also not “AI” as in artificial intelligence in the traditional sense of the word; it’s machine learning. But the term has effectively had a semantic shift over the last couple of years because “AI” sounds cooler.

              Edit: just wanted to clarify that I’m talking about LLMs like ChatGPT etc.

        • ganryuu@lemmy.ca · 4 months ago

          I’d say that it’s simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.

          Again, they don’t know if they know the answer, they just say what’s the most statistically probable thing to say given your message and their prompt.

          • btaf45@lemmy.world · 4 months ago

            Again, they don’t know if they know the answer

            Then in that respect AIs aren’t even as powerful as an ordinary computer program.

            say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not.

            That was my guess too.

            • Aggravationstation@feddit.uk · 4 months ago

              Then in that respect AIs aren’t even as powerful as an ordinary computer program.

              No computer programs “know” anything. They’re just sets of instructions with varying complexity.

              • btaf45@lemmy.world · 4 months ago

                No computer programs “know” anything.

                Can you stop with the nonsense? LMFAO…

                if exists(thing) {
                    write(thing);
                } else {
                    write("I do not know");
                }

                • Aggravationstation@feddit.uk · 4 months ago

                  if exists(thing) {
                      write(thing);
                  } else {
                      write("I do not know");
                  }

                  Yea I see what you mean, I guess in that sense they know if a state is true or false.

  • Jesus@lemmy.world · 4 months ago

    Honestly, Gemini is probably the worst out of the big 3 Silicon Valley models. GPT and Claude are much better with code, reasoning, writing clear and succinct copy, etc.

      • jj4211@lemmy.world · 4 months ago

        The overall interface can, which leads to fun results.

        Prompt for image generation and you have one model doing the text and a different model doing the image. The text model pretends it is generating the image but has no idea what that image looks like, so you can make the text-and-image interaction make no sense, or it will do that all on its own. Have it generate an image, then lie to it about the image it generated, and watch it reveal that it has no idea what picture was actually shown, all the while pretending it does and never explaining that it’s actually delegating the image. It just says “I am correcting that for you.” Basically it talks like an executive at a company, which helps explain why so many executives are true believers.

        A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after using the LLM to normalize the math first.
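
        A toy sketch of that kind of routing (the regex “recognizer” and the tiny evaluator are hypothetical stand-ins, not how any real product does it):

        import ast, operator, re

        OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

        def calc(node):
            # Walk a parsed arithmetic expression and evaluate it exactly.
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](calc(node.left), calc(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("not plain arithmetic")

        def answer(prompt):
            if re.fullmatch(r"[\d\s+\-*/().]+", prompt):          # "looks mathy"
                return calc(ast.parse(prompt, mode="eval").body)  # hand off to the math engine
            return "(handed off to the language model)"           # stub for everything else

        print(answer("12 * (3 + 4)"))          # 84, computed rather than predicted
        print(answer("why is the sky blue?"))  # goes to the LLM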

      • panda_abyss@lemmy.ca · 4 months ago

        Yes, and this is pretty common with tools like Aider — one LLM plays the architect, another writes the code.

        Claude Code now has sub-agents, which work the same way but only use Claude models.
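
        The pattern, roughly (call_model here is a hypothetical stand-in, not Aider’s or Claude Code’s actual API):

        def call_model(model, prompt):
            ...  # whatever API you use to talk to the named model

        def architect_then_edit(task):
            # One model reasons about *what* should change...
            plan = call_model("strong-reasoning-model",
                              f"Plan the code changes for: {task}. Don't write the code yet.")
            # ...and a second model turns that plan into concrete edits.
            return call_model("fast-coding-model",
                              f"Apply this plan as concrete file edits:\n{plan}")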

    • panda_abyss@lemmy.ca · 4 months ago

      I always hear people saying Gemini is the best model, and every time I try it, it’s… not useful.

      Even as code autocomplete I rarely accept any of its suggestions. Google has a number of features in Google Cloud where Gemini can auto-generate things, and those are also pretty terrible.

      • Jesus@lemmy.world · 4 months ago

        I don’t know anyone in the Valley who considers Gemini to be the best for code. Anthropic has been leading the pack over the past year, and as a result, a lot of the most popular development and prototyping tools have hitched their wagons to Claude models.

        I imagine there are some things the model excels at, but for copywriting, code, image gen, and data vis, Google is not my first choice.

        Google is the “it’s free with G Suite” choice.

        • panda_abyss@lemmy.ca · 4 months ago

          There’s no frontier where I choose Gemini, except when it’s the only option or I need to be price sensitive through the API.

          • Jesus@lemmy.world · 4 months ago

            The interesting thing is that GPT-5 looks pretty price competitive with Gemini. It looks like they’re probably running at a loss to try to capture market share.

            • panda_abyss@lemmy.ca · 4 months ago

              I think Google’s TPU strategy will let them go much cheaper than other providers, but it’s impossible to tell how long the TPUs last and how long it takes to pay them off.

              I have not tested GPT-5 thoroughly yet.

        • cley_faye@lemmy.world · 4 months ago

          Don’t mention it! I’m glad I could help you with that.

          I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.

          /j, obviously. I hope.

      • katy ✨@piefed.blahaj.zone · 4 months ago

        me and my friend used to make them all the time :] i also went to summer computer camp for basic on old school radio shack computers :3

    • ThePowerOfGeek@lemmy.world · 4 months ago

      High five, me too!

      At that age I also used to speed-run little programs on the display computers in department stores. I’d write a little prompt welcoming a shopper and asking them their name, then a response that echoed back their name in some way. If I was in a good mood it was “Hi [name]!”. If I was in a snarky mood it was “Fuck off [name]!” The goal was to write it in about 30 seconds, before one of the associates came over to see what I was doing.

    • BD89@lemmy.sdf.org · 4 months ago

      Shit, at the rate MasterCard and Visa and Stripe want to censor everything and parent adults, we might not ever get GTA6.

      I’m tired, man.

  • Tracaine@lemmy.world · 4 months ago

    S-species? Is that… I don’t use AI - chat, is that a normal thing for it to say or nah?

  • HugeNerd@lemmy.ca · 4 months ago

    Suddenly trying to write small programs in assembler on my Commodore 64 doesn’t seem so bad. I mean, I’m still a disgrace to my species, but I’m not struggling.

        • funkless_eck@sh.itjust.works · 4 months ago

          From the depths of my memory: once you got a complex enough BASIC project, you were doing enough PEEKs and POKEs to just be writing assembly anyway.

          • HugeNerd@lemmy.ca · 4 months ago

            Sure, mostly to make up for the shortcomings of BASIC 2.0. You could use a bunch of different approaches for easier programming, like cartridges with BASIC extensions or other utilities. The C64 BASIC for example had no specific audio or graphics commands. I just do this stuff out of nostalgia. For a few hours I’m a kid again, carefree, curious, amazed. Then I snap out of it and I’m back in WWIII, homeless encampments, and my failing body.