Personally, I’ve seen this behavior a few times in real life, often with worrying implications. Generously, I’d like to believe these people use extruded text as a place to start thinking from, but in practice it seems to me that they tend to use extruded text as a thought-terminating behavior.

IRL, I find it kind of insulting, especially if I’m talking to people who should know better, or if they hand me extruded stuff instead of the work they were supposed to do.

Online it’s just sort of harmless reply-guy stuff usually.

Many people simply straight-up believe LLMs to be genie-like figures, as they are advertised and written about in the “tech” rags. That bums me out, sort of in the same way really uncritical religiosity bums me out.

HBU?

  • flandish@lemmy.world · 5 months ago

    I respond in ways like we did when Wikipedia was new: “Show me a source.” … “No, GPT is not a source. Ask it for its sources. Then send me the link.” … “No, Wikipedia is not a source; find the link used in that statement and send me that link.”

    If you make people at least have to acknowledge that sources are a thing you’ll find the issues go away. (Because none of these assholes will talk to you anymore anyway. ;) )

      • flandish@lemmy.world · 5 months ago

        Yep. 100% aware. That’s one of my points: showing it’s fake. Sometimes enlightening to some folks.

      • BlameTheAntifa@lemmy.world · 5 months ago

        Tracing and verifying sources is standard academic writing procedure. While you definitely can’t trust anything an LLM spits out, you can use one to track down certain types of sources more quickly than search engines. On the other hand, I feel that’s more of an indictment of the late-stage enshittification of search engines than some special strength of LLMs. If you have to use one, don’t trust it, demand supporting links and references, and verify absolutely everything.

        • BroBot9000@lemmy.world · 5 months ago

          I’ll still ask the person shoving AI slop in my face for a source or artist link, just to shame these pathetic attempts to pass along slop and misinformation.

          Edit for clarity

  • Strider@lemmy.world · 5 months ago

    A friend of mine who works in tech, and who is very well aware of what ‘AI’ is, is a big fan. He runs his own bots and stuff for personal use and thinks he has the situation under control.

    Meanwhile, he relies more and more on the ‘benefits’.

    My fear is that he won’t be aware of how the LLM-interpreted output might change him. It’s kind of a deal-with-the-devil situation.

    I hope I am wrong.

  • Catoblepas@piefed.blahaj.zone · 5 months ago

    It annoys me on social media, and I wouldn’t know how to react if someone did that in front of me. If I wanted to see what the slop machine slopped out, I’d go slop-raking myself.

  • shalafi@lemmy.world · 5 months ago

    If I reply with an AI summary or answer, I say so and fact-check if need be. Nothing wrong with that.

    OTOH, Lemmy seems to think there is. I replied with a long post from ChatGPT that was spot on and included a couple of items I had forgotten, all factual.

    Lemmy: FUCK YOU!

    • stabby_cicada@slrpnk.net · 5 months ago

      The thing is, I think, anybody who wants an answer from ChatGPT can ask ChatGPT themselves - or just Google it and get the AI answer there. People ask questions on social media because they want answers from real people.

      Replying to a Lemmy post with a ChatGPT generated answer is like replying with a link to a Google search page. It implies the question isn’t worth discussing - that it’s so simple the user should have asked ChatGPT instead of posting it. I agree with the OP - it’s frankly a little insulting.

      • hedgehog@ttrpg.network · 5 months ago

        There’s a difference between an answer from ChatGPT and an answer from ChatGPT that’s been reviewed by a person, particularly if that person is knowledgeable about the topic. ChatGPT isn’t deterministic, so if I go and ask it the same thing, there’s no guarantee I’ll get a similar answer at all.

        The problem for me is that I have no way of knowing whether the person posting the ChatGPT response is or isn’t an expert, and whether they actually reviewed the output. However, that’s true of people in general; just replace “reviewing the output” with “not trolling,” and the effort to assess the utility of a comment is pretty similar.

  • BlackRoseAmongThorns@slrpnk.net · 5 months ago

    It’s absolutely insulting and infuriating, and I want to grab them and slap them more than a couple of times.

    I’m in my first year at university, studying software engineering, and sometimes I like doing homework with friends, because calculus and linear algebra are hard on my brain and I specifically went to uni to understand the hard parts.

    Not once, not twice: I’ve asked a friend for help with something, only for them to open the dumbass chatbot, ask it how to solve the question, and believe the answer like it’s Moses coming down with the commandments. Then they give me the same explanation, full of orgasmic enthusiasm, until I go “applying that theorem in the second step is invalid” or “this contradicts an earlier conclusion”. Then they shut their fucking brains off, tell monsieur shitbot its mistake, and once again explain the bot’s output to me, word for word, like I’m a child (I’d say mansplaining, because honest to god it looked and sounded the same, but I’m also a man, so…).

    This doesn’t stop at 3 or 4 times, I could only wish. Once, I got curious and burnt an hour like that with the same guy, on the same question, in the same prompt streak.

    Like, after the 7th time, they don’t understand that they’re in more trouble than me, and they still talk like they have a PhD.

    So I’ll sum up:

    • they turn off their brain
    • they make bot think for them
    • they believe bot like gospel
    • they swear bot knows best
    • they’re shown it does not know shit
    • “just one more prompt bro i swear bro please one more time bro i swear bro Claude knows how to solve calculus bro it has LaTeX bro so it probably knows bro please bro one more prompt bro-”

  • Aqarius@lemmy.world · 5 months ago

    Absolutely. People will call you a bot, then vomit out an argument ChatGPT gave them without even reading it.

  • HollowNaught@lemmy.world · 5 months ago

    More than a few of my work colleagues will search something up and then blindly trust the AI summary.

    It’s infuriating

  • Ecco the dolphin@lemmy.ml · 5 months ago

    It happened to me here on Lemmy.

    Far too many people defended it. I could have asked an AI myself, but I preferred a human, which is the point of this whole Lemmy thing.

  • ThisIsNotHim@sopuli.xyz · 5 months ago

    Slightly different, but I’ve had people insist on slop.

    A higher-up at work asked the difference between i.e., e.g., and ex. I answered; they weren’t satisfied and made their assistant ask the large language model. The assistant read the reply out loud, and it was near verbatim what I had just told them. Ugh.

    This is not the only time this has happened.

      • ThisIsNotHim@sopuli.xyz · 5 months ago

        I.e. is used to restate for clarification. It doesn’t really relate to the other two, and should not be used when multiple examples are listed or could be listed.

        E.g. and ex. are both used to start a list of examples. They’re largely equivalent, but should not be mixed. If your organization has a style guide consult that to check which to use. If it doesn’t, check the document and/or similar documents to see if one is already in use, and continue to use that. If no prior use of either is found, e.g. is more common.

        • deaddigger@sh.itjust.works · 5 months ago

          Thanks

          So i.e. would be like “the most useful object in the galaxy, i.e. a towel”

          And e.g. would be like “companies, e.g. Meta, Viatris, Ehrmann, Edeka” Right?

          • ThisIsNotHim@sopuli.xyz · 5 months ago

            Exactly. If you’ve got a head for remembering Latin, i.e. is id est, so you can try swapping “that is” into the sentence to see if it sounds right.

            E.g. is exempli gratia, so you can try swapping “for example” in for the same trick.

            If you forget, avoiding the abbreviations is fine in most contexts. That said, I’d be surprised if mixing them up makes any given sentence less clear.

  • AbsolutelyNotAVelociraptor@sh.itjust.works · 5 months ago

    Ffs, I had one of those at work.

    One day, we bought a new water sampler. The thing is pretty complex and requires a licensed technician from the manufacturer to come and commission it.

    Since I was overseeing the installation and would later be the person responsible for connecting it to our industrial network, I had quite a few questions about the device, some of them very specific.

    I swear the guy couldn’t give me even the most basic answers about the device without asking ChatGPT. And at a certain point, I had to answer one question myself by reading the manual (which I downloaded on the go, because the guy didn’t have a paper copy), because ChatGPT couldn’t give him an answer. This guy was hired as an “expert” by the company making the water sampler, mind you.

    • flandish@lemmy.world · 5 months ago

      Assuming you were in meatspace with this person, I am curious: did they, like… open GPT mid-convo with you to ask it? Or say “brb”?

      • AbsolutelyNotAVelociraptor@sh.itjust.works · 5 months ago

        Since I was inspecting the device (it’s a somewhat big object, similar to a fridge), I didn’t realize at first, because I wasn’t looking at him. I noticed the ChatGPT thing when, at a certain question, I was standing next to him and he shamelessly, phone in hand, typed my question into ChatGPT. That was when he couldn’t give me the answer and I had to look for the product manual on the internet.

        Funniest thing was when I asked something I couldn’t find in the manual and he told me, and I quote, “if you manage to find out, let me know the answer!” Like, dude? You are the product expert? I should be the one saying that to you, not the other way around!

  • lemmyknow@lemmy.today · 5 months ago

    I’ve used an LLM once in a silly discussion. We were playing some game, and I had lost. But I argued not. So to prove I was factually correct, I asked an LLM. It did not exactly agree with me, so I rephrased my request, and it then agreed with me, which I used as proof that I was right. I guess the other person bought it, but it wasn’t anything that important (I don’t recall the details).

  • lapes@lemmy.zip · 5 months ago

    I work in customer support, and it’s very annoying when someone pastes generic GPT advice on how I should fix their issue. That stuff is usually irrelevant or straight-up incorrect.

  • Arthur Besse@lemmy.ml · 5 months ago

    I’ve had friends and colleagues I respect, who I really thought would know better, do this.

    To say it bums me out would be a massive understatement :(

  • galoisghost@aussie.zone · 5 months ago

    The worst thing is when you see the AI summary repeated word for word on content-farm sites that appear in the result list. You know that just reinforces the AI summary’s validity for some users.