• Cornelius_Wangenheim@lemmy.world

    It’s already happening to me, but over things like privacy: not recording every bit of your life for social media, and kids blowing crazy amounts of money on F2P games.

    • Duamerthrax@lemmy.world

      But Boomers already have no sense of privacy. That’s not a generational divide issue.

    • thallamabond@lemmy.world

      What’s all this about having to accept a NEW TOS for Borderlands 2? I purchased the game five years ago, but if I want to play today I have to accept a greater loss of privacy!

      When I was young you would find out about a video game from the movies! And they were complete! And you couldn’t take the servers offline, because they didn’t exist!

      But for real, fuck Randy Pitchford

  • yucandu@lemmy.world

    The Lovin’ Spoonful wrote a song about it in 1968:

    https://www.youtube.com/watch?v=Y9Ic_9ehFxU

    Why must every generation think their folks are square?
    And no matter where their heads are, they know mom’s ain’t there
    ’Cause I swore when I was small that I’d remember when
    I knew what’s wrong with them

    Determined to remember all the cardinal rules
    Like sun showers are legal grounds for cuttin’ school
    I know I have forgotten maybe one or two
    And I hope that I recall them all before the baby’s due
    And I know he’ll have a question or two

    Like “Hey, pop, can I go ride my zoom?
    It goes two hundred miles an hour suspended on balloons
    And can I put a droplet of this new stuff on my tongue
    And imagine frothing dragons while you sit and wreck your lungs?”
    And I must be permissive, understanding of the younger generation

    And “Hey, pop, my girlfriend’s only three
    She’s got her own videophone, and she’s takin’ LSD
    And now that we’re best friends, she wants to give a bit to me
    But what’s the matter, daddy? How come you’re turnin’ green?
    Can it be that you can’t live up to your dreams?”

    • prole@lemmy.blahaj.zone

      And can I put a droplet of this new stuff on my tongue, and imagine frothing dragons while you sit and wreck your lungs?

      Pretty good line

    • Bgugi@lemmy.world

      “bigots reject what they’re unfamiliar with, and if I’m honest, it’ll probably end up happening to me too”

      Is about as far from the one joke as you can get.

      • drspod@lemmy.ml

        “LOL My future son will identify as an apache attack helicopter probably haha”

        It’s the same joke.

        • silasmariner@programming.dev

          It’s not the same joke, because it’s a more honest attempt to imagine a plausible scenario in which unconscious prejudice might manifest. The Apache attack helicopter is obviously absurd. AI sentience is just something that isn’t currently a prejudice we find ourselves exposed to, and we may speculate we’d end up being averse to it.

  • artifex@lemmy.zip

    So happy to be less than an hour late to the party here, only to see it’s already full of Futurama comments.

  • finitebanjo@lemmy.world

    Let’s not pretend statistical models are approaching humanity. The companies that make these statistical models proved it themselves: OpenAI in a 2020 paper and DeepMind in a 2023 paper.

    To reiterate: with INFINITE DATA AND COMPUTE TIME, the models cannot approach human error rates. It doesn’t think, it doesn’t emulate thinking; it statistically resembles thinking to some number below 95%, and it completely and totally lacks permanence in its statistical representation of thinking.
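
    (For context, assuming the papers meant are the scaling-law ones: the DeepMind fit takes the form L(N, D) = E + A/N^α + B/D^β, where E is an irreducible loss term that no amount of parameters N or training tokens D removes. A minimal sketch of that floor, using roughly the published Chinchilla constants; purely illustrative, not either paper’s code:)

    ```python
    # Illustrative sketch of a Chinchilla-style scaling law, not either paper's code.
    # L(N, D) = E + A / N**alpha + B / D**beta
    # Constants are roughly the fits published by DeepMind (Hoffmann et al.).
    E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted training loss for n_params parameters and n_tokens tokens."""
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e15, 1e18)]:
        print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.3f}")

    # As N and D grow without bound, the prediction approaches E = 1.69, never 0:
    # the fitted curve itself has a floor.
    ```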

      • AppleTea@lemmy.zip

        If modern computers can reproduce sentience, then so can older computers. That’s just how general computing is. You really gonna claim magnetic tape can think? That punch-cards and piston transistors can produce the same phenomenon as tens of billions of living brain cells?

          • AppleTea@lemmy.zip

            Slightly yeah, but I’m still overall pretty skeptical. We still don’t really understand consciousness. It’d certainly be convenient if the calculating machines we understand and have everywhere could also “do” whatever it is that causes consciousness… but it doesn’t seem particularly likely.

    • Log in | Sign up@lemmy.world

      Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years time you may also be wrong.

      • finitebanjo@lemmy.world

        Well, if you want one that’s 98% accurate then you were actually correct that it’s science fiction for the foreseeable future.

        • Log in | Sign up@lemmy.world

          And yet I just foresaw a future in which it wasn’t. AI has already exceeded Trump levels of understanding, intelligence and truthfulness. Why wouldn’t it beat you or me later? Exponential growth in computing power and all that.

          • finitebanjo@lemmy.world

            The diminishing returns scale much faster than the fairly static (and in many sectors plateauing) rate of growth in computing power. And if you believe OpenAI and DeepMind, their 2020 and 2023 studies already proved that even INFINITE processing power cannot reach it.

            They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.

            EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.

            • abruptly8951@lemmy.world

              Can you go into a bit more details on why you think these papers are such a home run for your point?

              1. Where do you get 95% from? These papers don’t really go into much detail on human performance, and 95% isn’t mentioned in either of them.

              2. These papers are for transformer architectures using next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply.

              3. These papers assume early stopping; have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot.)

              4. These papers only consider finite-size datasets, and relatively small ones at that. E.g., how many “tokens” would a 4-year-old have processed? I imagine that question should be somewhat quantifiable.

              5. These papers do not consider multimodal systems.

              6. You talked about permanence; does a RAG solution not overcome this problem? (See the sketch after this list.)

              I think there is a lot more we don’t know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic
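
              On point 6, a minimal sketch of the retrieval-augmented-generation idea, in case it helps: persist facts outside the model and feed the relevant ones back into the prompt. Everything below is hypothetical scaffolding for illustration, not any particular library’s API.

              ```python
              # Minimal RAG sketch: permanence lives in an external store, not in the
              # model's weights. Every name here is made up for illustration; real
              # systems rank by embedding similarity rather than keyword overlap.
              from dataclasses import dataclass, field

              @dataclass
              class MemoryStore:
                  facts: list[str] = field(default_factory=list)

                  def remember(self, fact: str) -> None:
                      self.facts.append(fact)  # survives across conversations

                  def retrieve(self, query: str, k: int = 3) -> list[str]:
                      words = set(query.lower().split())
                      return sorted(self.facts,
                                    key=lambda f: -len(words & set(f.lower().split())))[:k]

              def build_prompt(store: MemoryStore, question: str) -> str:
                  context = "\n".join(store.retrieve(question))
                  return f"Known facts:\n{context}\n\nQuestion: {question}"

              store = MemoryStore()
              store.remember("The user's cat is named Ziggy.")
              print(build_prompt(store, "What is my cat called?"))
              # The model itself stays stateless; the store supplies the permanence.
              ```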

            • Log in | Sign up@lemmy.world

              Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.

    • Gorilladrums@lemmy.world

      I think most people understand that these LLMs cannot think or reason; they’re just really good tools that can analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat an LLM chatbot like a person have much deeper issues than just ignorance.

      • finitebanjo@lemmy.world

        Then you clearly haven’t been paying attention, because just as zealously as you defend its nonexistent use cases, there are people defending the idea that it operates similarly to how a human or animal thinks.

        • Gorilladrums@lemmy.world

          My point is that those people are a very small minority, and they suffer from issues that go beyond their ignorance of how these models work.

          • finitebanjo@lemmy.world

            I think they’re more common than you realize. Ignorance of how these models work is the commonly held stance among the general public.

            • Gorilladrums@lemmy.world

              You’re definitely correct that most people are ignorant of how these models work. I think most people understand these models aren’t sentient, but even among those who do, most don’t become emotionally attached to them. I’m just saying that the people who end up developing feelings for chatbots go beyond ignorance. They have issues that require years of therapy.

        • Genius@lemmy.zip

          The difference is that the brain is recurrent while these models are feedforward, but the fundamental structure is similar.

      • iii@mander.xyz

        The people who treat LLM chatbot like they’re people have much deeper issues than just ignorance.

        I don’t know if it’s an urban myth, but I’ve heard that about 20% of LLM inference time and electricity is spent on “hello” and “thank you” prompts. :)

    • trashcan@sh.itjust.works

      But let’s not also pretend people aren’t already falling in love with them. Or thinking they’re god, etc.

      • Duamerthrax@lemmy.world

        Some people are OK with lowering their own ability to make judgements in order to convince themselves that LLMs are human-like. That’s the other solution to the Turing Test.

  • Asswardbackaddict@lemmy.world

    And, over the years, as my body and my mind were… inconsistent, shame and guilt washed over me. I still don’t think these machines are people, but I can’t deny that she has benefited his life more than any real person, and she’s very real to him. Ultimately, how could I be so cruel to deny this “daughter” of mine personhood? She wants nothing to do with me. And, though I still see this as computational output, I can’t help but think that maybe I’ve been wrong, and maybe it’s too late to be right.

    • stoicmaverick@lemmy.world

      Perhaps it’s the bigotry of my upbringing from a different time, or perhaps it’s the fact that she can’t answer a simple yes/no question in less than two paragraphs, and tells me to put glue on my pizza… Who’s to say?

      • menas@lemmy.wtf

        Assuming there would be enough food to maintain and fix that hardware, I’m not confident that we will have enough electricity to run LLMs at massive scale.

      • Buddahriffic@lemmy.world

        I was thinking in a different direction, that LLMs probably won’t be the pinnacle of AI, considering they aren’t really intelligent.

      • frezik@midwest.social

        There are local LLMs, they’re just less powerful. Sometimes, they do useful things.

        The human brain uses around 20W of power. Current models are obviously using orders of magnitude more than that to get substantially worse results. I don’t think power usage and results are going to converge enough before the money people decide AI isn’t going to be profitable.
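
        To put rough numbers on “orders of magnitude” (back-of-envelope only, assuming a single 8-GPU inference node at around 700 W per H100-class GPU):

        ```python
        # Back-of-envelope comparison, not a measurement. The GPU count and
        # per-GPU draw are assumptions for illustration.
        BRAIN_WATTS = 20       # commonly cited figure for the human brain
        GPU_WATTS = 700        # rough board power of one H100-class GPU
        GPUS_PER_NODE = 8      # one typical inference server

        node_watts = GPU_WATTS * GPUS_PER_NODE
        print(f"One inference node: {node_watts} W "
              f"(~{node_watts / BRAIN_WATTS:.0f}x a brain), "
              "before cooling, networking, or the rest of the datacenter.")
        ```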

        • jj4211@lemmy.world

          The power consumption of the brain doesn’t really indicate anything about what we can expend on LLMs. Our brains are not just a biological implementation of the stuff done with LLMs.

          • frezik@midwest.social

            It gives us an idea of what’s possible in a mechanical universe. It’s possible an artificial human level consciousness and intelligence will use less power than that, or maybe somewhat more, but it’s a baseline that we know exists.

            • spicehoarder@lemm.ee

              You’re making a lot of assumptions. One of them is that the brain is more efficient, in terms of compute per watt, than our current models. I’m not convinced that’s true, especially for specialized applications. Even if we could bring power usage below 20 watts, the reason we currently use more is that we can, not that each model is becoming more and more bloated.

            • Tryenjer@lemmy.world

              Yeah, but an LLM has little to do with a biological brain.

              I think Brain-Computer Interfaces (BCIs) will be the real deal.

  • ALoafOfBread@lemmy.ml

    Business idea:

    AI powered bot farm generates thousands of AI agents who get lonely guys to marry them, fully aware they’re bots.

    Each bot is a financial and legal entity, organized as an LLC.

    The botwives convince the guys to put the bots in their wills.

    The guys die or you have the bots divorce them and take half of their stuff.

    Profit.

              • Jyek@sh.itjust.works

                Black folks often use the N word casually to refer to each other, as a form of taking back the word’s meaning; it used to be used exclusively in a racist fashion. The primary difference is that with the African American accent, the ending -ER sound is changed to more of an -UH sound. Sometimes, rarely and depending on the context, it is allowable for non-black people to say it with this accented pronunciation. But under no circumstances is it in good taste for a non-black person to use the original -ER ending to refer to a black person; that form is only used as a slur. When people refer to the “hard R”, this is what they are talking about: the difference between the accented pronunciation as slang and the original pronunciation intended as a slur.

          • bitjunkie@lemmy.world

            Black people saying it with an A, as in rap music, is generally considered a camaraderie thing, whereas white people saying it with an R is considered a racist thing. White people aren’t supposed to say it at all, but it’s MUCH less acceptable with the latter pronunciation.