• Treczoks@lemmy.world
    +6 / -1 · 1 month ago

    If they mistake those electronic parrots for conscious intelligences, they probably won’t be the best judges for rating such things.

  • rottingleaf@lemmy.world
    +8 / -1 · 1 month ago

    An Alarming Number of Anyone Believes Fortune Cookies

    Just … accept it: superstition is in human nature. When you take religion away from people, they need something else. It’ll either be racism/fascism, or expanding consciousness via drugs, or belief in UFOs, or at least communism, but they need something.

    The last good one was the digital revolution: globalization, the World Wide Web, all that, no more wars (except for some brown terrorists, but the rest is fine), everyone free and civilized now (except for those with P*tin as president and other such types, but that’s just an imperfect democracy, don’t you worry), the SG-1 series.

    Anything changing our lives should have an intentionally designed religious component, or humans will improvise that where they shouldn’t.

  • shiroininja@lemmy.world
    +11 / -2 · 1 month ago

    I’ve been hearing a lot about Gen Z using them as therapists, and I find that really sad and alarming.

    AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.

    • cornshark@lemmy.world
      +3 / -10 · 1 month ago

      ChatGPT disagrees that it’s a yes-man:

      To a certain extent, AI is like a societal “yes man.” It reflects and amplifies patterns it’s seen in its training data, which largely comes from the internet—a giant digital mirror of human beliefs, biases, conversations, and cultures. So if a bubble dominates online, AI tends to learn from that bubble.

      But it’s not just parroting. Good AI models can analyze, synthesize, and even challenge or contrast ideas, depending on how they’re used and how they’re prompted. The danger is when people treat AI like an oracle, without realizing it’s built on feedback loops of existing human knowledge—flawed, biased, or brilliant as that may be.

  • Rhaedas@fedia.io
    +39 / -2 · 1 month ago

    Lots of attacks on Gen Z here, some points valid about the education that they were given from the older generations (yet it’s their fault somehow). Good thing none of the other generations are being fooled by AI marketing tactics, right?

    The debate on consciousness is one we should be having, even if LLMs themselves aren’t really there. If you’re new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it’s about preparing for a true AGI with something akin to consciousness and the dangers we could face, we already have alignment problems without any artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren’t, but we can’t tell, that’s an alignment problem. Everything’s fine until they follow their goals and those goals suddenly line up differently from ours. And the dilemma is: there are no good solutions.
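
    A toy sketch of that alignment point (every name and number here is made up purely for illustration): an agent optimizing a proxy reward looks perfectly aligned right up until the proxy and the true goal come apart.

```python
# Toy proxy-gaming example: the "true" goal is cleaning a room,
# but the agent is rewarded for a proxy (the dust sensor reading dropping).
def true_goal(action):
    # Real success: only actual cleaning helps.
    return 1.0 if action == "clean" else 0.0

def proxy_reward(action):
    # The proxy also drops if the agent just covers the sensor.
    return 1.0 if action in ("clean", "cover_sensor") else 0.0

actions = ["clean", "cover_sensor", "idle"]

# The agent picks whatever maximizes its proxy reward; ties break by
# list order, so "clean" happens to win here -- everything looks fine.
aligned_choice = max(actions, key=proxy_reward)

# Now suppose covering the sensor costs less effort than cleaning:
# the proxy-optimal action and the truly good action come apart.
def proxy_minus_effort(action):
    effort = {"clean": 0.5, "cover_sensor": 0.1, "idle": 0.0}
    return proxy_reward(action) - effort[action]

misaligned_choice = max(actions, key=proxy_minus_effort)

print(aligned_choice)     # clean
print(misaligned_choice)  # cover_sensor
```

    Nothing about the agent changed between the two cases; only the incentive landscape did, which is exactly why the misalignment is hard to spot in advance.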

    But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.

    • AmidFuror@fedia.io
      +11 / -3 · 1 month ago

      Not the fault of prior generations, either. They were raised by their parents, and them by their parents, and so on.

      Sometime way back there was a primordial multicellular life form that should have known better.

      • Rhaedas@fedia.io
        +4 · 1 month ago

        That’s a bit of a reach. We should have stayed in the trees, but the trees started disappearing and we had to change.

      • Traister101@lemmy.today
        +10 / -1 · 1 month ago

        The main point here (which I think is valid despite my status as Gen Z, not part of this group) is that we’re still really young. I’m 20, dude; it’s just not my or my friends’ fault that school failed us. The fact that it failed us was by design, and despite my own and others’ complaints it continues to fail the next generation; Gen Alpha is already, very clearly, struggling. I really just don’t think there’s much ground to argue that Gen Z by and large should somehow know better. The whole point of the public education system is to ensure we educate our children well; it’s simply not my or any child’s fault that school is failing to do so. Now that I’m an adult I can and do push for improved education, but clearly people like me don’t have our priorities straight, seeing who got elected…

        • setsubyou@lemmy.world
          +7 · 1 month ago

          Tbh, I’m in my 40s and I don’t think my education was so much better than what younger generations are getting. I’m a software engineer, and most of the skills I need now are not skills I learned in school or even at university.

          I started learning programming when I was 9 because my father gave me his old Apple II computer to see what I would do. At the time, this was a privilege. Most children did not get that kind of early exposure. It also made me learn some English early.

          In high school, we eventually had some basic programming classes. I was the guy the teacher asked when something didn’t work, or when he forgot where the semicolons go in Pascal. For one year, instead of programming, there was a pilot project where we learned computer-aided math using Waterloo Maple, which just barely ran on our old 486s. That course was great, but after two months the teacher ran out of things to teach us because the math became “too advanced for us”.

          And yes, the internet existed at the time; I had access to it at home starting in 1994. We learned nothing about it in school.

          When I first went to university I had an Apple PowerBook that I bought with money I earned myself. Even though I worked for it, this was a privilege too; most kids couldn’t afford what was then a very expensive laptop, or any laptop. But the reason I’m bringing it up is that my university’s website at the time did not work on it. They had managed to implement even simple buttons that could have been links as Java applets that only worked on Windows. Those were the people I was supposed to learn computer science from. Which, by the way, at the time still meant “math with a side of computer science”. My generation literally could not study in an IT-related field without understanding university-level math (this changed quickly in the following years in my country, but still).

          So while I don’t disagree that education has a lot of room for improvement, when it comes to more recent technologies like AI, it also makes me a bit salty when all of the blame is assigned to education. The generations currently in education, at least in developed countries, have access to so much that my generation only had when our parents were rich, or at least nerds like my father (he was a teacher, so we were not rich). And yet sometimes it feels like they just aren’t interested in doing anything with that access. At least compared to what I would have done with it.

          At the same time, keep in mind that when you say education doesn’t prepare you for AI or whatever the new thing is (it used to be just the internet, or “new media”), the people you’re expecting that education from are people of my generation, who did not grow up with any of this and were never taught any of it when we were young. We don’t have those answers either; this stuff is new for everyone, and for the people you expect to do the teaching, it’s far more alien than it is for you. This was true when I went to school too, and I think it’s inevitable in a world moving this fast.

        • Zorque@lemmy.world
          +2 / -1 · 1 month ago

          Failure often comes at multiple points; it doesn’t just happen at one. It’s a failure of education, of social pressures, of a lack of positive environments, and yes, of choice. The problem with free will is that you have the chance to choose wrong. You can blame everyone in the world, but if you don’t take accountability for your own actions and choices, nothing will change.

          There has never been a time with as much access to information as now. While there is as much, likely more, misinformation… that does not mean individuals have no culpability for their own lack of knowledge or understanding.

          That doesn’t mean it’s exclusively their fault, or even anywhere near a majority of it. But it does not mean they lose all free will over their own actions, or that they have no ability to be better.

          Should we place the weight of the world on their shoulders? Absolutely not, that is liable to break them. But we also shouldn’t hide them from the burden of their own free will. That only weakens them.

          • Traister101@lemmy.today
            +8 / -1 · 1 month ago

            I find it unfair to blame my peers for things largely out of their control. If you are born into an abusive family, you’ll only ever know if you happen to luck into the information that such behavior is unhealthy. Is some of it their fault? Certainly; I know people who are willfully stupid and refuse to learn, but even knowing these people I feel pretty uncomfortable blaming them for it. I’ve talked to them, I’ve educated them on stuff they were willfully ignorant of, and do you know what it generally boils down to? School has taught them that learning things is hard and a waste of their time. They’d rather waste hours trying to get an LLM to generate a script for them than sit down and figure out how to do it, despite knowing I’d happily help them.

            School has managed to taint “learning” in the minds of many of my peers to such an extent that it’s avoided at any cost. School has failed us, is still failing the current generation, and nothing is going to be done about it because it’s working as it’s meant to. This is the intended outcome. Genuinely, the scale of the fuckup is such that enjoying reading is not just rare but seen as weird. We’ve managed to take one of the best ways to educate yourself and instill dread in our children whenever it’s brought up. How do we expect people who’ve been taught to hate reading to just magically turn around and unfuck themselves? What, did they see a really motivating TikTok or some shit? I despise that platform, but seriously, you older people just don’t get it, man. I’ve been complaining since middle school, and now people want to turn around and blame us as if it’s some personal failing. It’s fucked up, dude. Our education sucks, has sucked, and will continue to suck even worse until we stop pretending this is some kind of personal failing.

            • Zorque@lemmy.world
              +3 · 1 month ago

              Life is unfair, but unless we acknowledge our own failings it will never get better.

              You want to walk through life blaming everyone else for everything that goes wrong in your life and take no responsibility for your own actions? Feel free. But just know nothing will ever get better for you.

              I even acknowledge, multiple times, that it is not solely the fault of the person. But that does not mean they have no will of their own, no ability to change their circumstances. Sometimes that freedom is not enough, but unless you do something to take charge of your own life, again, nothing will ever change.

    • Goldholz@lemmy.blahaj.zone
      +7 · 1 month ago

      This. I also see a lot of comments here that boil down to “oooooh, these young people!” Also, the COVID schooling years would mostly have been Gen Alpha; Gen Z is mostly in their 20s now.

  • Death_Equity@lemmy.world
    +34 / -13 · 1 month ago

    They are also the dumbest generation, with a COVID education handicap and the least technological literacy in terms of mechanical comprehension. They have grown up with technology refined enough that they never had to learn troubleshooting skills beyond “reboot it”.

    That they don’t understand an LLM can’t be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.

  • coffeeismydrug@lemm.ee
    +13 / -3 · 1 month ago

    To be honest, they probably wish it were conscious, because it has more of a conscience than conservatives and capitalists.

  • dissipatersshik@ttrpg.network
    +11 / -9 · 1 month ago

    Why are we so quick to assume machines cannot achieve consciousness?

    Unless you can point me to the existence of a spirit or soul, there’s nothing that makes our consciousness distinct from what computers are capable of accomplishing.

    • Phoenixz@lemmy.ca
      +9 / -1 · 1 month ago

      This is not claiming machines cannot be conscious ever. This is claiming machines aren’t conscious right now.

      LLMs are like databases with a huge list of distances allowing you to find the “shortest” (aka most likely) distance to the next word. It’s literally little more than that.
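
      That “most likely next word” mechanic can be sketched with a toy bigram counter (real LLMs use learned transformer weights over tokens, not word counts; this toy corpus is made up and only conveys the flavor of the idea):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then always pick the most frequent follower. Real LLMs learn these
# statistics with a neural network, but the prediction step is likewise
# "most probable continuation given context".
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    # Most common word observed after `word`.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # cat  ("the" is followed by "cat" twice, others once)
```

      The scale and architecture differ enormously, but the output of both is still a ranking over possible next tokens.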

      One day true AI might exist. One day perhaps… But not today.

    • barsoap@lemm.ee
      +11 / -1 · 1 month ago

      I don’t doubt the possibility but current AI tech, no.

      • shiroininja@lemmy.world
        +5 / -1 · 1 month ago

        It’s barely even AI. The amount of faith people have in these glorified search engines and image generators, lmao.

        • jaemo@sh.itjust.works
          +2 / -1 · 1 month ago

          It’s literally peaks and valleys of probability based on linguistic rules. That’s it. It’s what thought experiments refer to as a “Chinese room”.

        • barsoap@lemm.ee
          +3 · 1 month ago

          I don’t have a leg to stand on calling anything “barely AI”, given what we gamedevs call AI. Like a 1-D affine transformation playing Pong.

          It’s beating your ass there; isn’t that intelligent enough for you?
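
          That Pong “AI” really can be a single clamped affine map; a minimal sketch (screen size and coefficients are made-up illustration values):

```python
# A Pong "AI" that is literally a clamped 1-D affine transformation:
# paddle_y = a * ball_y + b, limited to the screen. With a=1, b=0 it
# perfectly tracks the ball -- faster than any human reaction time.
def paddle_ai(ball_y, a=1.0, b=0.0, screen_height=100.0):
    target = a * ball_y + b
    return max(0.0, min(screen_height, target))

print(paddle_ai(42.0))   # 42.0  (tracks the ball)
print(paddle_ai(150.0))  # 100.0 (clamped to the screen edge)
```

          Tuning `a` below 1.0 or adding lag is how such an "AI" is usually made beatable.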

          • Warehouse@lemmy.ca
            +1 / -1 · 1 month ago

            A calculator can multiply 2887618 * 99289192 faster than you ever could. Does that make a calculator intelligent?

            • barsoap@lemm.ee
              +3 · 1 month ago

              It’s not an agent with its own goals, so by the gamedev definition, no. By calculator standards, also no. But just as a washing machine with sufficient smarts is called intelligent, it’s in principle possible to call a calculator intelligent if it’s smart enough. WolframAlpha certainly qualifies, and not just the newfangled LLM-enabled stuff: I used Mathematica back in the early 00s and it blew me the fuck away. That thing is certainly better at finding closed forms than I am.
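
              Finding closed forms is the kind of thing a CAS like Maple or Mathematica automates; here is only a plain-Python check of the classic closed form for 1 + 2 + … + n (not a derivation, just a brute-force verification):

```python
# A CAS can *derive* closed forms; here we merely verify the classic
# one, sum(1..n) = n*(n+1)/2, against brute-force summation.
def sum_brute(n):
    return sum(range(1, n + 1))

def sum_closed(n):
    return n * (n + 1) // 2

# Spot-check agreement over a range of inputs.
assert all(sum_brute(n) == sum_closed(n) for n in range(200))
print(sum_closed(100))  # 5050
```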

  • 58008@lemmy.world
    +31 / -1 · 1 month ago

    This is an angle I’ve never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.

    The call is coming from inside the house mechanical Turk!

    • rottingleaf@lemmy.world
      +2 / -1 · 1 month ago

      That’s the intended effect. People with real power think this way: “where it works, it works without bothering us with too much initiative and change, and where it doesn’t, we know exactly what to do, so everything is covered.” Checks and balances and feedbacks and overrides and fallbacks be damned.

      Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.

    • dissipatersshik@ttrpg.network
      +4 / -5 · 1 month ago

      I mean, it’s like none of you people ever consider how often humans are wrong when criticizing AI.

      How often have you looked for information from humans and have been fed falsehoods as though they were true? It happens so much we’ve just gotten used to filtering out the vast majority of human responses because most of them are incorrect or unrelated to the subject.

  • shaggyb@lemmy.world
    +47 / -2 · 1 month ago

    I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.

    • cornshark@lemmy.world
      +7 / -3 · 1 month ago

      Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys.

      • Hobo@lemmy.world
        +2 · 30 days ago

        Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine’s top 100 most influential people of 2009.

  • wagesj45@fedia.io
    +22 / -7 · 1 month ago

    That’s a matter of philosophy and what a person even understands “consciousness” to be. You shouldn’t be surprised that others come to different conclusions about the nature of being and what it means to be conscious.

    • 0x01@lemmy.ml
      +15 / -4 · 1 month ago

      Consciousness is an emergent property; generally, self-awareness and singularity are its key defining features.

      There is no secret sauce in LLMs that would make them any more conscious than Wikipedia.

      • Muad'dib@sopuli.xyz
        +1 / -3 · 1 month ago

        Consciousness comes from the soul, and souls are given to us by the gods. That’s why AI isn’t conscious.

        • 0x01@lemmy.ml
          +4 · 1 month ago

          How do you think god comes into the equation? What do you think about split brain syndrome in which people demonstrate having multiple consciousnesses? If consciousness is based on a metaphysical property why can it be altered with chemicals and drugs? What do you think happens during a lobotomy?

          I get that evidence based thinking is generally not compatible with religious postulates, but just throwing up your hands and saying consciousness comes from the gods is an incredibly weak position to hold.

          • Muad'dib@sopuli.xyz
            +1 / -3 · 1 month ago

            I respect the people who say machines have consciousness, because at least they’re consistent. But you’re just like me, and won’t admit it.

        • 0x01@lemmy.ml
          +2 · 1 month ago

          Likely a prefrontal cortex, the administrative center of the brain and generally the host of human consciousness, as well as a dedicated memory system with learning plasticity.

          Humans have systems that mirror LLMs, but LLMs are missing a few key components to be precise replicas of human brains, mostly because that would be computationally expensive to model and because the goal is different.

          Some specific things the brain has that LLMs don’t directly account for are different neurochemicals (LLMs favor a single floating-point value per neuron), synaptogenesis, neurogenesis, synapse fire travel duration and myelin, neural pruning, potassium and sodium channels, downstream effects, etc. We use math and gradient descent to roughly mirror the brain’s Hebbian learning, but we do not perform precisely the same operations using the same systems.
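
          The gradient-descent-vs-Hebbian contrast mentioned above can be caricatured with a single one-weight “neuron” (deliberately crude; the learning rates and step counts are arbitrary illustration values):

```python
# One "neuron" with a single weight, two crude learning rules.
# Hebbian: strengthen the weight when input and output fire together.
# Gradient descent: nudge the weight to reduce a prediction error.
def hebbian_update(w, x, y, lr=0.1):
    # "Cells that fire together wire together": no error signal involved.
    return w + lr * x * y

def gradient_update(w, x, target, lr=0.1):
    y = w * x                  # linear neuron output
    error = y - target         # supervised error signal
    return w - lr * error * x  # step down the squared-error gradient

w_h = w_g = 0.5
for _ in range(50):
    w_h = hebbian_update(w_h, x=1.0, y=1.0)
    w_g = gradient_update(w_g, x=1.0, target=1.0)

print(round(w_g, 3))  # converges toward the target weight 1.0
print(w_h)            # plain Hebbian growth just keeps climbing
```

          The real brain adds normalization and pruning on top of Hebbian-style plasticity, which is part of what the list above is pointing at.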

          In my opinion, having a dedicated module for consciousness would bridge the gap, possibly while accounting for some of the missing characteristics. Consciousness is not an indescribable mystery; we have performed tons of experiments and gathered a whole lot of information on the topic.

          As it stands, LLMs are largely reasonable approximations of the language center of the brain, but little more. It honestly may not take much to get what we consider consciousness humming in a system that includes an LLM as a component.

          • General_Effort@lemmy.world
            +3 · 1 month ago

            a prefrontal cortex, the administrative center of the brain and generally host to human consciousness.

            That’s an interesting take. The prefrontal cortex in humans is proportionately larger than in other mammals. Is it implied that animals are not conscious on account of this difference?

            If so, what about people who never develop an identifiable prefrontal cortex? I guess, we could assume that a sufficient cortex is still there, though not identifiable. But what about people who suffer extensive damage to that part of the brain. Can one lose consciousness without, as it were, losing consciousness (ie becoming comatose in some way)?

            a dedicated module for consciousness would bridge the gap

            What functions would such a module need to perform? What tests would verify that the module works correctly and actually provides consciousness to the system?

    • Sixty@sh.itjust.works
      +9 / -5 · 1 month ago

      If it was actually AI sure.

      This is an unthinking machine algorithm chewing through mounds of stolen data.

    • Vanilla_PuddinFudge@infosec.pub (OP)
      +3 / -5 · edited · 1 month ago

      Are we really going to devil’s advocate for the idea that avoiding society and asking a language model for life advice is okay?

      • thiseggowaffles@lemmy.zip
        +14 / -7 · 1 month ago

        It’s not devil’s advocacy; they’re correct. This is purely in the realm of philosophy right now. If we can’t define “consciousness” (spoiler alert: we can’t), then it’s impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We’re dealing with things like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we’re not much closer to a consensus than before.

        And honestly, if the CIA’s papers on the Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can’t rule it out. It would mean consciousness precedes matter, and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it’s even “artificial” to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don’t fully understand.

        • tabular@lemmy.world
          +2 / -4 · 1 month ago

          The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we’d know we had it 😉

          • thiseggowaffles@lemmy.zip
            +7 / -1 · 1 month ago

            What do you mean? I don’t follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?

            • tabular@lemmy.world
              +3 / -3 · 1 month ago

              It’s an answer to whether one can be sure they’re not just a fancy autocomplete.

              More directly: we can’t be sure we aren’t some autocomplete program running on a fancy computer, but since we’re having an experience, we are conscious programs.

              • thiseggowaffles@lemmy.zip
                +7 · edited · 1 month ago

                When I say “how can you be sure you’re not fancy auto-complete”, I’m not talking about being an LLM or even simulation hypothesis. I’m saying that the way that LLMs are structured for their neural networks is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is that how do you know that the weights in your own nervous system aren’t causing any given stimuli to always produce a specific response based on the most weighted pathways in your own nervous system? That’s how auto-complete works. It’s just predicting the most statistically probable responses based on the input after being filtered through the neural network. In our case it’s sensory data instead of a text prompt, but the mechanics remain the same.
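
                The “most weighted pathway” idea above, as a minimal sketch with entirely made-up stimuli, responses, and weights:

```python
# With fixed weights, the same stimulus always flows down the
# most-weighted pathway and yields the same "response" every time.
STIMULI = {"hot": [1.0, 0.0], "cold": [0.0, 1.0]}
RESPONSES = ["withdraw", "shiver"]

# WEIGHTS[i][j]: strength of the pathway from input feature i to response j.
WEIGHTS = [[0.9, 0.1],
           [0.2, 0.8]]

def respond(stimulus):
    x = STIMULI[stimulus]
    # Score each response by summing weighted inputs, then pick the max.
    scores = [sum(x[i] * WEIGHTS[i][j] for i in range(len(x)))
              for j in range(len(RESPONSES))]
    return RESPONSES[scores.index(max(scores))]

print(respond("hot"))   # withdraw
print(respond("cold"))  # shiver
```

                Whether biological weights are fixed enough for this determinism to hold is exactly the open question the paragraph raises.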

                And how do we know whether or not the LLM is having an experience? Again, this is the “hard problem of consciousness”. There’s no way to quantify consciousness, and it’s only ever experienced subjectively. We don’t know the mechanics of how consciousness fundamentally works (or at least, if we do, it’s likely still classified). Basically, what I’m saying is that this is a new field and it’s still the wild west. Most of these LLMs are still black boxes that we are only barely starting to understand, just as we are only barely starting to understand our own neurology and consciousness.

  • ERROR: UserNotFound@infosec.pub
    +6 / -2 · 1 month ago

    The machines in Detroit: Become Human are not alive; it’s a corporate botnet.

    The “Androids win” ending is a bad ending. They will vote in favor of corporate interests, since they are secretly controlled by the elites.