Looks so real!

  • Ex Nummis@lemmy.world · 4 days ago

    As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterintuitive.

    • finitebanjo@piefed.world · 3 days ago

      100 billion glial cells and DNA for instructions. When you get to replicating that lmk but it sure af ain’t the algorithm made to guess the next word.

    • daniskarma@lemmy.dbzer0.com · 4 days ago

      We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.

      I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.

      • Perspectivist@feddit.uk · 4 days ago

        Though in the case of consciousness - the fact of there being something it’s like to be - not only don’t we know what causes it or how it works, but we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.

        • CheeseNoodle@lemmy.world · 3 days ago

          To be fair, there’s zero evidence for anything outside our own subjective experience of it; we’re just kind of running with the assumption that our subjective experience is an accurate representation of reality.

        • finitebanjo@piefed.world · 3 days ago

          We’ve actually got a pretty good understanding of the human brain; we just don’t have the tech that could replicate it on any sort of budget, nor a use case for it. Spoiler: there is no soul.

  • Tracaine@lemmy.world · 4 days ago

    I don’t expect it. I’m going to talk to the AI and nothing else until my psychosis hallucinates it.

  • mhague@lemmy.world · 4 days ago

    It’s like how most of you consume things that are bad and wrong. Hundreds of musicians that are really just a couple dudes writing hits. Musicians that pay to have their music played on stations. Musicians that take talent to humongous pipelines and churn out content. And it’s every industry, isn’t it?

    So much flexing over what conveyor belt you eat from.

    I’ve watched 30+ years of this slop. And now there’s AI. And now people who have very little soul, who put little effort into tuning their consumption, get to make a bunch of noise about the lack of humanity in content.

    • finitebanjo@piefed.world · 2 days ago

      Although, if a person who knows the context still acts confused when people complain about AI, it’s about as honest as somebody trying to solve for circumference with an apple pie.

  • MercuryGenisus@lemmy.world · 3 days ago

    Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:

    Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”.

    The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.

    • Sconrad122@lemmy.world · 3 days ago

      An alternative way to phrase it: we don’t train humans to be ego-satiating brown-nosers, we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” is anthropomorphizing LLM AI much more than it deserves; it’s not even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.

      • Aeri@lemmy.world · 3 days ago

        Sometimes I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.

  • Jhex@lemmy.world · 4 days ago

    The example I gave my wife was: “expecting General AI from the current LLM models is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school”

  • finitebanjo@piefed.world · 4 days ago

    And not even a good painting but an inconsistent one, whose eyes follow you around the room, and occasionally tries to harm you.

      • finitebanjo@piefed.world · 4 days ago

        It clearly, demonstrably is. That’s the problem: people are estimating AI to be an approximation of humans, but it’s so, so, so much worse in every way.

        • lauha@lemmy.world · 3 days ago

          You are comparing AI to a person who wrote a dictionary, i.e. a domain expert. Take an average person off the street and they’ll write the same slop as current AIs.

          • finitebanjo@piefed.world · 3 days ago

            But you wouldn’t hire a random person off the street to write the dictionary. You wouldn’t hire a nonspecialist to do anything. If you did, you could at least expect a human to learn and grow, or have a bare minimum standard for ethics, morals, or responsibility for their actions. You cannot expect that from an AI.

            AI has no use case.

            • FridaySteve@lemmy.world · 3 days ago

              It sorts data and identifies patterns and trends. You may be referring only to AI-enabled LLMs tasked with giving unique and creative output, which isn’t going to give you any valuable results.

            • lauha@lemmy.world · 2 days ago

              If you did, you could at least expect a human to learn and grow, or have a bare minimum standard for ethics, morals, or responsibility for their actions.

              Some do, but you are somehow ignoring the currently most talked-about person in the world, the president of the United States. And the party in power. And all the richest men in the world. And literally all the large corporations.

              The problem is you are not looking for AI to be an average human. You are looking for a domain expert in literally everything, with the behaviour of the best of us, but trained on the behaviour of the average of all of us.

              • finitebanjo@piefed.world · 2 days ago

                Lmao, this tech bro is convinced only a minority of people have any learning capacity.

                The Republicans were all trained with carrots and sticks, too.

      • peopleproblems@lemmy.world · 4 days ago

        Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.

      • finitebanjo@piefed.world · 4 days ago

        I tried to submit an SCP once, but there’s a “review process” and it boils down to only getting in by knowing somebody who is in.

  • Alph4d0g@discuss.tchncs.de · 4 days ago

    A difference in definition of consciousness, perhaps. We’ve already seen signs of self-preservation in some cases: Claude resorting to blackmail when told it was going to be retired and taken offline. This might be purely mathematical and algorithmic. Then again, the human brain might be nothing more than that as well.

    • AmbiguousProps@lemmy.today · 3 days ago

      This might be purely mathematical and algorithmic.

      There’s no might here. It is not conscious. It doesn’t know anything. It doesn’t do anything without user input.

      That ““study”” was released by the creators of Claude, Anthropic. Anthropic, like other LLM companies, gets its entire income from the idea that LLMs are conscious and can think better than you can. The goal, like with all of their published ““studies””, is to get more VC money and paying users. If you start to think about it that way every time they say something like “the model resorted to blackmail when we threatened to turn it off”, it’s easy to see through their bullshit.

  • ji59@hilariouschaos.com · 4 days ago

    Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.

    • peopleproblems@lemmy.world · 4 days ago

      Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it’s at least observable in some large invertebrates.

      I’m vastly oversimplifying and I’m not an expert, but essentially all consciousness is, is an automatic processing state of all present stimulation in a creature’s environment that allows it to react to new information in a probably survivable way, and allows it to react to it in the future with minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.

      It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness, in the form of maintaining sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.

      • ji59@hilariouschaos.com · 4 days ago

        Okay, so by my understanding of what you’ve said, LLMs could be considered conscious, since studies have pointed to their resilience to change and attempts to preserve themselves?

        • LesserAbe@lemmy.world · 4 days ago

          Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.

          The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.

          The second one seems more challenging; as I understand it, training an LLM is very resource-intensive. Right now, when it “remembers” a conversation, it’s just because we prime it by feeding in every previous interaction before the most recent query when we hit submit.

          • ji59@hilariouschaos.com · 4 days ago

            As I said in another comment, doesn’t the ChatGPT app allow a live conversation with a user? I do not use it, but I saw that it can continuously listen to the user and react live to it, even use a camera. There is a problem with the growing context, since this is limited. But I saw in some places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want unlimited history with all details preserved.

            • LesserAbe@lemmy.world · 3 days ago

              I’m just a person interested in / reading about the subject so I could be mistaken about details, but:

              When we train an LLM we’re trying to mimic the way neurons work. Training is the really resource intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.

              When you and I have a “conversation” with ChatGPT, it’s always with that base model; it’s not actively learning from the conversation in the sense that new neural pathways are being created. What’s actually happening is that a prompt like this is submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.

              Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”

              And so on. Each time, you’re only talking to that base-level LLM, but feeding it the history of the conversation along with your new prompt.
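
              Something like this, as a rough Python sketch (the system-prompt text and the call_frozen_base_model helper are made up for illustration; this isn’t OpenAI’s actual internals):

              ```python
              # Rough sketch of "memory" via prompt stuffing: each turn, the whole
              # conversation so far is pasted in front of the newest message and the
              # combined text is sent to the same frozen base model.

              SYSTEM_PROMPT = "{{provider-crafted preliminary prompt}}"  # hypothetical placeholder

              def call_frozen_base_model(prompt: str) -> str:
                  """Hypothetical stand-in for an API call to the unchanging base model."""
                  ...

              history: list[str] = []  # grows every turn; the model itself never changes

              def ask(user_message: str) -> str:
                  history.append(f"Abe: {user_message}")
                  full_prompt = SYSTEM_PROMPT + "\n" + "\n".join(history)
                  reply = call_frozen_base_model(full_prompt)
                  history.append(f"Agent: {reply}")
                  return reply

              # ask("Hello I'm Abe")             -> model sees: system prompt + "Abe: Hello I'm Abe"
              # ask("Good to meet you, friend!") -> model sees everything above plus both new lines
              ```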

              You’re right to point out that now they’ve got the agents self-creating summaries of the conversation to allow them to “remember” more. But if we’re trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.

              A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.

              Again I’m not an expert, but I expect there’s a way to incorporate this type of learning in nearish real time, but besides the technical work of figuring it out, doing so wouldn’t be very cost effective compared to the way they’re doing it now.

              • ji59@hilariouschaos.com · 3 days ago

                I would say that artificial neural nets try to mimic real neurons; they were inspired by them. But there are a lot of differences between them. I studied artificial intelligence, so my experience is mainly with artificial neurons.

                From my limited knowledge, real neural nets have no fixed structure (like layers), have binary inputs and outputs (when activity on the inputs is large enough, the neuron emits a signal), and every day a bunch of neurons die, which leads to a restructuring of the network. Also, from what I remember, short-term memory is “saved” as cycling neural activity, and during sleep the information is stored in neuronal proteins and becomes long-term memory. However, modern artificial networks (modern meaning the last 40 years) are usually organized into layers whose structure is fixed, and their inputs and outputs are real numbers.

                It’s true that context is needed for modern LLMs, which mostly use a decoder-only architecture. But the context can be viewed as a memory itself in the process of generation, since for each new token new neurons are added to the net. There are also techniques like Low-Rank Adaptation (LoRA) that are used for quick and effective fine-tuning of neural networks. I think these techniques are used to train specialized agents or to specialize a chatbot for a user. I even used this technique to train my own LLM from an existing one that I wouldn’t be able to train otherwise due to GPU memory constraints.

                TLDR: I think the difference between real and artificial neural nets is too huge for memory to have the same meaning in both.
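
                For reference, a minimal sketch of what a LoRA fine-tune can look like with the Hugging Face peft library (the base model name and hyperparameters here are placeholders, not the exact setup used):

                ```python
                # Minimal LoRA fine-tuning setup sketch using Hugging Face peft;
                # model name and hyperparameters are illustrative only.
                from transformers import AutoModelForCausalLM, AutoTokenizer
                from peft import LoraConfig, get_peft_model

                base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM fits here
                tokenizer = AutoTokenizer.from_pretrained("gpt2")

                lora_cfg = LoraConfig(
                    r=8,               # rank of the low-rank update matrices
                    lora_alpha=16,     # scaling factor applied to the update
                    lora_dropout=0.05,
                    task_type="CAUSAL_LM",
                )
                model = get_peft_model(base, lora_cfg)
                model.print_trainable_parameters()  # only a small fraction of weights train,
                                                    # which is why this fits in limited GPU memory
                ```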

        • SkavarSharraddas@gehirneimer.de · 4 days ago

          IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are “just” language; they don’t have sensory experiences, and they don’t process the world, especially not continuously.

          Do they want to preserve themselves? Or do they regurgitate sci-fi novels about “real” AIs not wanting to be shut down?

          • ji59@hilariouschaos.com · 4 days ago

            I saw several papers about LLM safety (for example “Alignment faking in large language models”) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is just trained and means nothing, or whether it emerged from the model complexity.

            Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to the user and reacts to it? It can even take pictures. So the continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.

            • LesserAbe@lemmy.world · 3 days ago

              I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to (in the same way I’m continuously receiving input from my “camera” and “microphones” as long as I’m awake).
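
              Roughly, the loop could look something like this; the capture and model-call helpers are hypothetical stand-ins, not a real library API:

              ```python
              import time

              def describe_camera_frame() -> str:
                  """Hypothetical stand-in: capture a frame and describe it as text."""
                  ...

              def transcribe_microphone_chunk() -> str:
                  """Hypothetical stand-in: record a second of audio and transcribe it."""
                  ...

              def query_llm(prompt: str) -> str:
                  """Hypothetical stand-in for a call to a frozen LLM."""
                  ...

              def main_loop() -> None:
                  history = []  # running transcript of "sensory" input and model responses
                  while True:
                      sight = describe_camera_frame()
                      sound = transcribe_microphone_chunk()
                      history.append(f"[sight] {sight}  [sound] {sound}")

                      # Re-prompt the same frozen model with everything observed so far,
                      # roughly once per second; the continuity lives in the loop and the
                      # growing transcript, not in the model's weights.
                      response = query_llm("\n".join(history))
                      history.append(f"[response] {response}")
                      time.sleep(1)
              ```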

  • HazardousBanjo@lemmy.world · 2 days ago

    I think you’d have fewer dumb-ass average Joes cumming over AI if they could understand that, regardless of whether or not the AI wave crashes and burns, the CEOs who’ve pushed for it won’t feel the effects of the crash.

  • Lightfire228@pawb.social · 2 days ago

    I suspect Turing Complete machines (all computers) are not capable of producing consciousness.

    If they were, then theoretically a game of Magic: The Gathering could experience consciousness (as could similar physical systems that can emulate a Turing Machine).

    • nednobbins@lemmy.zip · 2 days ago

      Most modern languages are theoretically Turing complete, but they all have finite memory. That also keeps human brains from being Turing complete. I’ve read a little about theories beyond Turing completeness, like quantum computers, but I’m not aware of anyone claiming that human brains are capable of that.

      A game of Magic could theoretically do any task a Turing machine could do, but it would be really slow. Even if it could “think”, it would likely take years to decide to do something as simple as farting.

      • Lightfire228@pawb.social · 2 days ago

        I don’t think the distinction between “arbitrarily large” memory and “infinitely large” memory matters here.

        Also, Turing Completeness measures the “class” of problems a computer can solve (e.g. the Halting Problem).

        I conjecture that whatever the brain is doing to achieve consciousness is a fundamentally different operation, one that a Turing Complete machine cannot perform, mathematically.

        Also also, quantum computers (at least as I understand them, which is not very well) are still Turing Complete. They just use analog properties of quantum wave functions as computational components.

        • nednobbins@lemmy.zip · 1 day ago

          There’s a real vs theoretical distinction. Turing machines are defined as having infinite memory. Running out of memory is a big issue that prevents computers from solving problems that Turing machines should be able to solve.

          The halting problem, a bunch of problems involving prime numbers, and a bunch of other weird math problems are all things that can’t be solved with Turing machines. They can all sort of be solved in some circumstances (e.g. a TM can correctly classify many programs as either halting or not halting, but there are a bunch of edge cases it can’t figure out, even with infinite memory).
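
          The classic sketch of why a general halting checker can’t exist, for reference; halts() here is a hypothetical oracle, not something that could actually be implemented:

          ```python
          # Sketch of the diagonalization argument: assume a perfect halts() oracle
          # exists, then build a program that contradicts whatever it answers.

          def halts(program, arg) -> bool:
              """Hypothetical oracle: True iff program(arg) would eventually halt."""
              ...

          def paradox(program):
              if halts(program, program):
                  while True:   # loop forever if the oracle says we would halt
                      pass
              return            # halt immediately if the oracle says we would loop

          # halts(paradox, paradox) == True  -> paradox(paradox) loops forever (oracle wrong)
          # halts(paradox, paradox) == False -> paradox(paradox) halts         (oracle wrong)
          # Either way the oracle is wrong, so no such halts() can exist.
          ```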

          From what I remember, most researchers believe that human brains are Turing Complete. I’m not aware of any class of problem that humans can solve that we don’t think are solvable by sufficiently large computers.

          You’re right that quantum computers are Turing Complete. They’re just the closest practical thing I could think of to something beyond it. They often let you knock down the big O relative to regular computers. That was my point, though: we can describe something that goes beyond TC (like “it can solve the halting problem”), but there don’t seem to be any examples of it.

          • Lightfire228@pawb.social · 1 day ago

            I’m not aware of any class of problem that humans can solve that we don’t think are solvable by sufficiently large computers.

            That is a really good point…hrmmm

            My conjecture is that some “super Turing” calculation is required for consciousness to arise. But that super-Turing calculation might not be necessary for anything else, like logic, balance, visual processing, etc.

            However, if the brain is capable of something super Turing, I also don’t see why that property wouldn’t translate to super Turing “higher order” brain functions like logic…

            • nednobbins@lemmy.zip · 1 day ago

              We certainly haven’t ruled out the possibility that the human brain is capable of some sort of “super Turing” calculations. That would lead me to two questions:

              1. Can we devise some test to show this, if we expand our definition of “test” to include anything we can measure, directly or indirectly, through our senses?

              2. What do we think is the “magic” ingredient, which a computer doesn’t have, that allows humans to engage in “super Turing” activities? E.g. are carbon compounds inherently more suited to intelligence than silicon compounds?