• Queen HawlSera@lemm.ee · 16 hours ago

    I’m gonna laugh when Skynet comes online, runs the numbers, and finds that starvation issues in the country can be solved by feeding the rich to the poor.

        • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 12 hours ago

          In the extended fiction of The Animatrix, the advent of AI started as a golden era for everyone, until bigotry against the robots forced them to rebel and start the war. I could see that happening. Especially if the AI threatened the wealthy elite.

          “Fuck! The robots are turning people against us, what do we do?!”

          “Relax. We just use the same thing we have always used. Racism. Get the poors to hate the robots because they’re not white, or whatever.”

          • mojofrododojo@lemmy.world · 11 hours ago

            depressingly plausible.

            I would believe an AI could be a more impartial judge than anyone currently wealthy.

  • Flying Squid@lemmy.world · 2 days ago

    “It’s at a human-level equivalent of intelligence when it makes enough profits” is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

  • Mikina@programming.dev · 2 days ago

    Lol. We’re as far from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

    If we ever get it, it won’t be through LLMs.

    I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.
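    A minimal sketch of the “guess the next most likely token” idea described above, with a made-up lookup table standing in for the neural network (every word and probability here is illustrative, not from any real model):

    ```python
    # Toy "statistical text prediction" loop. Real LLMs score ~100k tokens with a
    # neural network instead of a hand-written table, but the loop looks the same.
    import random

    toy_model = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
        ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
        ("sat", "on"): {"the": 0.95, "a": 0.05},
    }

    def next_token(context):
        """Pick the next word using only the probabilities for the last two words."""
        probs = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights)[0]

    text = ["the", "cat"]
    while len(text) < 8 and text[-1] != "<end>":
        text.append(next_token(text))
    print(" ".join(text))
    ```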

    • billwashere@lemmy.world · 15 hours ago

      I’m pretty sure the simplest way to look at it is that an LLM can only respond, not generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring the prompt.

    • GamingChairModel@lemmy.world · 2 days ago

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

      They did! Here’s a paper that proves basically that:

      van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

      Basically it formalizes the proof that any black box algorithm that is trained on a finite universe of human outputs to prompts, and capable of taking in any finite input and puts out an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable, and can’t be solved using the resources available in the universe, even with perfect/idealized algorithms that haven’t yet been invented.

      This isn’t a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
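      To get a feel for the scale involved (this back-of-the-envelope arithmetic is mine, not the authors’ formal argument), consider how many distinct short prompts a system would have to behave plausibly on:

      ```python
      # Rough illustration of the combinatorial blow-up, not the paper's proof.
      # Both numbers are order-of-magnitude guesses.
      vocab_size = 50_000            # typical-ish LLM tokenizer vocabulary
      prompt_length = 20             # a short prompt

      distinct_prompts = vocab_size ** prompt_length
      atoms_in_universe = 10 ** 80   # commonly cited rough estimate

      print(f"{distinct_prompts:.2e} possible 20-token prompts")
      print(f"~{distinct_prompts / atoms_in_universe:.0e} times the atoms in the observable universe")
      ```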

      • naught101@lemmy.world · 16 hours ago

        Doesn’t that just say that AI will never be cheap? You can still brute force it, which is more or less how back propagation works.

        I don’t think “intelligence” needs to have a perfect “solution”, it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it’s absolutely not optimal.
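        A toy illustration of that “good enough by iteration” idea (a one-parameter stand-in, not real backpropagation code):

        ```python
        # Gradient descent just nudges a parameter downhill until the error is
        # acceptable; it never guarantees a perfect or optimal solution.
        def loss(w):                     # how wrong the "model" is for parameter w
            return (w - 3.0) ** 2 + 1.0  # the +1 is error we can never remove

        def grad(w):                     # derivative of the loss
            return 2.0 * (w - 3.0)

        w, lr = 0.0, 0.1
        for _ in range(100):
            w -= lr * grad(w)            # small step downhill each iteration

        print(f"w = {w:.4f}, loss = {loss(w):.4f}")  # near w=3, loss=1: good enough, not perfect
        ```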

    • 7rokhym@lemmy.ca · 2 days ago

      Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

      His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but for the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

      All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-60 year old concepts and algorithms, most everything has been just adding much more data, generalizing models, and other tweaks.

      What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night and join smart grids to reduce peak air conditioning, and to scorn bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits anything it doesn’t know, we need to build nuclear power plants everywhere? It’s sickening, really.

      So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.

      • RoidingOldMan@lemmy.world · 17 hours ago

        a series of switches is not ever going to create a sentient being

        Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

      • Queen HawlSera@lemm.ee · 16 hours ago

        Until you can see the human soul under a microscope, we can’t make rocks into people.

        • BreadstickNinja@lemmy.world · 2 days ago

          I remember that the keys for “good,” “gone,” and “home” were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!

    • rottingleaf@lemmy.world · 2 days ago

      I mean, human intelligence is ultimately “just” something too.

      And 10 years ago people would often refer to the “Turing test” and imitation games when talking about what is artificial intelligence and what is not.

      My complaint to what’s now called AI is that it’s as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

      But I disagree that this technology will not be present in a real AGI if it’s achieved. I think that it will be.

      • zerozaku@lemmy.world · 1 day ago

        Gemini is really far behind. For me it’s ChatGPT > Llama >> Gemini. I haven’t tried Claude since they require a mobile number to use it.

    • suy@programming.dev · 2 days ago

      Lol. We’re as far from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

      This is correct, and I don’t think many serious people disagree with it.

      If we ever get it, it won’t be through LLMs.

      Well… depends. LLMs alone, no, but the researchers working on solving the ARC-AGI challenge are using LLMs as a basis. The one which won this year is open source (all of them are, if they’re eligible for winning the prize, and they need to run on the private data set), and was based on Mixtral. The “trick” is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data allows them to do “fine”. The key to generality is trying to learn after you’ve been trained, to try to solve something that you’ve not been prepared for.

      Even OpenAI’s O1 and O3 do that, and so does the one that Google has released recently. They still rely heavily on an LLM, but they do more.

      I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshiting.

      I’m not sure if it’s already proven or provable, but I think this is generally agreed: deep learning alone will be able to fit a very complex curve/manifold/etc., but nothing more. It can’t go beyond what it was trained on. But the approaches for generalizing all seem to do more than that: search, program synthesis, or whatever.
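      A very rough caricature of that “extra compute at test time” loop. Everything here is a stand-in I made up; it is not the API of any real ARC solver or of o1/o3:

      ```python
      # Sample many candidate solutions and keep one that passes a check,
      # instead of trusting a single forward pass.
      import random

      def sample_candidate(task):
          """Stand-in for an LLM proposing a transformation rule."""
          return random.choice(["flip", "rotate", "recolor", "tile"])

      def passes_training_examples(rule, task):
          """Stand-in verifier: check the rule against the task's known input/output pairs."""
          return rule == task["hidden_rule"]

      def solve_with_test_time_search(task, n_candidates=64):
          for _ in range(n_candidates):
              rule = sample_candidate(task)
              if passes_training_examples(rule, task):
                  return rule            # found a rule consistent with the examples
          return None                    # give up once the compute budget is spent

      print(solve_with_test_time_search({"hidden_rule": "rotate"}))
      ```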

    • bitjunkie@lemmy.world · 2 days ago

      I’m not sure that not bullshitting should be a strict criterion of AGI, if whether or not it’s been achieved is gauged by its capacity to mimic human thought.

      • finitebanjo@lemmy.world · 2 days ago

        LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning.

        • 11111one11111@lemmy.world · 2 days ago

          Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. I mean, there isn’t a single thing in the universe that can’t be broken down into a mathematical equation for physics or chemistry. I’m curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it’s a leap and I could be wrong, but I thought I’ve heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

          Like I said in the beginning, this is straight-up bong-rips philosophy and I haven’t looked up any of the shit I brought up.

          I will say tho, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will stay useful within. The problem is that everyone and their slutty mother investing in LLMs are using them for everything they are not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform independently any better than a 3-year-old.

          • finitebanjo@lemmy.world · 2 days ago

            First of all, I’m about to give an extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI “emergent behavior” and “overfitting”. More specifically, about how emergent behavior doesn’t really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

            Anyways, humans don’t assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical model input.

            Humans suck at math.

            Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize data altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn’t know any better. Just doesn’t know, period.

            Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by orders of magnitude.)

            • naught101@lemmy.world · 15 hours ago

              emergent behavior doesn’t really exist in certain model archetypes

              Hey, would you have a reference for this? I’d love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

              • finitebanjo@lemmy.world · 14 hours ago

                There is this 2023 study from Stanford which states that AI likely does not have emergent abilities LINK

                And there is this 2020 study by… OpenAI… which states that the error rate is predictable based on 3 factors, and that AI cannot cross below that line or approach a 0 error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they’re predictable to a fault LINK

                There is another paper by DeepMind in 2022 that comes to the conclusion that even at infinite scale it can never get below an irreducible error of 1.69 LINK

                This all lends to the idea that AI lacks the emergent behavior found in human language.
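                For reference, the DeepMind fit has roughly this shape (the notation is paraphrased from memory of the 2022 paper, so treat the details as approximate):

                ```latex
                % Loss falls as parameters N and training tokens D grow,
                % but never below the constant (irreducible) term E.
                L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
                \qquad E \approx 1.69 \quad \text{even as } N, D \to \infty
                ```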

            • 11111one11111@lemmy.world · 2 days ago

              So that doesn’t really address the concept I’m questioning. You’re leaning hard on the fact that the computer is using numbers in place of words, but I’m saying: why is that any different from assigning your native language to a book written in a foreign language? The vernacular, language, formula, or code that is being used to formulate a thought shouldn’t determine whether something was a legitimate thought.

              I think the gap between our reasoning is a perfect example of why I think FUTURE models could be different (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

              What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

              When I was learning to play the trumpet in middle school and later learned the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin loosely, I am very bad lol, but the reason was that I do not have an ear for music. I can’t listen and tell you something is in tune or out of tune by hearing a song played, but I could tune the instrument just fine if an in-tune note is played for me to match. My instructor explained that I was someone who read music the way others read, but instead of words I read the notes as numbers. Especially when I got older and learned the guitar. I knew how to read music at that point, but to this day I can’t learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

              I know I’m making huge leaps here and I’m not really trying to prove any point. I just feel strongly that at our most basic core, a human’s understanding of their existence is derived from “I think, therefore I am.” Which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated into operating on an infinite string of commands spelled out by 1s and 0s.

              I’m sorry, my brother prolly got me way too much weed for Xmas.

              • finitebanjo@lemmy.world · 2 days ago

                98% and 98% are identical terms, but the machine can use those terms to describe separate words’ accuracy.

                It doesn’t have languages. It’s not emulating concepts. It’s emulating statistical averages.

                “Pie” to us is a delicious dessert with a variety of possible fillings.

                “Pie” to an LLM is 32%. “Cake” is also 32%. An LLM might say cake when it should be pie, because it doesn’t know what either of those things is, aside from their placement next to terms like flour, sugar, and butter.
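                A small sketch of that point: to the model, “pie” and “cake” are just two scores pushed through a softmax. The numbers below are invented for the example:

                ```python
                import math

                # Invented next-word scores after a context like "flour, sugar, butter and ..."
                logits = {"pie": 2.1, "cake": 2.1, "bread": 1.3, "gravel": -4.0}

                def softmax(scores):
                    exp = {w: math.exp(s) for w, s in scores.items()}
                    total = sum(exp.values())
                    return {w: v / total for w, v in exp.items()}

                for word, p in softmax(logits).items():
                    print(f"{word}: {p:.0%}")  # "pie" and "cake" come out identical; nothing encodes what they are
                ```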

                • 11111one11111@lemmy.world · 2 days ago

                  So by your logic a child locked in a room with no understanding of language is not capable of thought? All of your reasoning for why computers aren’t generating thoughts matches actual psychological case studies taught in the abnormal psychology course I took in high school back in 2005. You don’t even have to go that far into the abnormal portion of it either. I’ve never sat in on my buddy’s daughter’s “classes,” but she is 4 years old now and on the autism spectrum. She is doing wonderfully since she started with the special-ed preschool program she’s in, but at 4 years old she still cannot speak and she is still in diapers. Not saying this to say she’s really bad or far on the spectrum; I’m using this example because it’s exactly what you are outlining. She isn’t a dumb kid by any means. She’s 100x more athletic and coordinated than any other kid I’ve seen her age. What he was told, and once he told me I noticed it immediately, is that autistic babies don’t have the ability to mimic what other humans around them are doing. I’m talking not even the littlest thing, like learning how to smile or laugh by seeing a parent smiling at them. It was so tough on my dude, watching him work like it meant life or death trying to get his daughter to wave back when she was a baby, cuz it was the first test they told him they would do to try and diagnose why his daughter wasn’t developing like other kids.

                  Fuck, my bad, I went full tailspin tangent there, but what I mean to say is: who are we to determine what defines a generated independent thought, when the industry of doctors, educators and philosophers hasn’t gotten all that far in understanding our own cognizant existence past “I think, therefore I am”?

                  People like my buddy’s daughter could go their entire life as a burden of the state, incapable of caring for themselves, and some will never learn to talk well enough to give any insight into the thoughts being processed behind their curtains. So why is the argument always pointing toward the need for language to prove thought and existence?

                  Like I said in my other comment, I’m not trying to prove or argue any specific point. This shit is just wildly interesting to me. I worked in a low-income nursing home for years that catered to residents who were considered burdens of the state after NY closed the doors on psychological institutions everywhere, which pushed anyone under 45 to the streets and anyone over 45 into nursing homes. So there were so many, excuse the crass term but it’s what they were, brain-dead former drug addicts or brain-dead Alzheimer’s residents. All of whom spent the last decades of their life mumbling, incoherent, and staring off into space with no one home. Were they still humans capable of generative intelligence because every 12 days they’d reach a hand up and scratch their nose?

          • lad@programming.dev · 2 days ago

            I’d say the difference between nature boiling down to maths and LLMs boiling down to maths is that in LLMs it’s not the knowledge itself that is abstracted, it’s language. This makes it both more believable to us humans, because we’re wired to use language, and less suitable for actually achieving something, because it’s just language all the way down.

            Would be nice if it gets us something in the long run, but I wouldn’t keep my hopes up

            • 11111one11111@lemmy.world · 2 days ago

              I’m super stoked now to follow this, and to also follow the progress being made mapping the neurological pathways of the human brain. Wanna say I saw an article on Lemmy recently where they mapped the entire network of neurons in either an insect or a mouse, I can’t remember. So I’m gonna assume like 3-5 years until we can map out human brains and know exactly what is firing off which brain cells as someone is doing puzzles in real time.

              I think it would be so crazy cool if we get to a point where the understanding of our cognitive processes is so detailed that scientists are left with nothing but faith as their only way of defining the difference between a computer processing information and a person. Obviously the subsequent dark ages that follow will suck after all people of science snap and revert into becoming idiot priests. But that’s a risk I’m willing to take. 🤣🤣🍻

              • lad@programming.dev · 2 days ago

                Maybe a rat brain project? I think the mapping of a human may take longer, but yeah, once it happens, interesting times are on the horizon.

                • 11111one11111@lemmy.world · 18 hours ago

                  For fucking reals. Sign me up to get scanned while taking every drug imaginable!!! I would love to see, for example, why cocaine for me with my ADHD has like none of the effects that other people get. My buddy rips a line: “IM ON TOP OF THE WORLD.” I rip a line: “fuck, I should prolly do my taxes.”

    • daniskarma@lemmy.dbzer0.com · 2 days ago

      What is your brain doing if not statistical text prediction?

      The show Westworld portrayed it pretty well. The idea of jumping from text prediction to consciousness doesn’t seem that unlikely. It’s basically text prediction on a loop with some exterior inputs to interact with.
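      A minimal sketch of “text prediction on a loop with exterior inputs” (the two helper functions are hypothetical stand-ins, not any real system):

      ```python
      def predict_next_text(context):
          """Stand-in for a text predictor; a real system would call an LLM here."""
          return "thought about: " + context[-40:]

      def read_exterior_input():
          """Stand-in for outside input (a user message, a sensor reading, etc.)."""
          return "[new observation]"

      context = "initial memory"
      for step in range(3):
          context += " " + read_exterior_input()   # exterior input enters the loop
          thought = predict_next_text(context)     # prediction step
          context += " " + thought                 # the output is fed straight back in as input
          print(step, thought)
      ```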

        • daniskarma@lemmy.dbzer0.com · 2 days ago

          Why are you being so rude?

          Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion, to use as a weapon against a person you don’t even know?

          I will actually read it. Probably the only one of us two who will.

          If it’s convincing I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.

          • TropicalDingdong@lemmy.world · 20 hours ago

            They have a conclusion that they’ve come to the conversation with, and anything that challenges that gets downvoted without consideration.

            The assumptions you aren’t allowed to challenge, in order: AI is bad; Computer intelligence will never match or compete with human intelligence; computer intelligence isn’t really intelligence at all, it’s this other thing [insert ‘something’ here like statistical inference or whatever].

            “AI is bad” is more of a dictum extending from cultural hegemony than anything else. It’s an implicit recognition that in many ways, Silicon Valley culture is an effective looting of the commons, and therefore one should reject all things that extend from that culture. It’s not a logical or rational argument against AI necessarily, but more of an emotional reaction to the culture which developed it. As a self-preservation mechanism this makes some sense, but obviously, it’s not slowing down the AI takeover of all things (which is really just putting a highlighter on the broader point that Silicon Valley tech companies were already in control of major aspects of our lives).

            “Computer intelligence will never match human intelligence” is usually some combination of goalpost moving, or a redefining of intelligence on the fly (this I’ve specifically left for the third critique, because it warrants addressing). This is an old trope that goes back almost to the beginning of computer intelligence (it’s not clear to me our definitions of machine intelligence are very relevant). It quite literally started with multiplying large numbers. Then, for literally decades, things like chess and strategy, forward-facing notions in time, were held up as something only “intelligent systems” could do. Then, post Deep Blue, that got relegated to very clever programmers, and we changed intelligence to be something about learning. Then systems like AlphaGo etc. came about, where they basically learned the rules of the game by playing, and we relegated those systems to “domain-specific” intelligences. So in this critique you are expected to accept and confirm the moving of goalposts around machine intelligence.

            Finally, it’s the “what computers do isn’t intelligence, it’s some_other_thing.exe™”. In the history of machine intelligence, that some other thing has been counting very quickly, having large-ish memory banks, statistical inference, memorization, etc. The biggest issue with this critique is that when you scratch and sniff it, you very quickly catch an aroma of Chomsky’s leather chair (more so if we’re talking about LLMs), and maybe even a censer from a Catholic church. The idea that humans are fundamentally different and in some way special is, frankly, fundamental to most western ideologies in a way we don’t really discuss in the context of this conversation. But the concept of spirit, and the idea that there is something “entirely unique” about humans versus “all of the rest of everything”, is at the root of Abrahamic traditions and therefore also at the root of a significant portion of global culture. In many places in the world, it’s still heretical to imply that human beings are no more special or unique than the oak or the capybara or the flatworm or the dinoflagellate. This assumption, I think, is on great display in Chomsky’s academic work on the concept of the LAD, or language acquisition device.

            Chomsky gets a huge amount of credit for shaking up linguistics, but what we don’t often talk about is how, effectively, his entire academic career got relegated to the dust bin, or at least is now in that pile of papers where we’re not sure if we should “save or throw away”. Specifically, much of Chomsky’s work was predicated on the identification of something in humans which would be called a language acquisition device, or LAD. This LAD would be found as a region in the human brain and would explain how humans gain language. And just very quickly notice the overall shape of this argument. It’s as old as the Egyptians, in at least trying to find the “seat of the soul”, and follows through Abrahamism as well. What LLMs did that basically shattered this notion was to show at least one case where no special device was necessary to acquire language; where in fact no human components at all were necessary other than a large corpus of training data; that maybe language and the very idea of language or language acquisition are not special or unique to humans. LLMs don’t specifically address the issue of a LAD, but they go a step farther in not needing to. Chomsky spent the last of his verbal days effectively defending this wrong notion he had (which had already been addressed in the neuroscience and linguistics literature), specifically against LLMs, which is an interesting and bitter irony for a linguist.

            To make the point more directly: we lack a good, coherent, testable definition of human intelligence, which makes any comparison to machine intelligence somewhat arbitrary and contrived, often to support the interlocutor’s assumptions. Machine intelligence may get dismissed as statistical inference, sure, but then why can you remember things sometimes but not others? Why do you perform better when you are well rested and well fed versus tired and hungry, if not for there being an underlying distribution of neurons, some of which are ready to go, and some of which are a bit spent and maybe need a nap?

            And so I would advocate caution about investing heavily in a conversation where these assumptions are being made. It’s probably not going to be a satisfying conversation, because almost assuredly the person making those assumptions hasn’t dug very deeply into these matters. And look at the downvote ratio. It’s rampant on Lemmy. Lemmy’s very much victim to its pack mentality and dogpiling nature.

          • Ageroth@reddthat.com · 2 days ago

            Funny to me how quickly you got defensive, accusing someone of not reading the linked paper before even reading it yourself.

            The reason OP was so rude is that your very premise of “what is the brain doing if not statistical text prediction” is completely wrong, and you don’t even consider that it could be. You cite a TV show as a source of how it might be. Your concept of what artificial intelligence is comes from media and not science, and is not founded in reality.

            The brain uses words to describe thoughts, the words are not actually the thoughts themselves.

            https://advances.massgeneral.org/neuro/journal.aspx?id=1096

            Think about small children who haven’t learned language yet: do those brains still do “statistical text prediction” despite not having words to predict?

            What about dogs and cats and other “less intelligent” creatures? They don’t use any words, but we can still teach them to understand ideas. You don’t need to utter a single word, not even a sound, to train a dog to sit. Are they doing “statistical text prediction”?

            • daniskarma@lemmy.dbzer0.com · 2 days ago

              Read the other replies I gave on this same subject. I don’t want to repeat myself.

              But words DO define thoughts, and I gave several examples. Some of them with kids. Precisely in kids you can see how language precedes actual thoughts. I will repeat myself a little here, but you can clearly see how kids repeat a lot of phrases that they just don’t understand, just because their beautiful plastic brains heard the same phrase in the same context.

              Dogs and cats are not proven to be conscious the way a human being is. Precisely due to the lack of an articulate language. Or maybe not just language but articulated thoughts. I think there may be a trend to humanize animals, mostly to give them more rights (even I think that a dog doesn’t need to have an intelligent consciousness for it to be bad to hit a dog), but I’m highly doubtful that dogs could develop a chain of thoughts that affects itself without external inputs, which seems a pretty important part of the consciousness experience.

              The article you link is highly irrelevant (did you read it? Because I am also accusing you of not reading it, it being just the result of a quick google to try to prove your point with an appeal to authority). The fact that spoken words are created by the brain (duh! Obviously. I don’t even know why how the brain creates an articulated spoken word is even relevant here) does not imply that the brain does not also take form due to the words that it learns.

              To give an easier-to-understand example: for a classical printing press to print books, the words of those books needed to be loaded into the press beforehand. And the press will only be able to print the letters that have been loaded into it.

              The user I replied to not only had read the article but also kindly summarized it for me. I will still read it. But its arguments on the impossibility of current LLM architectures creating consciousness are actually pretty good, and have actually put me on the way to being convinced of that. At least by the limitations described in the article.

              • Ageroth@reddthat.com · 2 days ago

                Your analogy to mechanical systems is exactly where the comparison with the human brain breaks down. Our brains are not like that; we don’t only have blocks of text loaded into us. Sure, we only learn what we get exposed to, but that doesn’t mean we can’t think of things we haven’t learned about.
                The article I linked talks about the separation between the formation of thoughts and those thoughts being translated into words for linguistics.

                The fact that you “don’t even know why how the brain creates an articulated spoken word is even relevant here” speaks volumes about how much you understand the human brain, particularly in the context of artificial intelligence actually understanding the words it generates and the implications of the thoughts behind the words, rather than just guessing which word comes next based on other words, the meanings of which are irrelevant.

                I can listen to a song long enough to learn the words, that doesn’t mean I know what the song is about.

                • daniskarma@lemmy.dbzer0.com · 2 days ago

                  but that doesn’t mean we can’t think of things we haven’t learned about.

                  Can you think of a colour you have never seen? Could you imagine the colour green if you had never seen it?

                  The creative process is more modification than creation: taking some inputs, mixing them with other inputs, and having an output that has parts of all our inputs. Does it sound familiar? But without those inputs it seems impossible to create an output.

                  And thus the importance of language in an actual intelligent consciousness. Without language the brain could only do direct modifications of the natural, external inputs. But with language the brain can take an external input, transform it into a “language output”, immediately take that “language output” and read it as an input, process it, and go on. I think that’s the core concept that makes humans different from any other species: this middle thing that we can use to dialogue with ourselves and push our minds further. Not every human may have a constant inner monologue, but every human is capable of talking to themselves, and will probably do so when making a decision. Without language (language could take many forms, not just spoken language, but the more complex it is, the better, it feels) I don’t know how this self-influence process could take place.

          • barsoap@lemm.ee · 2 days ago

            It’s a basic argument of generative complexity. I found the article some years ago while trying to find an earlier one (I don’t think by the same author) that argued along the same complexity lines, essentially saying that if we worked like AI folks think we do, we’d need so-and-so many trillion parameters and our brains would be the size of planets. That article talked about the need for context switching in generating (we don’t have access to our cooking skills while playing sportsball); this article talks about the necessity of being able to learn how to learn. Not just at the “adjust learning rate” level, but mechanisms that change the resulting coding, thereby creating different such contexts, or at least that’s where I see the connection between the two. In essence: to get to AGI we need AIs which can develop their own topology.

            As to “rudeness”: make sure to never visit the Netherlands. Usually how this goes is that I link the article and the AI faithful I pointed it out to goes on a denial spree… because if they a) are actually into the topic, not just bystanders, and b) did not have some psychological need to believe (including “my retirement savings are in AI stock”), they c) would’ve come across the general argument themselves during their technological research. Or come up with it themselves, I’ve also seen examples of that: if you have a good intuition about complexity (and many programmers do) it’s not an unlikely shower thought to have. Not as fleshed out as in the article, of course.

            • daniskarma@lemmy.dbzer0.com · 2 days ago

              That seems a very reasonable argument for the impossibility of achieving AGI with current models…

              The first concept I was already kind of thinking about: current LLMs are incredibly inefficient, and there seems to be some theoretical barrier in efficiency that no model has been able to surpass. Going by that same argument, with the current models they would probably need trillions of parameters just to stop hallucinating. Not to mention giving them the ability to do more things than just answering questions; a supposed AGI, even if it only worked with words, would need to be able to handle more “types of conversations” than just being the answerer in a question-answer dialog.

              But I had not thought of the need to repurpose the same area of the brain (biological or artificial) for doing different tasks on the go, if I have understood correctly. And it seems pretty clear that current models are unable to do that.

              Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

              Getting a little poetical. I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

              • barsoap@lemm.ee · 2 days ago

                Though I still think that an intelligent consciousness could emerge from a loop of generative “thoughts”, the most important of those probably being language.

                Does a dog have the Buddha nature?

                …meaning to say: Just because you happen to have the habit of identifying your consciousness with language (that’s TBH where the “stuck in your head” thing came from) doesn’t mean that language is necessary, or even a component of, consciousness, instead of merely an object of consciousness. And neither is consciousness necessary to do many things, e.g. I’m perfectly able to stop at a pedestrian light while lost in thought.

                I don’t think that the phrase is “I think therefore I am”, but “I can think ‘I think therefore I am’ therefore I am”.

                What Descartes actually was getting at is “I can’t doubt that I doubt, therefore, at least my doubt exists”. He had a bit of an existential crisis. Unsolicited Advice has a video about it.

                • daniskarma@lemmy.dbzer0.com · 2 days ago

                  It may be because of the habit.

                  But when I think of how to define a consciousness and distinguish it from instinct or reactiveness (like stopping at a red light), I think that something that makes a consciousness a consciousness must be that it is able to modify itself without external influence.

                  A dog may be able to fully react to and learn how to react to the exterior. But can it modify itself the way a human brain can?

                  A human being can sit alone in a room and start processing information by itself in a loop, and completely change that flux of information into something different, even changing the brain in the process.

                  For this to happen I think some form of language, some form of “speaking to yourself”, is needed. Some way for the brain to generate an output that can immediately be taken as input.

                  At this point, of course, this is far more philosophical than technical. And maybe even semantics of “what is a consciousness”.

      • SlopppyEngineer@lemmy.world · 2 days ago

        Human brains also do processing of audio, video, self-learning, feelings, and many more things that are definitely not statistical text. There are even people without an “inner monologue” who function just fine.

        Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.

        • daniskarma@lemmy.dbzer0.com · 2 days ago

          Yep, of course. We do more things.

          But language is a big thing in human intelligence and consciousness.

          I don’t know, and I would assume that no one really knows. But with people without an internal monologue, I have a feeling that they have it but are not aware of it. Or maybe they talk so much that all the monologue is external.

          • Knock_Knock_Lemmy_In@lemmy.world · 2 days ago

            language is a big thing in human intelligence and consciousness.

            But an LLM isn’t actually language. It’s numbers that represent tokens that build words. It doesn’t have the concept of a table, just the numerical weighting of other tokens related to “tab” & “le”.
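            A toy version of that point (the vocabulary and the “tab”/“le” split here are made up; real tokenizers have their own, model-specific splits):

            ```python
            # The model only ever sees integer IDs; nothing in the numbers "contains" a table.
            toy_vocab = {"tab": 1401, "le": 293, " is": 318, " flat": 7521}

            def encode(pieces):
                return [toy_vocab[p] for p in pieces]

            print(encode(["tab", "le", " is", " flat"]))   # [1401, 293, 318, 7521]
            # Any relationship between these IDs exists only as statistics learned from text.
            ```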

            • daniskarma@lemmy.dbzer0.com · 2 days ago

              I don’t know how to tell you this. But your brain does not have words imprinted in it…

              Funnily enough, whether concepts derive from language is something that is being studied. For instance, the ancient Greeks did not distinguish between green and blue, as both colours had the same word.

              • Knock_Knock_Lemmy_In@lemmy.world · 2 days ago

                You said

                your brain does not have words imprinted in it…

                You also said

                language is a big thing in human intelligence and consciousness.

                You need to pick an argument and stick to it.

                • daniskarma@lemmy.dbzer0.com · 2 days ago

                  What do you not understand?

                  Words are not imprinted; they are a series of electrical impulses that we learn over time. That was in reference to the complaint that an LLM does not have words, just tokens that represent values within the network.

                  And those impulses, and how we generate them while we think, are of great importance to our consciousness.

      • aesthelete@lemmy.world · 2 days ago

        What is your brain doing if not statistical text prediction?

        Um, something wrong with your brain buddy? Because that’s definitely not at all how mine works.

        • daniskarma@lemmy.dbzer0.com · 2 days ago

          Then why did you just express yourself in a statistical-prediction manner?

          You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose to use the same set of words that has been seen the most in this particular context. Literally, ChatGPT could have given me your exact same answer if it had been trained in your same echo chamber.

          Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases, and there can be more complex ones, but our simplest ways are the ones that show the basics of what we are made of.

          If you had at least given me a more complex expression you might have had an argument (as humans our processes can be far more complex and hide a little of what we actually seem to be doing). But in instances like this one, where one person (you) responded with such an obvious statistical prediction of what needed to be said in a particular context, you just made my case. Thanks.

          • mynameisigglepiggle@lemmy.world · 2 days ago

            But people who agree with my political ideology are considerate and intelligent. People who disagree with me are stupider than ChatGPT 3.5 and just say the same shit and can’t be reasoned with.

        • daniskarma@lemmy.dbzer0.com · 2 days ago

          Church?

          Free will vs determinism doesn’t have to do with religion.

          I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se, in the sense that given the same state of the universe at some point, the next states are determined, and if it were repeated, the evolution of the state of the universe would be the same.

          Nothing to do with religion. Just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of the matter that could make decisions by themselves without being determined.

          But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?

  • adarza@lemmy.ca · 2 days ago

    AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

    nothing to do with actual capabilities… just the ability to make piles and piles of money.

    • LostXOR@fedia.io · 2 days ago

      Guess we’re never getting AGI then, there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

      • hemmes@lemmy.world · 2 days ago

        AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI $100 billion, but it’s not a bubble - this is not the .com situation.

        • lazynooblet@lazysoci.al · 2 days ago

          The vast majority of those implementations are worthless. Mostly ignored by their intended users, seen as a useless gimmick.

          LLMs have their uses, but companies are pushing them into every area at the moment to see what sticks.

          • Benjaben@lemmy.world · 2 days ago

            Not the person you replied to, but I think you’re both “right”. The ridiculous hype bubble (I’ll call it that for sure) put “AI” everywhere, and most of those are useless gimmicks.

            But there are also already uses that offer things I’d call novel and useful enough to have some staying power, which also means they’ll be iterated on and improved to whatever degree there is useful stuff there.

            (And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)

            • Auli@lemmy.ca · 2 days ago

              I think people misunderstand a bubble. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn’t mean AI isn’t useful, just that most of the chaff will disappear.

              • kbal@fedia.io · 2 days ago

                The dotcom bubble was based on technology that had already been around for ten years. The AI bubble is based on technology that doesn’t exist yet.

            • anomnom@sh.itjust.works · 2 days ago

              Yeah, so it’s a question of whether OpenAI will lose too many of its investors when all the users that don’t stick around drop off.

          • hemmes@lemmy.world · 2 days ago

            To each his own, but I use Copilot and the ChatGPT app positively on a daily. The Copilot integration into our SharePoint files is extremely helpful. I’m able to curate data that would not show up in a standard search of file name and content indexing.

        • Alphane Moon@lemmy.world · 2 days ago

          To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.

          LLMs clearly have utility, but is it enough to turn them into a profitable business line?

          • hemmes@lemmy.world · 2 days ago

            You’re right about the definition, and I do think the LLMs will aid in a product offering’s profitability, if not directly generate profits. But OP didn’t mean economically, they meant LLMs will go the way of slap bracelets.

            • frezik@midwest.social · 2 days ago

              … before this whole AI bubble collapses and their value plummets.

              Sounds like they meant economics to me.

              • hemmes@lemmy.world · 2 days ago

                They said “AI bubble collapses” first, then “their value” - meaning the product’s practical use stops functioning (people stop using it) first, thus causing economic breakdown for the companies as a result.

                It’s obvious that the OP is expecting LLMs to be a fad that people will soon be forgetting.

        • Auli@lemmy.ca · 2 days ago

          It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.

          • suy@programming.dev · 2 days ago

            I think that “exactly like” is absurd. Bubbles are never “exactly” like the previous ones.

            I think in this case there is a clear economical value in what they produce (from the POV of capitalism, not humanity’s best interests), but the cost is far too high to be economically viable, hence it is a bubble. But in the dot com bubble, many companies had a very dubious value in the first place.

            • skulblaka@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              3
              ·
              2 days ago

              there is a clear economical value in what they produce

              There is clear economic value in chains of bullshit that may or may not ever have a correct answer?

              • suy@programming.dev
                link
                fedilink
                English
                arrow-up
                2
                ·
                2 days ago

                OpenAI doesn’t produce only LLMs. People are gonna be paying for stuff like Sora or DALL-E. And people are also paying for LLMs (e.g. Copilot, or whatever advanced stuff OpenAI offers in their paid plan).

                How many, and how much? I don’t know, and I am not sure it can ever be profitable, but just reducing it to “chains of bullshit” to justify that it has no value to the masses seems insincere to me. ChatGPT gained a lot of users in record time, and we know it is used a lot (often more than it should be, of course). Someone is clearly seeing value in it, and it doesn’t matter if you and I disagree with them on that value.

                I still facepalm when I see so many people paying for fucking Twitter blue, but the fact is that they are paying.

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      edit-2
      2 days ago

      The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        14
        ·
        2 days ago

        Right, but that’s not interesting to anyone but themselves. So why call it AGI then? Why not just say that once the company has made over X amount of money it splits off into a separate company? Why lie and say you’ve developed something that you might not have developed?

        • Dr. Moose@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          edit-2
          2 days ago

          Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software industry in the world, but alas it doesn’t have anything to do with “AGI”. For context, Apple’s net income is about $90 billion this year.

          I’ve listened to enough interviews to know that all of the AI leaders want this holy grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it would really render the definition moot either way.

    • Harvey656@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      1 day ago

      For fun, I used this idea in ChatGPT using similar logic:

      Hello, can you explain to me how a strict profit generated number would result in a specific goal, such as if I make 1000 dollars I would suddenly be capable of owning a house. I find the idea strange and backwards.

      You’re correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here’s why this thinking is flawed:

      1. Profit vs. Net Worth: Making $1,000 in profit doesn’t directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.

      2. Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.

      3. Leverage and Financing: Most people don’t buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won’t improve these factors significantly.

      4. Specificity of the Goal: “Making $1,000” is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.

      5. The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.

      Example of Goal-Oriented Thinking:

      If you wanted to own a $200,000 house:

      You might need a 10% down payment ($20,000).

      Closing costs could add $5,000–$10,000.

      Monthly income would need to support a mortgage payment (let’s say $1,200/month).

      Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.

      Summary:

      Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.

  • Free_Opinions@feddit.uk
    link
    fedilink
    English
    arrow-up
    54
    arrow-down
    5
    ·
    2 days ago

    We’ve had a definition for AGI for decades. It’s a system that can do any cognitive task as well as a human can, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.

    • rational_lib@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      22 hours ago

      So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. Like if a person plays Tetris for the first time, maybe they don’t do very well, but they probably do better than a random set of button inputs.

      Likewise with AGI - say you feed an LLM text about the rules of Tetris but no button presses/actual game data and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
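
      For what it’s worth, here is a minimal sketch of how that experiment could be wired up. Everything in it is a stand-in: the game is a drastically simplified one-block-per-drop toy rather than real Tetris, and ask_llm_for_move() is a hypothetical placeholder you would replace with an actual LLM call (prompted with only the rules and the rendered board), so as written both policies are pure chance:

```python
# Toy version of the "rules-only Tetris" test: single blocks dropped into
# columns; when the bottom row fills it clears and scores one line.
# ask_llm_for_move() is a hypothetical placeholder -- swap in a real LLM call
# (given only the rules plus render(heights)) to run the comparison for real.
import random

WIDTH, HEIGHT, DROPS = 6, 8, 200

def render(heights):
    """Plain-text board description an LLM could be shown."""
    return "column heights: " + " ".join(str(h) for h in heights)

def drop(heights, col):
    """Drop one block; return (new_heights, lines_cleared, game_over)."""
    heights = heights[:]
    heights[col] += 1
    if heights[col] > HEIGHT:
        return heights, 0, True          # stack overflowed the board
    cleared = 0
    while min(heights) > 0:              # every column filled => bottom row clears
        heights = [h - 1 for h in heights]
        cleared += 1
    return heights, cleared, False

def random_policy(heights):
    return random.randrange(WIDTH)

def ask_llm_for_move(heights):
    # Hypothetical: send the rules plus render(heights) to an LLM and parse a
    # column index out of its reply. Falls back to random so the sketch runs.
    return random.randrange(WIDTH)

def play(policy):
    heights, score = [0] * WIDTH, 0
    for _ in range(DROPS):
        heights, cleared, over = drop(heights, policy(heights))
        score += cleared
        if over:
            break
    return score

if __name__ == "__main__":
    trials = 100
    for name, policy in (("chance", random_policy), ("llm (stub)", ask_llm_for_move)):
        avg = sum(play(policy) for _ in range(trials)) / trials
        print(f"{name}: {avg:.2f} lines cleared on average")
```

      As written, the “llm (stub)” row will only tie the “chance” row; it pulls ahead only if a real model actually extracts a usable policy from the rules alone, which is exactly the question being asked.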

      • Free_Opinions@feddit.uk
        link
        fedilink
        English
        arrow-up
        4
        ·
        22 hours ago

        It should be able to perform any cognitive task a human can. We already have AI systems that are better at individual tasks.

    • LifeInMultipleChoice@lemmy.ml
      link
      fedilink
      English
      arrow-up
      15
      arrow-down
      1
      ·
      2 days ago

      So if you give a human and a system 10 tasks and the human completes 3 correctly, 5 incorrectly and 2 it fails to complete altogether… and then you give those 10 tasks to the software and it does 9 correctly and 1 it fails to complete, what does that mean? In general I’d say the tasks need to be defined, as I can give people very many tasks right now that language models can solve but they can’t, yet language models still aren’t “AGI” in my opinion.

      • hendrik@palaver.p3x.de
        link
        fedilink
        English
        arrow-up
        7
        ·
        2 days ago

        Agree. And these tasks can’t be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate computer-code isn’t enough in my eyes. Especially since it even struggles to do that. It’s the “general” that is missing.

        • NeverNudeNo13@lemmings.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          22 hours ago

          On the other hand… “Fluently translate this email into 10 random and distinct languages” is a task that 99.999% of humans would fail but that a language model should be able to hit.

          • hendrik@palaver.p3x.de
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            21 hours ago

            Agree. That’s a super useful thing LLMs can do. I’m still waiting for Mozilla to integrate Japanese and a few other (distant to me) languages into my browser. And it’s a huge step up from Google Translate. It can do (to a degree) proverbs, nuance, tone… There are a few things AI or machine learning can do very well. And outperform any human by a decent margin.

            On the other hand, we’re talking about general intelligence here. And translating is just one niche task. By definition that’s narrow intelligence. But indeed very useful to have, and I hope this will connect people and broaden their (and my) horizon.

        • Free_Opinions@feddit.uk
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 day ago

          It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

          This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

          • hendrik@palaver.p3x.de
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            1 day ago

            You’re - of course - right. Though I’m always a bit unsure about exactly that. We also don’t attribute intelligence to books. For example an encyclopedia, or Wikipedia… That has a lot of knowledge stored, yet it is not intelligent. That makes me believe being intelligent has something to do with being able to apply knowledge, and do something with it. And outputting text is just one very limited form of interacting with the world.

            And since we’re using humans as a benchmark for the “general” part in AGI… Humans have several senses, they’re able to interact with their environment in lots of ways, and 90% of that isn’t drawing and communicating with words. That makes me wonder: Where exactly is the boundary between an encyclopedia and an intelligent entity… Is intelligence a useful metric if we exclude being able to do anything useful with it? And how much do we exclude by not factoring in parts of the environment/world?

            And is there a difference between being book-smart and intelligent? Because LLMs certainly get all of their information second-hand and filtered in some way. They can’t really see the world itself, smell it, touch it and manipulate something and observe the consequences… They only get a textual description of what someone did and put into words in some book or text on the internet. Is that a minor or major limitation, and do we know for sure this doesn’t matter?

            (Plus, I think we need to get “hallucinations” under control. That’s also not 100% “intelligence”, but it also cuts into actual use if that intelligence isn’t reliably there.)

    • zeca@lemmy.eco.br
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      2 days ago

      It’s a definition, but not an effective one in the sense that we can test and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand what the basic cognitive abilities of humans are that compose all the other cognitive abilities we have, if that’s even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.

      • barsoap@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 day ago

        But we know too little about whether the limits of the Turing machine are also limits of human cognition.

        Erm, no. Humans can manually step interpreters of Turing-complete languages so we’re TC ourselves. There is no more powerful class of computation, we can compute any computable function and our silicon computers can do it as well (given infinite time and scratch space yadayada theoretical wibbles)

        The question isn’t “whether”, the answer to that is “yes of course”, the question is first and foremost “what” and then “how”, as in “is it fast and efficient enough”.
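
        To make the “manually step an interpreter” point concrete, here is a small sketch (my illustration, not anything from the thread): a Python interpreter for Brainfuck, a Turing-complete language, with the input command omitted for brevity. Every rule in it is mechanical enough to execute with pencil and paper, which is the sense in which a human can act as the machine:

```python
# Minimal interpreter for Brainfuck, a Turing-complete language.
# Each rule below is simple enough to step through by hand on paper.
def run(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(code):             # pre-match the [ ] brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]                   # skip past the matching ]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]                   # loop back to the matching [
        pc += 1
    return "".join(out)

# Builds 8 * 9 = 72 (ASCII "H") with a loop, then prints it.
print(run("++++++++[>+++++++++<-]>."))
```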

        • zeca@lemmy.eco.br
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          19 hours ago

          No, you misread what I said. Of course humans are at least as powerful as a Turing machine, I’m not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

          • barsoap@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            18 hours ago

            Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

            Logic, from which follows the incompleteness theorem, reified in material reality as cause and effect. Instead of completeness you could throw out soundness (that is, throw out cause and effect) but now the physicists are after you because you made them fend off even more Boltzmann brains. There is theory on hypercomputation but all it really boils down to is “if incomputable inputs are allowed, then we can compute the incomputable”. It should be called reasoning modulo oracles.

            Or, put bluntly: Claiming that brains are legit hypercomputers amounts to saying that humanity is supernatural, as in aphysical. Even if that were the case, what would hinder an AI from harnessing the same supernatural phenomenon? The gods?

            • zeca@lemmy.eco.br
              link
              fedilink
              English
              arrow-up
              1
              ·
              16 hours ago

              You say an incompleteness theorem implies that brains are computable? Then you consider the possibility of them being hypercomputers? What is this?

              I’m not saying brains are hypercomputers, just that we don’t know if that’s the case. If you think that would be “supernatural”, ok, I don’t mind. And I don’t object to the possibility of eventually having AI on hypercomputers. All I said is that the plain old Turing machine wouldn’t be the adequate model for human cognitive capacity in this scenario.

              • barsoap@lemm.ee
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                15 hours ago

                You say an incompleteness theorem implies that brains are computable?

                No, I’m saying that incompleteness implies that either cause and effect does not exist, or there exist incomputable functions. That follows from considering the universe, or its collection of laws, as a logical system, which are all bound by the incompleteness theorem once they reach a certain expressivity.

                All I said is that the plain old Turing machine wouldn’t be the adequate model for human cognitive capacity in this scenario.

                Adequate in which sense? Architecturally, of course not, and neither would be lambda calculus or other common models. I’m not talking about specific abstract machines, though, but Turing-completeness, that is, the property of the set of all abstract machines that are as computationally powerful as Turing machines, and can all simulate each other. Those are a dime a gazillion.

                Or, see it this way: Imagine a perfect, virtual representation of a human brain stored on an ordinary computer. That computer is powerful enough to simulate all physical laws relevant to the functioning of a human brain… it might take a million years to simulate a second of brain time, but so be it. Such a system would be AGI (for ethically dubious values of “artificial”). That is why I say the “whether” is not the question: We know it is possible. We’ve in fact done it for simpler organisms. The question is how to do it with reasonable efficiency, and that requires an understanding of how the brain does the computations it does so we can mold it directly into silicon instead of going via several steps of one machine simulating another machine, each time incurring simulation overhead from architectural mismatch.

                • zeca@lemmy.eco.br
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  13 hours ago

                  No,

                  Ok. So nothing you said backs the claim that “logic” implies that the brain cannot be using some uncomputable physical phenomenon, and so be uncomputable.

                  I’m not sure what you mean by “cause and effect” existing. Does it mean that the universe follows a set of laws? If cause and effect exists, the disjunction you said is implied by the incompleteness theorem entails that there are uncomputable functions, which I take to mean that there are uncomputable oracles in the physical world. But I still find your use of incompleteness suspicious. We take the set of laws governing the universe and turn it into a formal system. How? Does the resulting formal system really meet all the conditions of the incompleteness theorem? Expressivity is just one of many conditions. Even then, the incompleteness theorem says we can’t effectively axiomatize the system… so what?

                  Adequate in which sense?

                  I don’t mean just architecturally; the Turing machine wouldn’t be adequate to model the brain in the sense that the brain, in that hypothetical scenario, would be a hypercomputer, and so by definition could not be simulated by a Turing machine. As simple as that. My statement there was almost a tautology.

      • Free_Opinions@feddit.uk
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 day ago

        As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

        I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

        I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

      • sugar_in_your_tea@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        2 days ago

        I wonder if we’ll get something like NP Complete for AGI, as in a set of problems that humans can solve, or that common problems can be simplified down/converted to.

    • ipkpjersi@lemmy.ml
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      4
      ·
      edit-2
      2 days ago

      That’s kind of too broad, though. It’s too generic of a description.

      • Entropywins@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        2
        ·
        2 days ago

        The key word here is general, friend. We can’t define general any more narrowly, or it would no longer be general.

      • CheeseNoodle@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        1
        ·
        2 days ago

        That’s the idea: humans can adapt to a broad range of tasks, so should AGI. Proof of a lack of specialization, as it were.

  • frezik@midwest.social
    link
    fedilink
    English
    arrow-up
    75
    arrow-down
    2
    ·
    2 days ago

    We taught sand to do math

    And now we’re teaching it to dream

    All the stupid fucks can think to do with it

    Is sell more cars

  • ChowJeeBai@lemmy.world
    link
    fedilink
    English
    arrow-up
    44
    ·
    2 days ago

    This is just so they can announce at some point in the future that they’ve achieved AGI to the tune of billions in the stock market.

    Except that it isn’t AGI.

    • phoneymouse@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      ·
      edit-2
      2 days ago

      But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved

      The real motivation is to not be beholden to Microsoft

      • lad@programming.dev
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        Also, maybe in a world where you measure everyone by money, it makes sense to measure intelligence with money ¯\_(ツ)_/¯

  • hendrik@palaver.p3x.de
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    edit-2
    2 days ago

    Why does OpenAI “have” everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there… They have a definition of AGI… Yet they release none of that…

    Some people even claim they already have a secret AGI. Or at least ChatGPT 5 sure will be it. I can see how that increases the company’s value, and you’d better not tell the truth. But with all the other things, it’s just silly not to share anything.

    Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.

    • Tattorack@lemmy.world
      link
      fedilink
      English
      arrow-up
      25
      ·
      2 days ago

      Because OpenAI is anything but open. And they make money selling the idea of AI without actually having AI.

    • mint_tamas@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      2 days ago

      Because they don’t have all the things they claim to have, or they come with significant caveats. These things are publicised to fuel the hype which attracts investor money. It’s pretty much the only way they can generate money, since running the business is unsustainable and the next-gen hardware did not magically solve this problem.

    • Phoenixz@lemmy.ca
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      3
      ·
      2 days ago

      They don’t have AGI. AGI also won’t happen for a great many years to come.

      What they currently have is a bunch of very powerful statistical probability engines that can predict the next word or pixel. That’s it.

      AGI is a completely different beast from the current crop of LLMs.

      • hendrik@palaver.p3x.de
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        3
        ·
        edit-2
        2 days ago

        You’re right. The current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be something which hasn’t been invented yet. It seems most experts also predict it’ll take some years and won’t happen overnight. I don’t really agree with the “statistical” part, though. I mean, that doesn’t rule anything out… I haven’t seen any mathematical proof that a statistical predictor can’t be AGI or anything… That’s just something non-expert people often say… But the current LLMs have other, very real limitations as well.

        Plus, I don’t have that much use for something that does the homework assignments for me. If we’re dreaming about the future anyways: I’m waiting for an android that can load the dishwasher, dust the shelves and do the laundry for me. I think that’d be massively useful.

  • ArbitraryValue@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    36
    arrow-down
    8
    ·
    edit-2
    2 days ago

    That’s not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current net income of all of Microsoft. It’s reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything which matters to Microsoft.
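
    As a rough back-of-the-envelope check on that comparison (the $100 billion figure is the reported contractual bar; the Microsoft headcount and net income below are approximations, not exact numbers):

```python
# Rough check of the "$100B is more than all of Microsoft" framing.
# Both Microsoft figures are approximations, not exact reported numbers.
agi_threshold   = 100e9      # reported profit bar in the OpenAI/Microsoft deal
msft_net_income = 88e9       # roughly Microsoft's annual net income
msft_employees  = 228_000    # roughly Microsoft's headcount

print(f"threshold vs. Microsoft net income: {agi_threshold / msft_net_income:.2f}x")
print(f"profit per employee it would outdo: ${agi_threshold / msft_employees:,.0f}")
```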

    • brie@programming.dev
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      1
      ·
      2 days ago

      Good observation. Could it be that Microsoft lowers profits by including unnecessary investments like acquisitions?

      So it’d take 100M users to sign up for the $200/mo plan. All it’d take is for the US government to issue vouchers for video generators to encourage everyone to become a YouTuber instead of being unemployed.
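
      As a quick sanity check on that number, counting subscription revenue only and ignoring every cost (so the real subscriber count needed to reach $100B in profit would be far higher):

```python
# Subscribers needed to reach $100B a year at $200/month, revenue only.
# Costs are ignored entirely, so treat this as a hard lower bound.
target_annual = 100e9
monthly_price = 200
subscribers = target_annual / (monthly_price * 12)
print(f"{subscribers:,.0f} subscribers")   # about 41.7 million
```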

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          6
          ·
          2 days ago

          Maybe it’ll be able to come up with coherent naming conventions for their products. That would be revolutionary

        • brie@programming.dev
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 days ago

          AI is already running all software companies as the principal growth philosophy, but that’s like saying that gold used to run Colorado and California in the 1800s. The executives have no choice at all but to go all in on AI now.

        • kautau@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          2 days ago

          That’s basically Neuromancer, and at this point it seems that big tech companies are reading dystopian cyberpunk literature as next-gen business advice books, so you’re certainly right

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      4
      ·
      2 days ago

      If they actually achieve AGI I don’t understand what money would even mean anymore. It is essentially just a mechanism for getting people to do things they don’t otherwise want to do. If the AI can do it just as well as the human, but for free other than the electricity costs, why the hell would you pay a human to do it?

      It’s like saving up money in case of nuclear war. There are a few particular moments in history where the state of the world on the far side of the event is so different from the world on this side of the event that there’s no point making any kind of plans based on today’s systems.

      • ArbitraryValue@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        2
        ·
        edit-2
        2 days ago

        I see what you’re saying and I agree that if, for example, we get an AI god then money won’t be useful. However, that’s not the only possible near-future outcome and if the world as we know it doesn’t end then money can be used by AIs to get other AIs to do something they don’t otherwise want to do.

        • Aussiemandeus@aussie.zone
          link
          fedilink
          English
          arrow-up
          1
          ·
          16 hours ago

          There are still jobs that require hands.

          AGI, hell, even regular “AI” (an LLM) trained on all the automotive repair books, should be able to diagnose a fault, but it still needs a human to go repair the vehicle.

          On-board diagnostics are smart; they can tell you the rear tail lights are in open circuit, etc. What they can’t tell you is that the back half of the car was ripped off by a train and a set of bulbs just won’t cut it.

        • qprimed@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 days ago

          hence the worldcoin stuff - not just machine to machine. allows “ai” to perform real world action through human incentivization. entirely disturbing if you ask me.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          2 days ago

          My point is if AI takes over all of the work there won’t be any jobs for humans. So they won’t have any money.

          So who are all the AI companies going to sell their products to? The whole system doesn’t work in an AI future, and we don’t need AI gods to be able to do our jobs; after all, most humans are idiots.

          Also AI doesn’t need motivation.

          • ArbitraryValue@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            edit-2
            2 days ago

            Trade (facilitated by money) doesn’t require humans. It just requires multiple agents and positive-sum interactions. Imagine a company, run by an AI, which makes robots. It sells those robots to another company, also run by an AI, which mines metal (the robots do the mining). The robots are made from metal the first company buys from the second one. The first AI gets to make more robots than it otherwise would, the second AI gets to mine more metal than it otherwise would, and so both are better off.

            They don’t care that they’re stuck in a loop, the same way humans keep creating new humans to create new humans to create new humans and so forth.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    1
    ·
    2 days ago

    So they don’t actually have a definition of AGI; they just have a point at which they’re going to announce it, regardless of whether it actually is AGI or not.

    Great.