• blarghly@lemmy.world · 7 months ago

    When people talk about AI taking off exponentially, usually they are talking about the AI using its intelligence to make intelligence-enhancing modifications to itself. We are very much not there yet, and need human coaching most of the way.

    At the same time, no technology ever really follows a particular trend line. It advances in starts and stops with the ebbs and flows of interest, funding, novel ideas, and the discovered limits of nature. We can try to make projections - but these are very often very wrong, because the thing about the future is that it hasn’t happened yet.

    • haui@lemmy.giftedmc.com · 7 months ago

      Although I agree with the general idea, AI (as in LLMs) is a pipe dream. It’s a non-product: another digital product that hypes investors up and produces “value” instead of value.

      • kescusay@lemmy.world · 7 months ago

        Not true. Not entirely false, but not true.

        Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.

        The problem is that people think LLMs are actual AI. They’re not.

        My favorite example - and the one I often cite to explain why companies that try to fire all their developers are run by idiots - is the capacity for joined-up thinking.

        Consider these two facts:

        1. Humans are mammals.
        2. Humans build dams.

        Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers, then - given a moment’s thought - could accurately answer that humans do as well.

        Here’s how it goes with Gemini right now:

        Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.

        Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.

        And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
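        If you want to run the same check yourself, here’s a rough sketch of how you could automate it against whatever model you have access to. `ask_llm` is a hypothetical placeholder, not any particular vendor’s API - the point is just to ask the question cold and see whether humans show up in the answer alongside beavers.

        ```python
        # Minimal sketch of the "joined-up thinking" check described above.
        # ask_llm() is a hypothetical placeholder - wire it to whichever client you use.

        def ask_llm(prompt: str) -> str:
            """Send `prompt` to your chat model of choice and return its text reply."""
            raise NotImplementedError("plug in your own LLM client here")

        def dam_building_check() -> bool:
            """True if the model mentions humans alongside beavers, unprompted."""
            answer = ask_llm("Can you list all the dam-building mammals for me?").lower()
            mentions_beaver = "beaver" in answer
            mentions_human = any(w in answer for w in ("human", "homo sapiens", "people"))
            return mentions_beaver and mentions_human

        if __name__ == "__main__":
            print("joined-up thinking:", "pass" if dam_building_check() else "fail")
        ```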

        • haui@lemmy.giftedmc.com · 7 months ago

          We’ll have to agree to disagree then. My hype argument perfectly matches your point about people wrongly perceiving LLMs as AI, but my point goes further.

          AI is a search engine on steroids, with all the drawbacks. It produces no more accurate results, has no more information, and does nothing except take away the research effort, which is proven to make people dumber. More importantly, LLMs gobble up energy like crazy and need rare resources which are taken from exploited countries. In addition to that, they are a privacy nightmare and are proven to systematically harm small creators through breach of intellectual property, which is especially brutal for them.

          So no, there are no redeeming qualities in LLMs in their current form. They should be outlawed immediately and, at most, used locally in specific cases.

        • Zexks@lemmy.world · 7 months ago

          The biggest problem with LLMs as most people currently use them is their inability to mull things over: to hold multiple trains of thought, intersect them, and fork off combinations of those thoughts. When you ask a question, the model gets exactly one chance to think up a response and no chance to review or reconsider that thought. There are models that are allowed to do this, but they’re generally behind paywalls, because even the simplest of questions can lead into ridiculous tangents without proper guidelines in the prompt. Here’s the ‘Advanced reasoning’ model’s response to the same question:

          Mammals known to build dams

          | # | Mammal (scientific name) | Dam-building habit | Key reference |
          |---|--------------------------|--------------------|---------------|
          | 1 | North American beaver (Castor canadensis) | Constructs multi-year stick-and-mud dams on streams and ditches to flood an area deep enough for its lodge and food cache. | |
          | 2 | Eurasian beaver (Castor fiber) | Same engineering instinct as its North-American cousin; creates extensive pond systems across Europe and parts of Asia. | |
          | 3 | Humans (Homo sapiens) | From earthen farm ponds to megaprojects such as Hoover Dam, people build dams for water storage, flood control, power and more. | |

          Why the list is so short

          Beavers are unique. Despite a variety of lodge-building or burrowing rodents (muskrats, nutria, water voles, rakali, etc.), none of them actually dam a watercourse; they rely on natural water levels or on beaver-made ponds.

          No other living mammal species has been documented creating intentional water-blocking structures. (The extinct giant beaver Castoroides probably did not dam rivers, according to paleontological evidence.)

          So, when it comes to true dam-building in the mammal world, it’s essentially a two-species beaver monopoly—plus us.

          https://chatgpt.com/share/683caddc-5944-8009-8e4a-d03bef5933a4

          Also note that this response took considerably more time than a standard response, because the model keeps reviewing its responses. But it’s worthwhile watching its thought process as it builds your answers.
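          For anyone curious what that review loop looks like mechanically, here’s a rough sketch of a draft → critique → revise cycle. This is only an approximation of the idea, not how the hosted reasoning models are actually implemented, and `ask_llm` is again a hypothetical placeholder for your client of choice.

          ```python
          # Rough sketch of a draft -> critique -> revise loop, approximating the
          # "review your own answer" behaviour discussed above. ask_llm() is hypothetical.

          def ask_llm(prompt: str) -> str:
              raise NotImplementedError("plug in your own LLM client here")

          def answer_with_review(question: str, rounds: int = 2) -> str:
              draft = ask_llm(question)
              for _ in range(rounds):
                  critique = ask_llm(
                      f"Question: {question}\nDraft answer: {draft}\n"
                      "List any factual errors or omissions in the draft."
                  )
                  draft = ask_llm(
                      f"Question: {question}\nDraft answer: {draft}\nCritique: {critique}\n"
                      "Rewrite the answer, fixing the issues raised in the critique."
                  )
              return draft
          ```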

    • Clinicallydepressedpoochie@lemmy.world (OP) · 7 months ago

      I do expect advancement to hit a period of exponential growth that quickly surpasses human intelligence, provided it develops the drive to autonomously advance. Whether that is possible is yet to be seen, and that’s kinda my point.

        • Zexks@lemmy.world · 7 months ago

          Here are all 27 U.S. states whose names contain the letter “o”:

          Arizona, California, Colorado, Connecticut, Florida, Georgia, Idaho, Illinois, Iowa, Louisiana, Minnesota, Missouri, Montana, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Rhode Island, South Carolina, South Dakota, Vermont, Washington, Wisconsin, Wyoming

          (That’s 27 states in total.)

          What’s missing?

        • Zexks@lemmy.world · 7 months ago

          No, “they” haven’t, unless you can cite your source. ChatGPT was only released 2.5 years ago, and even OpenAI was saying 5-10 years, with most outside watchers saying 10-15 and real naysayers going out to 25 or more.

  • NegentropicBoy@lemmy.world · 7 months ago

    In the spirit of showerthoughts: I feel the typical LLM is reaching a plateau. The “reasoning” type was a big advance though.

    Companies are putting a lot of effort into handling the big influx of AI requests.

    With the huge resources, both academic and operational, going into AI, we should expect unexpected jumps in power :)

  • Ex Nummis@lemmy.world · 7 months ago

    It’s not anytime soon. It can get like 90% of the way there but those final 10% are the real bitch.

    • WhatAmLemmy@lemmy.world · 7 months ago

      The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1s and 0s. It has no concept of anything but the 1s and 0s in its input data. It has no concept of correlation vs. causation; that’s why it constantly hallucinates (confidently presents erroneous, illogical patterns).

      Turns out finding patterns in 1s and 0s can do some really cool shit, but it’s not intelligence.

      • Monstrosity@lemm.ee · 7 months ago

        This is not necessarily true. While it’s using pattern recognition on a surface level, we’re not entirely sure how AI comes up with its output.

        But beyond that, a lot of talk has centered around a threshold where AI begins training other AI and can improve through iterations. Once that happens, people believe AI will not only improve extremely rapidly, but that we will understand even less of what is happening as AI black boxes train other AI black boxes.

        • Coldcell@sh.itjust.works · 7 months ago

          I can’t quite wrap my head around this. These systems were coded, written by humans to call functions, assign weights, parse data. How do we not know what they’re doing?

          • MangoCats@feddit.it · 7 months ago

            It’s a bit of an “emergent properties” situation - so many things are happening under the hood that they don’t understand exactly how it’s doing what it’s doing, or why one type of mesh performs better on a particular class of problems than another.

            The equations of the Lorenz attractor are simple and well studied, but its output is less than predictable, and even those who study it are at a loss to explain “where it’s going to go next” with any precision.
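            To make that concrete, here’s a tiny sketch (plain Python, nothing fancy) that integrates the Lorenz equations twice from almost identical starting points. Three one-line equations, yet the two runs end up in completely different places.

            ```python
            # The Lorenz system: three simple ODEs whose long-run behaviour is chaotic.
            # Two runs from nearly identical starting points diverge completely.

            def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
                dx = sigma * (y - x)          # dx/dt = sigma * (y - x)
                dy = x * (rho - z) - y        # dy/dt = x * (rho - z) - y
                dz = x * y - beta * z         # dz/dt = x * y - beta * z
                return x + dx * dt, y + dy * dt, z + dz * dt

            def run(x, y, z, steps=5000):
                for _ in range(steps):
                    x, y, z = lorenz_step(x, y, z)
                return x, y, z

            print(run(1.0, 1.0, 1.0))
            print(run(1.0, 1.0, 1.000001))  # perturb one coordinate by one part in a million
            # After ~50 simulated time units the two trajectories bear no resemblance.
            ```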

          • The_Decryptor@aussie.zone · 7 months ago

            Yeah, there’s a mysticism that’s sprung up around LLMs, as if they’re some magic black box rather than a well-understood construct - to the point where you can buy books from Amazon on how to write one from scratch.

            It’s not like ChatGPT or Claude appeared from nowhere, the people who built them do talks about them all the time.

            • Monstrosity@lemm.ee · 7 months ago

              What a load of horseshit lol

              EDIT: Sorry, I’ll expand. When AI researchers give talks about how AI works, they say things like, “on a fundamental level, we don’t actually know what’s going on.”

              Also, even if there are books available about how to write an AI from scratch(?) somehow, the basic understanding of what happens deep within the neural networks is still a “magic black box”. They’ll crack it open eventually, but not yet.

              The idea people have that AI is simple and stupid and a passing fad is naive.

              • The_Decryptor@aussie.zone · 7 months ago

                If these AI researchers really have no idea how these things work, then how can they possibly improve the models or techniques?

                Like how they now claim that, after upgrades, these LLMs can “reason” about problems - how did they actually go and add that if it’s a black box?

          • Ex Nummis@lemmy.world · 7 months ago

            Same way anesthesiology works. We don’t know. We know how to sedate people but we have no idea why it works. AI is much the same. That doesn’t mean it’s sentient yet but to call it merely a text predictor is also selling it short. It’s a black box under the hood.

            • Coldcell@sh.itjust.works · 7 months ago

              Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what gets us this misattribution of ‘AI’ in the first place. Currently it is just glorified auto-correct working off statistical data about human language. I’m still not sure how a written program can have a voodoo spooky black box, doing things we don’t understand, as a core part of it.

              • irmoz@lemmy.world · 7 months ago

                The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. It uses extremely fuzzy logic to compute the answer to “What is the closest planet to the Sun?” We can’t know which nodes in the neural network were triggered or in what order, so we can’t precisely say how the answer was computed.

        • MangoCats@feddit.it · 7 months ago

          Steam locomotive operators would notice some behaviors of their machines that they couldn’t entirely explain. They were out there, shoveling the coal, filling the boilers, and turning the valves but some aspects of how the engines performed - why they would run stronger in some circumstances than others - were a mystery to the men on the front lines. Decades later, intense theoretical study could explain most of the observed phenomena by things like local boiling inside the boiler insulating the surface against heat transfer from the firebox, etc. but at the time when the tech was new: it was just a mystery.

          Most of the “mysteries” of AI are similarly due to the fact that the operators are “vibe coding” - they go through the motions and they see what comes out. They’re focused on their objectives, the input-output transform, and most of them aren’t too caught up in the how and why of what it is doing.

          People will study the how and why, but like any new tech, their understanding is going to lag behind the actions of the doers who are out there breaking new ground.

      • MangoCats@feddit.it · 7 months ago

        Distill intelligence - what is it, really? Predicting what comes next based on… patterns. Patterns you learn in life, from experience, from books, from genetic memories, but that’s all your intelligence is too: pattern recognition / prediction.

        As massive as current AI systems are, consider that you have ~86 Billion neurons in your head, devices that evolved over the span of billions of years ultimately enabling you to survive in a competitive world with trillions of other living creatures, eating without being eaten at least long enough to reproduce, back and back and back for millions of generations.

        Current AI is a bunch of highly simplified computers with up to hundreds of thousands of cores. Just as planes fly faster than birds, AI can do some tricks better than human brains, but mostly: not.

  • Aatube@kbin.melroy.org · 7 months ago

    ask the nearest school where nearly all students have smartphones whether it has taken off

          • LostXOR@fedia.io · 7 months ago

            People have been cheating on their homework as long as homework has existed. AI is just the latest method to do so. It’s easier to cheat with than previous methods, but that’s been true for every new method of cheating.

            • UnderpantsWeevil@lemmy.world · 7 months ago

              It’s easier to cheat with than previous methods

              Even that isn’t true. What we have is a decay in the tools we used to cheat with. AI isn’t better, it’s just crowding out the alternatives.

            • UnderpantsWeevil@lemmy.world · 7 months ago

              not essays

              Yes, essays. Pre-written essays on subjects that you could plagiarize line for line.

              not free help like this

              Yes, free help like this, on message boards and blogs and YouTube channels and chat groups. Very likely better help, too, since you’re getting the information from subject matter experts rather than some random amalgamation of text shoved into a language model output template.

              • Aatube@kbin.melroy.org · 7 months ago

                Pre-written essays on subjects that you could plagiarize line for line.

                In my time these were absolutely awful and vacuous circumlocution. Not to mention TurnItIn.

                Yes, free help like this, on message boards and blogs and YouTube channels and chat groups.

                You have to pay for the message boards that are actually useful and already have your question, like Chegg. The only comparable free platform I used only worked on Chinese homework. For anything else you’d have to post your question and wait about an hour. ChatGPT takes one minute. And its quality today is way better than you think. You can just take a photo of homework and it’d give you the right answers 90% of the time, with explanations.

                • UnderpantsWeevil@lemmy.world · 7 months ago

                  You have to pay for the messageboards that are actually useful and already have your question like Chegg.

                  A pittance, compared to what you were paying to attend the courses themselves.

                  And its quality today is way better than you think. You can just take a photo of homework and it’d give you the right answers 90% of the time and with explanations.

                  For freshman engineering problems, maybe. Even then, the longer and more complicated the problem, the more the success rate drops off. It’s not much of a cheat when it only gets you the easy answers. Those aren’t the questions that eat up all your time.

  • AdrianTheFrog@lemmy.world · 7 months ago

    Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore’s law). AI is being carried along with that, and still making occasional gains on top of it. The thing with exponential growth is that it doesn’t necessarily need to feel fast: it’s always growing at the same rate percentage-wise, definitionally.
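    As a back-of-the-envelope illustration of why constant-percentage growth doesn’t feel fast early on (using the classic ~2-year doubling rule of thumb for Moore’s law):

    ```python
    # Constant-percentage (exponential) growth: the doubling period never changes,
    # but the absolute increments only look dramatic late in the curve.

    base = 1_000          # arbitrary starting count
    doubling_years = 2    # classic Moore's-law rule of thumb

    for year in range(0, 41, 10):
        count = base * 2 ** (year / doubling_years)
        print(f"year {year:2d}: {count:,.0f}")

    # year  0: 1,000
    # year 10: 32,000
    # year 20: 1,024,000
    # year 30: 32,768,000
    # year 40: 1,048,576,000
    ```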

    • Inucune@lemmy.world · 7 months ago

      We once again congratulate software engineers for nullifying 40 years of hardware improvements.

    • cabb@lemmy.dbzer0.com · 7 months ago

      Moore’s law is kinda still in effect, depending on your definition of Moore’s law. However, Dennard scaling is not, so computer performance isn’t advancing like it used to.

      • utopiah@lemmy.world · 7 months ago

        Moore’s law is kinda still in effect, depending on your definition of Moore’s law.

        Sounds like the goalposts are moving faster than the number of transistors in an integrated circuit.

  • Mose13@lemmy.world · 7 months ago

    It has taken off exponentially. It’s exponentially annoying that it’s being added to literally everything.

  • utopiah@lemmy.world · 7 months ago

    LOL… you did make me chuckle.

    Haven’t we been 18 months away from developers being replaced by AI… for a few years now?

    Of course “AI”, even loosely defined, has progressed a lot, and it is genuinely impressive (even though the actual use case for most of the hype, i.e. LLMs and GenAI, is mostly lazier search, more efficient personalized spam & scam text, or impersonation), but exponential growth is not sustainable. It’s a marketing term to keep fueling the hype.

    That’s despite so many resources, namely R&D and data centers, being poured in… and yet there is no “GPT5” or anything that most people use on a daily basis for anything “productive” except unreliable summarization or STT (both of which have had plenty of tools for decades).

    So… yeah, it’s a slow takeoff, as expected. shrug

  • Xaphanos@lemmy.world · 7 months ago

    A major bottleneck is power capacity. It is very difficult to find 50 MW+ (sometimes hundreds) of capacity available at any site. It has to be built out. That involves a lot of red tape, government contracts, large transformers, contractors, etc. The current backlog on new transformers at that scale is years. Even Google and Microsoft can’t build, so they come to my company for infrastructure - as we already have 400 MW in use and triple that already on contract. Further, Nvidia only makes so many chips a month. You can’t install them faster than they make them.
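    For a sense of scale, a hedged back-of-the-envelope (the ~0.7 kW per accelerator and ~1.4 kW all-in figures, including servers, networking and cooling, are my assumptions, not numbers from the comment above):

    ```python
    # Rough, assumption-laden estimate of how many accelerators a 50 MW site feeds.
    # ~0.7 kW is an assumed chip-only draw for a high-end accelerator; ~1.4 kW is an
    # assumed all-in figure once server, networking and cooling overhead are added.

    site_mw = 50
    chip_only_kw = 0.7
    all_in_kw = 1.4

    print(f"chip-only: ~{site_mw * 1000 / chip_only_kw:,.0f} accelerators")  # prints ~71,429
    print(f"all-in:    ~{site_mw * 1000 / all_in_kw:,.0f} accelerators")     # prints ~35,714
    ```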

      • themurphy@lemmy.ml · 7 months ago

        And it’s pretty great at it.

        AI’s greatest use case is not LLMs; people treat it like that because it’s the only thing we can relate to.

        AI is so much better at many other tasks.

      • daniskarma@lemmy.dbzer0.com · 7 months ago

        Maybe we are statistical engines too.

        When I hear people talk, they are also just repeating the most common sentences they’ve heard elsewhere anyway.

        • justOnePersistentKbinPlease@fedia.io · 7 months ago

          Need it to not exponentially degrade when AI content is fed in.

          Need creativity to be more than random chance deviations from the statistically average result in a mostly stolen dataset taken from actual humans.

      • all_i_see@lemy.lol · 7 months ago

        Humans don’t actually think either; we’re just electricity jumping across neural connections that formed through repeated association. Add to that there’s no free will, and you start to see how “think” is an immeasurable metric.

  • FriendOfDeSoto@startrek.website · 7 months ago

    We humans always underestimate the time it actually takes for a tech to change the world. We should be travelling in self-flying cars and riding hoverboards already, but we’re not.

    The disseminators of so-called AI have a vested interest in making it seem like it’s the magical solution to all our problems. The tech press seems to have had a good swig of the Kool-Aid as well, overall. We have such a warped perception of new tech; we always see it as magical beans. The internet will democratize the world - hasn’t happened; I think we’ve actually regressed as a planet. Fully self-driving cars will happen by 2020 - looks at calendar. Blockchain will revolutionize everything - it really only provided a way for fraudsters, ransomware dicks, and drug dealers to get paid. Now it’s so-called AI.

    I think the history books will at some point summarize the introduction of so-called AI as OpenAI taking a gamble with half-baked tech, provoking its panicked competitors into a half-baked game of one-upmanship. We arrived at the plateau in the hockey-stick graph in record time, burning an incredible amount of resources, both fiscal and earthly. Despite massive influences on the labor market and creative industries, it turned out to be a fart in the wind, because Skynet happened 100 years later. I’m guessing 100, so it’s probably much later.

    • MangoCats@feddit.it · 7 months ago

      AI has been advancing exponentially; it’s just a very small exponent.

      In the 1980s, it was “five years out” - and it more or less has been that until the past 5-10 years. It’s moving much faster now, but still much slower than people expect.

      People think that because they saw HAL in the movie 2001 back in 1968, it should have been reality by the 1970s, or certainly by 2010.

      Some things move faster than people expect, like the death of newspapers and the first class letter, but most move slower.

  • neon_nova@lemmy.dbzer0.com · 7 months ago

    I think we might not be seeing all the advancements as they are made.

    Google just showed off AI video with sound. You can use it if you subscribe to their $250/month plan. That is quite expensive.

    But if you have strong enough hardware, you can generate your own without sound.

    I think that is a pretty huge advancement in the past year or so.

    I think that focus is being put on optimizing these current things and making small improvements to quality.

    Just give it a few years and you will not even need your webcam to be on. You could just use an AI avatar that looks and sounds just like you, running locally on your own computer. You could just type what you want to say or pass through audio. I think the tech to do this kind of stuff is basically there; it just needs to be refined and optimized. Computers in the coming years will offer more and more power to let you run this stuff.

    • JeremyHuntQW12@lemmy.world · 7 months ago

      How is that an advance? Computers have been able to speak since the 1970s. They were already producing text.

  • Etterra@discuss.online · 7 months ago

    How do you know it hasn’t, and is just lying low? I for one welcome our benevolent and merciful machine overlord.

  • CheeseNoodle@lemmy.world · 7 months ago

    IIRC there are mathematical reasons why AI can’t actually become exponentially more intelligent? There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware, and we’re already pretty close to that theoretical limit. For an AI to go singularity, we would have to build it with enough initial intelligence that it could acquire both the resources and the information with which to improve itself and start the exponential cycle.
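    One commonly cited hard limit of that kind is Landauer’s bound: erasing a single bit of information dissipates at least k·T·ln 2 of energy. A quick back-of-the-envelope, assuming room temperature:

    ```python
    import math

    # Landauer's bound: erasing one bit costs at least k_B * T * ln(2) joules.
    # One example of the kind of hard physical limit on information processing.

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # roughly room temperature, K

    e_bit = k_B * T * math.log(2)                 # minimum energy per erased bit
    print(f"{e_bit:.2e} J per bit")               # ~2.9e-21 J
    print(f"{1.0 / e_bit:.2e} bit erasures/s per watt at the limit")  # ~3.5e20
    ```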