• chunes@lemmy.world · 5 months ago · +5/-7

    Laugh it up while you can.

    We’re in the “haha it can’t draw hands!” phase of coding.

    • Soleos@lemmy.world · 5 months ago · +4/-2

      AI bad. But also, video AI started with Will Smith eating spaghetti just a couple of years ago.

      We keep talking about AI doing complex tasks right now and its limitations, then extrapolating its development linearly. But it's not linear, and it's not in one direction; it's an exponential and rhizomatic process. Humans always over-estimate (ignoring hard limits) and under-estimate (thinking linearly) how these things go. It happened with rocket ships, with the internet/social media, and now with AI.

    • GreenKnight23@lemmy.world · 5 months ago · +10/-3

      someone drank the koolaid.

      LLMs will never code for two reasons.

      one, because they only regurgitate facsimiles of code. this is because the models are trained to ingest content and output an interpretation of that collected content.

      software development is more than that and requires strategic thought and conceptualization, both of which are decades away from AI at best.

      two, because the prevalence of LLM-generated code is destroying the training data used to build models. think of it like making a copy of a copy of a copy, et cetera.

      the more popular it becomes the worse the training data becomes. the worse the training data becomes the weaker the model. the weaker the model, the less likely it will see any real use.

      so yeah. we’re about 100 years from the whole “it can’t draw its hands” stage because it doesn’t even know what hands are.

      • chunes@lemmy.world · 4 months ago (edited) · +3/-4

        This is just your ego talking. You can’t stand the idea that a computer could be better than you at something you devoted your life to. You’re not special. Coding is not special. It happened to artists, chess players, etc. It’ll happen to us too.

        I’ll listen to experts who study the topic over an internet rando. AI model capabilities as yet show no signs of slowing their exponential growth.

        • GreenKnight23@lemmy.world · 4 months ago · +6/-3

          you’re a fool. chess has rules and is boxed into those rules. of course it’s prime for AI.

          art is subjective, I don’t see the appeal personally, but I’m more of a baroque or renaissance fan.

          I doubt you'll do this, but if you believe what you say, then this can only prove you right and me wrong.

          what is this?

          [image: a photo of a hand]

          once you classify it, why did you classify it that way? is it because you personally have one? did you have to rule out what it isn’t before you could identify what it could be? did you compare it to other instances of similar subjects?

          now, try to classify it as someone who doesn’t have these. someone who has never seen one before. someone who hasn’t any idea what it could be used for. how would you identify what it is? how it’s used? are there more than one?

          now, how does AI classify it? does it comprehend what it is, even though it lacks a physical body? can it understand what it’s used for? how it feels to have one?

          my point is, AI is at least 100 years away from instinctively knowing what a hand is. I doubt you even had to think about it; your brain automatically identified it as a hand, one of the most basic and fundamentally important parts of being human.

          if AI cannot even instinctively identify a hand as a hand, it's not possible for it to write software, because writing is based on human cognition and is entirely driven by instinct.

          like a master sculptor, we carve the words out of the ether to meet not only the stated requirements, but the unseen requirements that lie beneath the surface and are only known through nuance. just like the sculptor who has to follow the veins within the marble.

          the AI you know today cannot do that, and frankly the hardware of today can't even support AI in achieving that goal. it never will, because people like you promote a half-baked toy as a tool to replace nuanced human skills, only for this toy to poison-pill the only training data available, data that was created through those nuanced human skills.

          I’ll just add, I may be an internet rando to you but you and your source are just randos to me. I’m speaking from my personal experience in writing software for over 25 years along with cleaning up all this AI code bullshit for at least two years.

          AI cannot code. AI writes regurgitated facsimiles of software based on its limited dataset. it's impossible for it to make decisions based on human nuance; it can only make calculated assumptions based on the available dataset.

          I don’t know how much clearer I have to be at how limited AI is.

        • wischi@programming.dev · 4 months ago (edited) · +4

          Coding isn't special, you are right, but it's a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs seem knowledgeable because they memorized a lot of the data and patterns in their training data, but they never learned to think from it. That's why LLMs can't replace humans.

          That certainly does not mean that software can't be smarter than humans. It will be, and it's just a matter of time, but to get there we'll likely need AGI first.

          To show you that LLMs can't think, try to play ASCII tic tac toe (XXO) against any of those models. They are completely dumb, even though they “saw” the entire Wikipedia article during training: how the game works, that it's a solved game, the different strategies, and how to consistently force a draw. Still, they can't do it. They lose most games against my four-year-old niece, and she doesn't even play perfect tic tac toe.
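
          (If you want to referee such a game yourself, here is a minimal sketch in plain Python, no model API involved; you relay the model's moves from the chat window by hand:)

          WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

          def winner(b):
              # returns "X" or "O" if a line is complete, else None
              return next((b[x] for x, y, z in WINS if b[x] == b[y] == b[z] != "."), None)

          board = ["."] * 9
          turn = "X"
          while winner(board) is None and "." in board:
              print("\n".join(" ".join(board[i:i+3]) for i in (0, 3, 6)))
              move = int(input(f"{turn} plays (0-8): "))  # type the LLM's move here
              assert board[move] == ".", "illegal move - an instant loss"
              board[move] = turn
              turn = "O" if turn == "X" else "X"
          print("result:", winner(board) or "draw")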

          I wouldn't trust anything that is claimed to do thinking tasks, but can't even beat my niece at tic tac toe, with writing firmware for cars or airplanes.

          LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can't think. For now, at least; we'll likely need different architectures for real thinking models than LLMs have.

        • CanadaPlus@lemmy.sdf.org · 4 months ago (edited) · +2

          You know, I’d be interested to know what the critical size you can get to with that approach is before it becomes useless.

          • ByteOnBikes@slrpnk.net · 4 months ago (edited) · +4

            It can get pretty bad quickly, even in a small project of only 15-20 files. I've been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.

            And while incredibly impressive how it’s creating all the steps, it then goes into chaos mode where it will start ignoring all the rules. It’ll start changing tests, start pulling in random libraries, not at all thinking holistically about how everything fits together.

            Then you try to reel it in, and it continues to go rampant. And for me, that’s when I either take the wheel or roll back.

            I highly recommend every programmer watch it in action.

            • Blackmist@feddit.uk · 4 months ago · +1

              I’d rather recommend every CEO see it in action…

              They’re the ones who would be cock-a-hoop to replace us and our expensive wages with kids and bots.

              When they’re sitting around rocking back and forth and everything is on fire like that Community GIF, they’ll find my consultancy fees to be quite a bit higher than my wages used to be.

            • Aeri@lemmy.world · 4 months ago · +2/-1

              I think Generative AI is a genuinely promising and novel tool with real, valuable applications. To appreciate it, however, you have to mentally compartmentalize the irresponsible, low-effort ways people mostly use it, because it's very easy to churn that stuff out, so it's most of what you see when you hear “Generative AI”, and it's become the technology's reputation…

              Like I’ve had interesting “conversations” with Gemini and ChatGPT, I’ve actually used them to solve problems. But I would never put it in charge of anything critically important that I couldn’t double check against real data if I sensed the faintest hint of a problem.

              I also don’t think it’s ready for primetime. Does it deserve to be researched and innovated upon? Absolutely, but like, by a few nerds who manage to get it running, and universities training it on data they have a license to use. Not “Crammed into every single technology object on earth for no real reason”.

              I have brain-not-very-good-sometimes disease, and I consider being able to “talk” to a “person” who can get me out of a creative rut, just by exploring my own feelings a bit, to be valuable. GPT can actually listen to music, which surprised me. I consider it scientifically interesting. It doesn't get bored or angry at you unless you, like, tell it to? I've asked it for help with a creative task in the past and not actually used any of its suggestions at all, but being able to talk it over with someone (when a real human who cared was not available) was a valuable resource.

              To be clear I pretty much just use it as a fancy chatbot and don’t like, just copy paste its output like some people do.

            • CanadaPlus@lemmy.sdf.org · 4 months ago (edited) · +2

              Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).
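
              (A rough way to check that hypothesis on your own project; a sketch using the common ~4-characters-per-token heuristic, which is an assumption, since real tokenizers vary by model:)

              from pathlib import Path

              CONTEXT_WINDOW = 128_000  # tokens; model-dependent assumption

              def estimate_tokens(root, suffix=".py"):
                  # crude estimate: total characters across source files / 4
                  chars = sum(len(p.read_text(errors="ignore"))
                              for p in Path(root).rglob(f"*{suffix}"))
                  return chars // 4

              total = estimate_tokens("my_project")  # hypothetical project path
              print(f"~{total} tokens", "(fits)" if total <= CONTEXT_WINDOW else "(overflows)")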

  • Owl@lemm.ee · 5 months ago · +9/-2

    well, it only took 2 years to go from the cursed will smith eating spaghetti video to veo3 which can make completely lifelike videos with audio. so who knows what the future holds

    • Trainguyrom@reddthat.com · 5 months ago · +5/-1

      The cursed Will Smith eating spaghetti wasn't the best video AI model available at the time, just what was available for consumers to run on their own hardware. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.

      • wischi@programming.dev · 4 months ago · +1

        But wouldn't your point still be true today? The best AI video models today would be the ones that are not available to consumers.

        • Trainguyrom@reddthat.com · 4 months ago · +1

          Probably still true, but I've not been paying close attention to the AI market in the last couple of years. The point I was trying to make was that it's an apples-to-oranges comparison.

    • Mose13@lemmy.world · 5 months ago (edited) · +11

      Hot take: today's AI videos are cursed. Bring back Will Smith spaghetti. Those were the good old days.

    • wischi@programming.dev · 4 months ago (edited) · +2/-3

      There actually isn’t really any doubt that AI (especially AGI) will surpass humans on all thinking tasks unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.

    • zurohki@aussie.zone · 5 months ago · +13/-3

      It generates an answer that looks correct. Actual correctness is accidental. That’s how you wind up with documents with references that don’t exist, it just knows what references look like.

      • snooggums@lemmy.world · 5 months ago (edited) · +12/-3

        It doesn’t ‘know’ anything. It is glorified text autocomplete.

        The current AI is intelligent like how Hoverboards hover.

            • capybara@lemm.ee · 5 months ago · +3/-1

              You could claim that it knows the pattern of how references are formatted, depending on what you mean by the word know. Therefore, 100% uninteresting discussion of semantics.

              • irmoz@lemmy.world · 5 months ago (edited) · +4/-3

                The theory of knowledge (epistemology) is a distinct and storied area of philosophy, not a debate about semantics.

                There remains to this day strong philosophical debate on how we can be sure we really “know” anything at all, and thought experiments such as the Chinese Room illustrate that “knowing” is far, far more complex than we might believe.

                For instance, is it simply following a set path like a river in a gorge? Is it ever actually “considering” anything, or just doing what it’s told?

                • capybara@lemm.ee · 5 months ago · +2/-1

                  No one cares about the definition of knowledge to this extent except for philosophers. The person who originally used the word “know” most definitely didn't give a single shit about the philosophical perspective. Therefore, you shitting yourself over a word not being used exactly as you'd like, instead of understanding its usage in context, is very much semantics.

        • malin@thelemmy.club · 5 months ago (edited) · +5/-9

          This is a philosophical discussion and I doubt you are educated or experienced enough to contribute anything worthwhile to it.

          • ItsMeForRealNow@lemmy.world · 4 months ago · +1/-1

            Dude… the point is I don’t have to be. I just have to be human and use it. If it sucks, I am gonna say that.

            • malin@thelemmy.club · 5 months ago (edited) · +3/-4

              I can tell you’re a member of the next generation.

              Gonna ignore you now.

              • snooggums@lemmy.world · 5 months ago · +1/-2

                At first I thought that might be a Pepsi reference, but you are probably too young to know about that.

          • frezik@midwest.social · 5 months ago · +5/-1

            Insulting, but also correct. What “knowing” something even means has a long philosophical history.

            • snooggums@lemmy.world · 5 months ago · +3/-3

              Trying to treat the discussion as a philosophical one is giving more nuance to ‘knowing’ than it deserves. An LLM can spit out a sentence that looks like it knows something, but it is just pattern-matching word-association frequencies, which is mimicry, not knowledge.

              • irmoz@lemmy.world · 4 months ago (edited) · +1/-1

                I’ll preface by saying I agree that AI doesn’t really “know” anything and is just a randomised Chinese Room. However…

                Acting like the entire history of the philosophy of knowledge is just some attempt to make “knowing” seem more nuanced is extremely arrogant. The question of what knowledge is is not just relevant to the discussion of AI; it is fundamental to understanding how our own minds work. When you form arguments about how AI doesn't know things, you're basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can't just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can't, or worse, discover that our assumptions about knowledge, and perhaps even our own abilities, are flawed.

                • snooggums@lemmy.world · 4 months ago (edited) · +1

                  Acting like the entire history of the philosophy of knowledge is just some attempt make “knowing” seem more nuanced is extremely arrogant.

                  That is not what I said. In fact, it is the opposite of what I said.

                  I said that treating the discussion of LLMs as a philosophical one is giving ‘knowing’ in the discussion of LLMs more nuance than it deserves.

    • Match!!@pawb.social · 5 months ago · +39/-1

      llms are systems that output human-readable natural language answers, not true answers

  • LanguageIsCool@lemmy.world · 5 months ago · +44

    I’ve heard that a Claude 4 model generating code for an infinite amount of time will eventually simulate a monkey typing out Shakespeare

    • MonkeMischief@lemmy.today · 5 months ago · +12

      It will have consumed the gigawatt-hours of a few suns and all the moisture in our solar system, but by Jeeves, we'll get there!

      …but it won’t be that impressive once we remember concepts like “monkey, typing, Shakespeare” were already embedded in the training data.

  • Pennomi@lemmy.world · 5 months ago · +74/-9

    To be fair, if I wrote 3000 new lines of code in one shot, it probably wouldn’t run either.

    LLMs are good for simple bits of logic under around 200 lines of code, or things that are strictly boilerplate. People who are trying to force it to do things beyond that are just being silly.

    • Opisek@lemmy.world · 5 months ago · +8

      Perhaps 5 LOC. Maybe 3. And even then I'll analyze every single character it wrote. And then I will in fact find bugs. Most often it hallucinates functions that would be fantastic to use - if they existed.

      • Buddahriffic@lemmy.world · 5 months ago · +6

        My guess is there's tons of pseudocode out there that looks like a real language but has placeholder functions that don't exist, and the LLM noticed the pattern to the point where it just makes up functions, not realizing they need to be implemented (because LLMs don't realize things; they just pattern-match very complex patterns).

    • Avicenna@lemmy.world · 4 months ago · +5

      I am with you on this one. It is also very helpful for argument-heavy libraries like plotly. If I ask a simple question like “in plotly, how do I do this and that to the xaxis”, it generally gives correct answers, saving me 5-10 minutes of internet research or reading documentation for functions with 1000 inputs. I even managed to get it to render a simple scene of a cloud of points, with some interactivity, in three.js after about 30 minutes of back and forth. Not knowing much JavaScript, that would have taken me at least a couple of hours. So yeah, it can be useful as an assistant to someone who already knows coding (so the person can vet and debug the code).
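
      (For instance, the kind of answer it usually gets right; a sketch using plotly's update_xaxes, with made-up data:)

      import plotly.express as px

      fig = px.scatter(x=[1, 2, 3, 4], y=[10, 11, 12, 13])
      # the typical "do this and that to the xaxis" answer:
      fig.update_xaxes(title_text="time (s)", type="log", showgrid=False)
      fig.show()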

      Though if you weigh the pros and cons of how LLMs are used (tons of fake internet garbage, tons of energy used, very convincing disinformation bots), I am not convinced the benefits are worth the damage.

        • Avicenna@lemmy.world · 4 months ago (edited) · +2

          If you do it through AI you can still learn. After all, I go through the code to understand what is going on. And for not-so-complex tasks, LLMs are good at commenting the code (though they can bullshit from time to time, so you have to approach it critically).

          But anyway, the stuff I ask LLMs for is generally just one-off tasks. If I need to use something more frequently, I prefer reading up on it for a more in-depth understanding.

        • wischi@programming.dev · 5 months ago · +9/-3

          Play ASCII tic tac toe against 4o a few times. A model that can’t even draw a tic tac toe game consistently shouldn’t write production code.

        • Boomkop3@reddthat.com · 4 months ago (edited) · +8/-3

          I tried. It can't get through four lines without messing up, unless I give it tasks so stupendously simple that I'm faster typing them myself while watching TV.

          • Sl00k@programming.dev · 4 months ago · +1

            Four lines? Let’s have realistic discussions, you’re just intentionally arguing in bad faith or extremely bad at prompting AI.

            • Boomkop3@reddthat.com · 4 months ago · +1

              You can prove your point easily: show us a prompt that produces a decent amount of code that isn't stupidly simple or so common that I'd just copy-paste the first Google result.

              • Sl00k@programming.dev · 4 months ago · +1

                I have nothing to prove to you. If you wish to keep doing everything by hand, that's fine.

                But there are plenty of engineers, L3 and beyond, including myself, using this daily to lighten their workload. Acting like that isn't the case is arguing in bad faith, or means you don't work in the industry.

                • Boomkop3@reddthat.com · 4 months ago · +1

                  I do use it, it’s handy for some sloppy css for example. Emphasis on sloppy. I was kinda hoping you actually had something there

      • Pennomi@lemmy.world · 5 months ago (edited) · +38/-10

        Uh yeah, like all the time. Anyone who says otherwise really hasn’t tried recently. I know it’s a meme that AI can’t code (and still in many cases that’s true, eg. I don’t have the AI do anything with OpenCV or complex math) but it’s very routine these days for common use cases like web development.

        • GreenMartian@lemmy.dbzer0.com · 5 months ago · +8

          They have been pretty good on popular technologies like python & web development.

          I tried to do Kotlin for Android, and they kept tripping over themselves; it’s hilarious and frustrating at the same time.

          • Pennomi@lemmy.world · 5 months ago · +7/-2

            Not sure what you mean, boilerplate code is one of the things AI is good at.

            Take a straightforward Django project for example. Given a models.py file, AI can easily write the corresponding admin file, or a RESTful API file. That’s generally just tedious boilerplate work that requires no decision making - perfect for an AI.
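
            (A sketch of what that looks like; the model and field names here are made up for illustration, and this only runs inside a Django project:)

            # models.py
            from django.db import models

            class Article(models.Model):
                title = models.CharField(max_length=200)
                published = models.DateTimeField(auto_now_add=True)

            # admin.py - the boilerplate an LLM can reliably derive from models.py
            from django.contrib import admin
            from .models import Article

            @admin.register(Article)
            class ArticleAdmin(admin.ModelAdmin):
                list_display = ("title", "published")
                search_fields = ("title",)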

            More than that and you are probably babysitting the AI so hard that it is faster to just write it yourself.

        • Maalus@lemmy.world · 5 months ago · +17/-1

          I recently tried it for scripting simple things in python for a game. Y'know, change a char's color if they are targeted. It output a shitton of word salad and code about my specific use case in the game's specific scripting jargon.

          It was all based on “Misc.changeHue(player)”, a function that doesn't exist and never has, because the game is unable to color other mobs/players like that for scripting.

          Anything I tried with AI ends up the same way: broken code in a 10-line script, hallucinations and bullshit spewed as the absolute truth. Anything out of the ordinary is met with “yes this can totally be done, this is how”, and “how” doesn't work; after sifting forums / asking devs you find out “sadly that's impossible” or “we don't actually use cpython so libraries don't work like that”, etc.

          • Sl00k@programming.dev · 4 months ago · +1/-2

            It’s possible the library you’re using doesn’t have enough training data attached to it.

            I use AI with Python for data engineering tasks hundreds of lines long, and it nails them frequently.

          • Pennomi@lemmy.world · 5 months ago · +6/-12

            Well yeah, it’s working from an incomplete knowledge of the code base. If you asked a human to do the same they would struggle.

            LLMs work only if they can fit the whole context into their memory, and that means working only in highly limited environments.

            • Maalus@lemmy.world · 5 months ago · +14/-1

              No, a human would just find an API that is publicly available. And the fact that it knew the static class “Misc” means it knows the API. It just hallucinated and responded with bullcrap. The entire concept can be summarized as “I want to color a player's model in GAME using python and SCRIPTING ENGINE”.

    • wischi@programming.dev · 4 months ago (edited) · +15/-4

      Practically no LLM is good at any logic. Try to play ASCII tic tac toe against one. All GPT models lost against my four-year-old niece, and I wouldn't trust her to write production code 🤣

      Once a single model (it doesn't have to be an LLM) can beat Stockfish in chess, AlphaGo in Go, my niece in tic tac toe, and can one-shot (on the surface, scratch-pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

      Just take a look at the dotnet runtime source code, where Microsoft employees currently try to work with Copilot, which writes PRs with errors like forgetting to add files to projects, writing code that doesn't compile, fixing symptoms instead of underlying problems, etc. (just take a look yourself).

      I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will, it's just a matter of time, but state-of-the-art LLMs are basically just extremely good “search engines” or interactive versions of Stack Overflow, not good enough to do real “thinking tasks”.

      • MonkeMischief@lemmy.today · 5 months ago · +6/-1

        extremely good “search engines” or interactive versions of “stack overflow”

        Which is such a decent use of them! I’ve used it on my own hardware a few times just to say “Hey give me a comparison of these things”, or “How would I write a function that does this?” Or “Please explain this more simply…more simply…more simply…”

        I see it as a search engine that connects nodes of concepts together, basically.

        And it’s great for that. And it’s impressive!

        But all the hype monkeys out there are trying to pedestal it like some kind of techno-super-intelligence, completely ignoring what it is good for in favor of “It’ll replace all human coders” fever dreams.

      • Pennomi@lemmy.world · 5 months ago · +9/-8

        Cherry picking the things it doesn’t do well is fine, but you shouldn’t ignore the fact that it DOES do some things easily also.

        Like all tools, use them for what they’re good at.

        • wischi@programming.dev · 5 months ago · +9/-5

          I don’t think it’s cherry picking. Why would I trust a tool with way more complex logic, when it can’t even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons typically requires a lot of planning and thinking and those models clearly don’t have the capability to plan and think when they lose tic tac toe games.

          • Pennomi@lemmy.world · 5 months ago · +8/-12

            Why would I trust a drill press when it can’t even cut a board in half?

              • wischi@programming.dev · 5 months ago · +4/-2

                I can’t speak for Lemmy but I’m personally not against LLMs and also use them on a regular basis. As Pennomi said (and I totally agree with that) LLMs are a tool and we should use that tool for things it’s good for. But “thinking” is not one of the things LLMs are good at. And software engineering requires a ton of thinking. Of course there are things (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/intellisense, macros, code snippets/templates can help with that and never was I bottle-necked by my typing speed when writing software.

                It was always the time I needed to plan the structure of the software, design good and correct abstractions and the overall architecture. Exactly the things LLMs can’t do.

                Copilot even fails to stick to coding style from the same file, just because it saw a different style more often during training.

                • Zexks@lemmy.world · 4 months ago · +1/-1

                  “I'm not against LLMs, I just never say anything useful about them and constantly point out how I can't use them.” The other guy is right and you just proved his point.

            • wischi@programming.dev · 4 months ago (edited) · +14/-2

              A drill press (or its inventors) doesn't claim it can do that, but with LLMs they claim to replace humans on a lot of thinking tasks. They even brag about test benchmarks, claim Bachelor's, Master's, and PhD-level intelligence, and call them “reasoning” models, but the models still fail to beat my niece in tic tac toe, and she, by the way, doesn't have a PhD in anything 🤣

              LLMs are typically good at things that appeared a lot during training. If you are writing software, there certainly are things the LLM saw a lot of during training. But this is actually the biggest problem: it will happily generate code that might look OK, even during PR review, but might blow up in your face a few weeks later.

              If they can't handle things they saw during training (if only sparsely, like tic tac toe), they can't produce code you should use in production. I wouldn't trust any junior dev who doesn't set their O right next to the two Xs.

              • Pennomi@lemmy.world · 5 months ago · +2/-1

                Sure, the marketing of LLMs is wildly overstated. I would never argue otherwise. This is entirely a red herring, however.

                I’m saying you should use the tools for what they’re good at, and don’t use them for what they’re bad at. I don’t see why this is controversial at all. You can personally decide that they are good for nothing. Great! Nobody is forcing you to use AI in your work. (Though if they are, you should find a new employer.)

                • wischi@programming.dev · 5 months ago · +2

                  Totally agree with that, and I don't think anybody would see it as controversial. LLMs are actually good at a lot of things, just not thinking, and typically not if you are an expert. That's why LLMs know more about human anatomy than I do, but probably not more than most people with a medical degree.

    • kkj@lemmy.dbzer0.com · 4 months ago · +42/-2

      And that’s what happens when you spend a trillion dollars on an autocomplete: amazing at making things look like whatever it’s imitating, but with zero understanding of why the original looked that way.

      • CanadaPlus@lemmy.sdf.org · 4 months ago (edited) · +6/-16

        I mean, there’s about a billion ways it’s been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That’s how I know I and other humans have understanding, after all.

        What it’s not is aligned to care about anything other than making plausible-looking text.

        • Jtotheb@lemmy.world · 4 months ago · +13/-1

          Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

          Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

          And none of these tech companies even pretend that they’ve invented a caring machine that they just haven’t inspired yet. Don’t ascribe further moral and intellectual capabilities to server racks than do the people who advertise them.

          • CanadaPlus@lemmy.sdf.org · 4 months ago (edited) · +2/-2

            Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.

            You got the “originality” part there, right? I’m talking about tasks that never came close to being in the training data. Would you like me to link some of the research?

            Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.

            Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It's true that one is based on continuous floats and the other on dynamic peaks, but the end result is often remarkably similar in function and behavior.

              • CanadaPlus@lemmy.sdf.org · 4 months ago · +1

                I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that’s not good enough, it’s easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you’re more interested in ignoring any empirical evidence, though.

                • Jtotheb@lemmy.world · 4 months ago · +2

                  That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

            • borari@lemmy.dbzer0.com · 4 months ago · +1

              It’s true that one is based on continuous floats and the other is dynamic peaks

              Can you please explain what you’re trying to say here?

              • CanadaPlus@lemmy.sdf.org · 4 months ago · +1

                Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating-point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't, and during a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic: they can peak at any time, and downstream neurons can begin to fire “early”.

                They do seem to be equivalent in some way, although AFAIK it’s unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
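
                (The “all at once, no notion of time” artificial case, reduced to a sketch in plain Python; weights and inputs are made up:)

                import math

                def layer(inputs, weights, biases):
                    # every outgoing activation is computed in one step from all incoming ones
                    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
                            for row, b in zip(weights, biases)]

                out = layer([0.5, -1.2, 3.0],
                            [[0.1, 0.4, -0.2], [0.7, -0.3, 0.05]],
                            [0.0, -0.1])
                print(out)  # continuous floats in (-1, 1), not binary spikes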

                • borari@lemmy.dbzer0.com · 4 months ago · +1

                  Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.

                  In a neural network, a neuron receives inputs, applies a mathematical function to them, and returns an output, right?

                  Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that's not even considering the chemical component of the brain.

                  I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are a part of the zeitgeist thanks to the explosion of LLMs and stuff.

    • petey@aussie.zone · 4 months ago · +2/-1

      It needs good feedback. Agentic systems like Roo Code and Claude Code run compilers and tests until the code works (just gotta make sure to tell them to leave the tests alone).
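
      (That loop, reduced to a sketch; ask_llm() is a hypothetical stand-in for whatever model API the tool wraps, and the tests themselves stay fixed:)

      import subprocess

      def ask_llm(prompt):
          raise NotImplementedError("call your model of choice here")

      task = "make the failing tests in tests/ pass"
      feedback = ""
      for attempt in range(5):  # bound the loop so it can't run forever
          with open("solution.py", "w") as f:
              f.write(ask_llm(task + feedback))
          result = subprocess.run(["python", "-m", "pytest", "-x"],
                                  capture_output=True, text=True)
          if result.returncode == 0:
              break  # the code compiles and the tests pass
          feedback = "\n\nThe tests failed with:\n" + result.stdout[-2000:]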

  • Xerxos@lemmy.ml · 5 months ago · +79/-3

    All programs can be written with one less line of code. All programs have at least one bug.

    By the logical consequences of these axioms, every program can be reduced to one line of code - that doesn't work.
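
    (Spelled out, the induction, formalized here strictly for fun:)

    \text{Axiom 1: } \forall P \;\exists P' : \mathrm{lines}(P') = \mathrm{lines}(P) - 1
    \text{Axiom 2: } \forall P : \mathrm{bugs}(P) \ge 1
    \text{By induction on } \mathrm{lines}(P)\text{, every } P \text{ reduces to } P^{*} \text{ with } \mathrm{lines}(P^{*}) = 1 \text{ and, by Axiom 2, } \mathrm{bugs}(P^{*}) \ge 1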

    One day AI will get there.

    • gmtom@lemmy.world · 4 months ago · +13

      All programs can be written with one less line of code. All programs have at least one bug.

      The humble “Hello world” would like a word.

      • Amberskin@europe.pub · 4 months ago · +21

        Just to boast my old timer credentials.

        There is a utility program in IBM's mainframe operating system, z/OS, that has been there since the 60s.

        It has just one assembly code instruction: a BR 14, which means basically ‘return’.

        The first version was bugged and IBM had to issue a PTF (patch) to fix it.

        • Rose@slrpnk.net · 4 months ago · +3

          Reminds me of how in some old Unix system, /bin/true was a shell script.

          …well, if it needs to just be a program that returns 0, that’s a reasonable thing to do. An empty shell script returns 0.

          Of course, since this was an old proprietary Unix system, the shell script had a giant header comment that said this is proprietary information and if you disclose this the lawyers will come at ya like a ton of bricks. …never mind that this was a program that literally does nothing.

        • DaPorkchop_@lemmy.ml · 4 months ago · +10

          Okay, you can’t just drop that bombshell without elaborating. What sort of bug could exist in a program which contains a single return instruction?!?

          • Amberskin@europe.pub · 4 months ago · +2

            It didn’t clear the return code. In mainframe jobs, successful executions are expected to return zero (in the machine R15 register).

            So in this case fixing the bug required to add an instruction instead of removing one.
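
            (For the record, that utility is IEFBR14, and the documented fix was exactly that: one added instruction to zero register 15 before returning. Roughly, in HLASM:)

            IEFBR14  SR    15,15          zero R15 so the job step's return code is 0
                     BR    14             branch to the return address in R14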

      • phx@lemmy.ca · 4 months ago · +9

        You can fit an awful lot of Perl into one line if you minimize it. It'll be completely unreadable to almost anyone, but it'll run.
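
        (The same stunt works in most dynamic languages; a Python analogue, one deliberately unreadable statement:)

        # FizzBuzz crammed into a single line: runs fine, reads terribly
        print("\n".join("FizzBuzz" if i % 15 == 0 else "Fizz" if i % 3 == 0 else "Buzz" if i % 5 == 0 else str(i) for i in range(1, 101)))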

  • haui@lemmy.giftedmc.com · 5 months ago · +73/-2

    Welp. It's actually very much in line with the late-stage capitalist system. All polish, no innovation.

  • markstos@lemmy.world · 5 months ago · +25/-8

    This weekend I successfully used Claude to add three features to a Rust utility I had wanted for a couple of years. I had opened issue requests, but no one else volunteered. I had tried learning Rust, Wayland, and GTK to do it myself, but the docs at the time weren't great and the learning curve was steep. But Claude figured it all out pretty quickly.

        • coherent_domain@infosec.pub · 5 months ago (edited) · +10/-1

          This is interesting, I would be quite impressed if this PR got merged without additional changes.

          I am genuinely curious, no judgement at all: since you mentioned that you are not a rust/GTK expert, are you able to read and have a decent understanding of the output code?

          For example, in the sway.rs file, you uncommented a piece of code about floating nodes in the get_all_windows function. Do you know why you uncommented it? (Again, not trying to judge; it is a genuine question. I also don't know rust or GTK, just curious.)

          • markstos@lemmy.world · 5 months ago · +7/-2

            This is interesting, I would be quite impressed if this PR got merged without additional changes.

            We’ll see. Whether it gets merged in any form, it’s still a big win for me because I finally was able to get some changes implemented that I had been wanting for a couple years.

            are you able to read and and have a decent understanding of the output code?

            Yes. I know other coding languages and CSS. Sometimes Claude generated code that was correct but I thought it was awkward or poor, so I had it revise. For example, I wanted to handle a boolean case and it added three booleans and a function for that. I said no, you can use a single boolean for all that. Another time it duplicated a bunch of code for the single and multi-monitor cases and I had it consolidate it.

            In one case, it got stuck debugging and I was able to help isolate where the error was through testing. Once I suggested where to look harder, it was able to find a subtle issue that I couldn't spot myself. The labels were appearing far too small at one point, but I couldn't see that Claude had changed any code that should affect the label size. It turned out two data structures hadn't been merged correctly, so that default values weren't getting overridden properly. It was the sort of issue I could see a human dev introducing on the first pass.

            do you know why it is uncommented?

            Yes, that’s the fix for supporting floating windows. The author reported that previously there was a problem with the z-index of the labels on these windows, so that’s apparently why it was implemented but commented out. But it seems due to other changes, that problem no longer exists. I was able to test that labels on floating windows now work correctly.

            Through the process, I also became more familiar with Rust tooling and Rust itself.

        • Zoop@beehaw.org · 4 months ago · +5

          Oh my goodness, that’s adorable and sweet of your dog! Also, I’m so glad you had such a big laugh. I love when that happens.

          • Monument@lemmy.sdf.org · 4 months ago (edited) · +3

            He’s a sweet guy. … Mostly. Very much in need of a lot of attention. Sometimes he just sits next to you on the couch and puts his paw on you if you’re not giving him enough attention.

            Here he is posing with his sister as a prop:
            [image: a black and white heeler resting his head on a white and brown pit bull]

            • Zoop@beehaw.org · 4 months ago · +2

              Oh my goodness, he sounds precious! I’ve had a sweet and needy dog like that in the past, too. It can be a lot, but I loved it (and miss it,) haha.

              Both your dogs are very cute! You and your pups gave me a much-needed smile. Thank you for that. :) Please give them some pets from me!

    • Madison420@lemmy.world · 5 months ago · +32

      No, the spell just fizzled. In my experience it happens far less often if you start with an Abra kabara and end it with an Alakazam!

      • ulterno@programming.dev · 5 months ago (edited) · +12

        Yeah, the Abra kabara init and Alakazam cleanup are an important part, especially until you have become good enough to configure your own init.


        There is an alternative init, Abra Kadabra, which automatically adds a cleanup and some general fixes when it detects the end of the spell.

  • antihumanitarian@lemmy.world · 5 months ago · +9/-1

    I've used it extensively, almost $100 in credits, and generally it could one-shot everything I threw at it. However: I gave it architectural instructions and told it to use test-driven development and which test suite to use. Without the tests, yeah, it wouldn't work, and a decent amount of the time went to cleaning up mistakes the tests caught. The same can be said for humans, though.

    • Lyra_Lycan@lemmy.blahaj.zone · 5 months ago · +1

      How can it pass if it hasn't had lessons… Well said. Ooh, I wonder if lecture footage would be able to teach AI, or audio feeds from tutors…