• micka190@lemmy.world
    link
    fedilink
    English
    arrow-up
    131
    arrow-down
    1
    ·
    edit-2
    19 days ago

    According to a Stack Overflow survey from 2025, 84 percent of developers now use or plan to use AI tools, up from 76 percent a year earlier. This rapid adoption partly explains the decline in forum activity.

    As someone who participated in the survey, I’d recommend everyone take anything from SO’s recent surveys with a truckful of salt. The recent surveys have been unbelievably biased, with tons of leading questions that force you to answer in specific ways. They’re basically worthless as statistics.

    • chaosCruiser@futurology.today
      19 days ago

      Realistically though, asking an LLM what’s wrong with my code is a lot faster than scrolling through 50 posts and reading the ones that talk about something almost relevant.

      • Rob T Firefly@lemmy.world
        19 days ago

        It’s even faster to ask your own armpit what’s wrong with your code, but that alone doesn’t mean you’re getting a good answer from it.

        • MagicShel@lemmy.zip
          19 days ago

          If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And in my experience it’s much better than 20%, though it really depends a lot on the code base you’re working on.
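          To put that 20% claim in perspective, here is a quick back-of-the-envelope expected-cost model (all numbers are illustrative assumptions, not measurements) showing why even a low hit rate can make the LLM a sensible first stop, provided checking its answer is much cheaper than the fallback search:

```python
# Back-of-the-envelope model of "try the LLM first, fall back to searching".
# All numbers are illustrative assumptions, not measurements.
def expected_cost(hit_rate, check_cost, search_cost):
    """Expected minutes spent when the LLM is tried first."""
    # You always pay to check the LLM's answer; on a miss you search anyway.
    return check_cost + (1 - hit_rate) * search_cost

llm_first = expected_cost(hit_rate=0.2, check_cost=2.0, search_cost=15.0)
search_only = 15.0
print(llm_first < search_only)  # True: 2 + 0.8 * 15 ≈ 14 minutes beats 15
```

          The break-even point moves, of course, as soon as checking the answer stops being cheap — which is the whole dispute further down this thread.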

          • PlutoniumAcid@lemmy.world
            19 days ago

            Also depends on how you phrase the question to the LLM, and whether it has access to source files.

            A web chat session can’t do a lot, but an interactive shell like Claude Code is amazing - if you know how to work it.

          • chaosCruiser@futurology.today
            19 days ago

            Also depends on your level of expertise. If you have beginner questions, an LLM should give you the correct answer most of the time. If you’re an expert, your questions have no answers. Usually, it’s something like an obscure firmware bug edge case even the manufacturer isn’t aware of. Good luck troubleshooting that without writing your own drivers and libraries.

            • SkunkWorkz@lemmy.world
              18 days ago

              Yeah, but in that edge case SO wouldn’t have helped either, even before the current crash. Unless you were lucky. I find LLMs useful for pushing me in the right direction when I’m stuck and the documentation isn’t helping, not necessarily for giving me perfectly written code. It’s like pair programming with someone who isn’t a coder but has somehow read all the documentation and programming books. Sometimes the left-field suggestions it makes are quite helpful.

              • chaosCruiser@futurology.today
                18 days ago

                I’ve found some interesting and even genuinely good new functions by moaning about my code woes to an LLM. It has also taken me on some pointless wild goose chases, so you’d better watch out. Any suggestion can be anywhere from absolutely brilliant to a completely stupid waste of time.

            • MagicShel@lemmy.zip
              19 days ago

              If you’re writing cutting-edge shit, then an LLM is probably at best a rubber duck for talking things through. But there are tons of programmers whose job is to translate business requirements into bog-standard code over and over and over.

              Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.

              • chaosCruiser@futurology.today
                19 days ago

                Boring standard coding is exactly where you can actually let the LLM write the code. Manual intervention and review is still required, but at least you can speed up the process.

                • Aceticon@lemmy.dbzer0.com
                  18 days ago

                  Code made up of several parts with inconsistent styles of coding and design is going to FUCK YOU UP in the medium and long term, unless you never have to touch that code again.

                  It’s only faster if you’re doing small enough projects that an LLM can generate the whole thing in one go (so, almost certainly, not working as a professional at a level beyond junior) and it’s something you will never have to maintain (i.e. prototyping).

                  Using an LLM is like handing the work to a large pool of junior developers, where a random one picks up each task and you can’t actually teach any of them. Even when it works, what you get is riddled with bad practices and design errors that aren’t even consistent between tasks. So when you piece the software together, it’s from day one the kind of spaghetti mess you normally see in a project with many years in production, maintained by lots of different people who never tried to follow each other’s coding style. And since you can’t teach them things like coding standards or designing for extensibility, it will always stay just as fucked up as day one.

          • Avid Amoeba@lemmy.ca
            19 days ago

            How do you know it’s a good answer? That requires prior knowledge that you might not have. My juniors repeatedly demonstrate that they have no ability to tell whether an LLM solution is a good one or not. It’s like copying from SO without reading the comments, which they quickly learn not to do because it doesn’t pass code review.

            • mcv@lemmy.zip
              18 days ago

              This is the big issue. LLMs are useful to me (to some degree) because I can tell when their answer is probably on the right track and when it’s bullshit. And still, I’ve occasionally wasted time following one in the wrong direction. People with less experience, or more trust in LLMs, are much more likely to fall into that trap.

              LLMs offer benefits and risks. You need to learn how to use them.

            • MagicShel@lemmy.zip
              18 days ago

              That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.

              If you don’t know how to write good code then how can you know if the AI nailed it, if you need to tweak the prompt and try over, or if you just need to fix a couple of things by hand?

              (Below are just skippable anecdotes.)


              A couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written, but there were some curious artifacts, like a specific value being hard-coded into a DTO; it didn’t make sense to me that doing so was in any way security-related.

              So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.

              In the meanwhile, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I figured out a proper fix.

              This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.

              So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.

              But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.


              Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, like A > B > C > A, and it was challenging to reason about and frequently had to be changed just to get running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.

              He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.

              Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes, not counting the unit tests, which were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.
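              The A > B > C > A loop in that anecdote is just a directed cycle in the dependency graph, which is exactly why Spring could find no valid construction order. As a hedged illustration (the graph, names, and function are hypothetical sketches, not the actual project code), a few lines of depth-first search are enough to flag such a cycle mechanically:

```python
# Minimal cycle detector over a dependency graph (all names hypothetical;
# a sketch, not the project code from the story above). A DI container
# like Spring fails to construct beans in exactly this situation: once
# the graph has a directed cycle, no valid construction order exists.
from typing import Optional

def find_cycle(deps: dict) -> Optional[list]:
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {node: WHITE for node in deps}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:            # back edge: cycle
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep)
                if cycle is not None:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(deps):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle is not None:
                return cycle
    return None

print(find_cycle({"A": ["B"], "B": ["C"], "C": ["A"]}))  # ['A', 'B', 'C', 'A']
```

              The same check is what a review gate could have run automatically, instead of one person on vacation being the only line of defense.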

  • nutsack@lemmy.dbzer0.com
    18 days ago

    I’ve posted questions, but I don’t usually need to because someone else has posted them before. This is probably why AI is so good at answering these types of questions.

    The trouble now is that there’s less of a business incentive to run a platform like Stack Overflow, where humans share knowledge directly with one another, because the AI just copies all the data and delivers it to users somewhere else.

    • Gsus4@mander.xyzOP
      18 days ago

      What we’re all afraid of is that cheap slop will make Stack Overflow go broke, close, get bought, or go private, removing it from the public domain… and then they’ll jack up the price of the slop once the alternative is gone…

      • NιƙƙιDιɱҽʂ@lemmy.world
        18 days ago

        I do wonder, then: as new languages and tools are developed, how quickly will AI models be able to parrot information on their use if sources like Stack Overflow cease to exist?

        • Gsus4@mander.xyzOP
          18 days ago

          I think this is a classic privatization of the commons, so that nobody can compete with them later without free public datasets…

        • rumba@lemmy.zip
          18 days ago

          It’ll certainly be of lesser quality, even if they take steps to address it.

          Good documentation and ported open projects might be enough to give you working code, but it’s not going to be able to optimize that code without being trained on tons of optimization data.

        • Gsus4@mander.xyzOP
          18 days ago

          But can anyone train on them? What happens to the original dataset?

          • falseWhite@lemmy.world
            18 days ago

            There are open weight models that users can download and run locally. Because the weights are open, they can be customised and fine tuned.

            And then there are fully open source models that publish everything: the model with open weights, the training source code, and the full training dataset.

    • GamingChairModel@lemmy.world
      18 days ago

      The hot concept around the late 2000’s and early 2010’s was crowdsourcing: leveraging the expertise of volunteers to build consensus. Quora, Stack Overflow, Reddit, and similar sites came up in that time frame where people would freely lend their expertise on a platform because that platform had a pretty good rule set for encouraging that kind of collaboration and consensus building.

      Monetizing that goodwill didn’t just ruin the look and feel of the sites: it permanently altered people’s willingness to participate in those communities. Some, of course, don’t mind contributing. But many do choose to sit things out when they see the whole arrangement as enriching an undeserving middleman.

      • rumba@lemmy.zip
        18 days ago

        Probably explains why Quora started sending me multiple daily emails about shit I didn’t care about and removed the unsubscribe buttons from the emails.

        I don’t delete many accounts… but that was one of them.

    • rumba@lemmy.zip
      18 days ago

      Works well for now. Wait until there’s something new that it hasn’t been trained on. It needs that Stack Exchange data to train on.

      • nutsack@lemmy.dbzer0.com
        18 days ago

        Yes, I think this will create a new problem. New things won’t be created very often, at least not by small houses or independent developers, because there will be this barrier to adoption. Corporate-controlled AI will need to learn them somehow.

  • Endymion_Mallorn@kbin.melroy.org
    19 days ago

    I mean, people who don’t want their questions or answers included in an LLM won’t use SO. When people want to ask a question and not be shut down or berated, they’ll probably end up on HN.

  • UltraBlack@lemmy.world
    19 days ago

    I often feel like every question has been asked and answered already.

    I still like SO

    • MoogleMaestro@lemmy.zip
      17 days ago

      Will the AI still flame me if I ask the wrong question?

      Is nothing sacred anymore?

      Real talk though, it is concerning when it feels like 3 out of 5 times you ask AI something, you get a completely harebrained answer back. SO will probably need to clamp down on non-logged-in browsing and enforce API limits to make sure AI trainers are paying for the data they need.
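      For what “enforce API limits” might look like in practice, here is a minimal token-bucket sketch (the class, numbers, and policy are illustrative assumptions, not anything Stack Overflow actually runs):

```python
import time

# Illustrative token-bucket limiter. The names and numbers are made-up
# assumptions for the sake of the example; this is just the standard
# technique, not Stack Overflow's real infrastructure.
class TokenBucket:
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.now = now            # injectable clock (eases testing)
        self.last = now()

    def allow(self):
        """Spend one token if available; refill based on elapsed time."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A scraper hammering the endpoint is cut off once the burst is spent.
bucket = TokenBucket(rate=1.0, capacity=5.0, now=lambda: 0.0)  # frozen clock
print([bucket.allow() for _ in range(7)])
# [True, True, True, True, True, False, False]
```

      Per-key buckets (one per API token) are what make the “pay for the data you need” part enforceable: paid tiers just get a bigger `rate` and `capacity`.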

      • jaykrown@lemmy.world
        17 days ago

        Depends on the model. I think Opus 4.5 is the only model I’ve prompted that’s getting close to not being a boring sycophant.

  • melsaskca@lemmy.ca
    18 days ago

    I think a lot of good information is being scrubbed or controlled, and very soon what was once free will be something you’re charged for.

    • SkaveRat@discuss.tchncs.de
      18 days ago

      A SO paywall would be ironic, as one of the main reasons for its creation was that experts(-)exchange was paywalled and annoying

  • MehBlah@lemmy.world
    18 days ago

    Good. That site has been a toxic hole in the ground for a decade or more.