• threeduck@aussie.zone · 8 months ago (+8/-35)

    All the people here chastising LLMs for resource wastage, I swear to god if you aren’t vegan…

    • UnderpantsWeevil@lemmy.world · 8 months ago (+4)

      I mean, they’re both bad.

      But also, “Throw that burger in the trash I’m not eating it” and “Uninstall that plugin, I’m not querying it” have about the same impact on your gross carbon emissions.

      These are supply-side problems in industries that receive enormous state subsidies. Hell, the single biggest improvement to our agriculture policy was when China stopped importing US pork products. So, uh… once again, thank you China for saving the planet.

      • lowleekun@ani.social · 8 months ago (+2/-1)

        Wait so the biggest improvement came when there was a massive decline in demand?

        • UnderpantsWeevil@lemmy.world · 8 months ago (+1)

          The demand didn’t decline. The state imposed a strict, high barrier to trade that prevented it from being fulfilled.

          • lowleekun@ani.social · 8 months ago (+1)

            So you didn’t export that much to China? Or was there a big “eat more pork” campaign? Because otherwise, where did the demand come from afterwards?

      • stratoscaster@lemmy.world · 8 months ago (+11/-1)

        What is it with vegans and comparing literally everything to veganism? I was in another thread and it was compared to genocide, rape, and climate change all in the same thread. Insanity

      • 3abas@lemmy.world · 8 months ago (+7/-7)

        It’s not, you’re just personally insulted. The livestock industry is responsible for about 15% of human caused greenhouse gas emissions. That’s not negligible.

        • zbyte64@awful.systems · 8 months ago (+3)

          you’re just personally insulted.

          I swear to God this attitude is why people don’t like what you’re saying. I am all for weighing the two against each other but the “I am more moral than thou” is why I left the church.

        • k0e3@lemmy.ca · 8 months ago (+10)

          So, I can’t complain about any part of the remaining 85% if I’m not vegan? That’s so fucking stupid. Do you not complain about microplastics because you’re guilty of using devices with plastic in them to type your message?

          • 3abas@lemmy.world · 8 months ago (+2)

            Yes, I’m a piece of shit for using a phone made by a capitalist corporation, a phone that contributes to harming the planet. I don’t deny that I live in a horrible society that forces me to be a bad human just to survive.

            I also don’t call people stupid for telling me my device is bad for the environment. I still eat meat, I’m not a vegan, but I understand and completely agree that it’s terrible for the environment. By recognizing it, I can be conscious of my consumption and reduce it.

            I also use LLMs conservatively, I use them where they add value and I don’t use them frivolously to generate shitty AI slop.

            I’m conscious of its dangers and that drives my consumption of it.

            But I don’t pick and choose. I don’t eat animal products three meals a day and bitch about someone using an LLM to edit a file instead of manually working on it for five hours.

            Just be consistent is the message they were communicating, not that you shouldn’t complain about 85%.

            • k0e3@lemmy.ca · 8 months ago (+3)

              Same, I’m very aware that my selfish actions cause harm to the environment and I do try to be conservative about meat, electricity, and water usage. I don’t even own a car.

              But “I swear to God, if you aren’t vegan,” which is what OP said, is hardly the same as “keep it consistent.” It feels like they’re telling us both that our efforts are pointless because we aren’t vegan. They could have said, try cutting meat from your diet to help more, or give veganism a thought. It comes off as insufferably arrogant, you know?

              I’ll end my rant now, haha. Sorry.

              • 3abas@lemmy.world · 8 months ago (+1/-1)

                They were trying to be funny, don’t be too literal.

                I think (that’s how I interpreted it, I don’t know) the intent was to reflect the insufferably arrogant tone of most people who exclusively complain about AI as if it has no benefits and will be the sole destroyer of our society.

                • Bunbury@feddit.nl · 8 months ago (+1)

                  Were they though? If that was sarcasm I really don’t think it translated well into text at all.

          • threeduck@aussie.zone · 8 months ago (+2/-3)

            Imagine complaining about someone tipping out fresh water while eating a burger, when a single kilo of beef uses between 15,000 and 200,000 liters.

            Like, until you stop doing the worst thing a single consumer can do to the planet for literally nothing but greed and pleasure (eating meat instead of healthier alternatives), you have no leg to stand on when you criticize.

    • Bunbury@feddit.nl · 8 months ago (+6)

      Whataboutism isn’t useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere and even if they never move to make more changes at least they made the one.

      • threeduck@aussie.zone · 8 months ago (+2/-2)

        1 beef burger is equivalent in water consumption to 7.5 million ChatGPT queries.

        • Bunbury@feddit.nl · 8 months ago · edited (+2)

          I did a quick calculation and got to around 500 queries per quarter pounder. A lot of guesstimation and rounding, but I’m pretty sure I got close enough to know that you’re off by quite a lot.

          Edit to add: I used 21.9kg CO2 per 1kg of beef and 4.32 grams per ChatGPT query for my rough estimate.

          However that 4.32 number is already over a year old. Chances are it’s way outdated but everyone still keeps on quoting it. It definitely does not take into account that ChatGPT often “thinks” now, because chain of thought is likely as expensive as multiple queries by itself. Additionally the models are more advanced than a year ago, but also more costly and that CO2 amount everyone keeps quoting doesn’t even mention which model they used. If anyone can find the original source of this number I’d be very curious.
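The rough estimate above can be reproduced with the same two figures the commenter quotes (21.9 kg CO2e per kg of beef, 4.32 g CO2e per ChatGPT query); the patty mass is an assumption here, not from the comment:

```python
# Back-of-envelope check of the queries-per-burger estimate above.
beef_kg = 0.25 * 0.4536          # a quarter pound of beef, in kg (assumed patty mass)
co2_beef_g = beef_kg * 21.9e3    # grams CO2e for that much beef
co2_query_g = 4.32               # grams CO2e per ChatGPT query (the oft-quoted figure)
print(round(co2_beef_g / co2_query_g))  # 575 queries, close to the ~500 guesstimate
```

The exact answer shifts with the assumed patty mass, but it stays within the same order of magnitude as the comment's ~500.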

            • Bunbury@feddit.nl · 8 months ago · edited (+1)

              Fair.

              Water use comes out to about 150,000 ChatGPT queries per quarter pounder, using 10 ml per prompt and 15,000 l per kg of beef.

              Still off by many orders of magnitude.

              Also, that’s just the running costs. If we go into training, we’re looking at a comparison the other way around: training GPT-3 cost 700,000 liters of water, so that’s 466.6 quarter pounders.
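Using the same inputs as the comment above (10 ml of water per prompt, 15,000 l per kg of beef, and roughly 0.1 kg of beef per quarter pounder, which is the rounding the comment implies), the numbers reproduce like this:

```python
# Water comparison: ChatGPT queries per quarter pounder, and GPT-3 training.
burger_l = 0.1 * 15_000               # liters of water per quarter pounder (~1,500 l)
query_l = 0.010                       # liters per prompt (10 ml)
print(round(burger_l / query_l))      # 150000 queries per burger
print(round(700_000 / burger_l, 1))   # 466.7 burgers' worth of water to train GPT-3
```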

    • Saledovil@sh.itjust.works · 8 months ago (+5/-1)

      Animal agriculture has significantly better utility and scaling than LLMs, so it’s not hypocritical to be opposed to the latter but not the former.

      • threeduck@aussie.zone · 8 months ago (+3/-1)

        The holocaust was well scaled too. Animal ag is responsible for 15-20% of the entire planet’s GHG emissions. You can live a healthier, more morally consistent life if you give up meat.

    • lowleekun@ani.social · 8 months ago (+5/-1)

      Dude, wtf?! You can’t just go around pointing out people’s hypocrisy. Companies killing the planet is big bad.

      People joining in? Dude just let us live!! It is only animals…

      big /s

    • xthexder@l.sw0.com · 8 months ago (+13)

      Most certainly it won’t happen until after AI has developed a self-preservation bias. It’s too bad the solution is turning off the AI.

    • Saledovil@sh.itjust.works · 8 months ago (+3)

      Current genAI? Never. There’s at least one breakthrough needed to build something capable of actual thinking.

  • fuzzywombat@lemmy.world · 8 months ago (+41)

    Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.

    • rozodru@lemmy.world · 8 months ago (+8)

      Bingo. If you routinely use LLMs/AI, you’ve recently seen it first hand. ALL of them have become noticeably worse over the past few months. Even if you’re simply using it as a basic tool, it’s worse. Claude, for all the praise it receives, has also gotten worse. I’ve noticed it starting to forget context or constantly contradicting itself, even Claude Code.

      The release of GPT5 is proof that a wall has been hit and the bubble is bursting. There’s nothing left to train on, and all the LLMs have been consuming each other’s waste as a result. I’ve talked about it on here several times already due to my work, but companies are also seeing this. They’re scrambling to undo the fuck-up of using AI to build their stuff. None of what they used it to build scales. None of it. And you go on LinkedIn and see all the techbros desperately trying to hype the mounds of shit that remain.

      I don’t know what’s next for AI but this current generation of it is dying. It didn’t work.

      • Tja@programming.dev · 8 months ago (+4/-1)

        Any studies about this “getting worse” or just anecdotes? I do routinely use them and I feel they are getting better (my workplace uses Google suite so I have access to gemini). Just last week it helped me debug an ipv6 ra problem that I couldn’t crack, and I learned a few useful commands on the way.

      • BluesF@lemmy.world · 8 months ago (+4)

      I was initially impressed by the ‘reasoning’ features of LLMs, but most recently ChatGPT gave me a response to a question in which it stated five or six possible answers separated by “oh, but that can’t be right, so it must be…”, and none of them was right lmao. It thought for like 30 seconds to give me a selection of wrong answers!

    • Saledovil@sh.itjust.works · 8 months ago (+12)

      He’s also already admitted that they’re out of training data. If you’ve wondered why a lot more websites will run some sort of verification when you connect, it’s because there’s a desperate scramble to get more training data.

    • Tollana1234567@lemmy.today · 8 months ago (+7)

    MS already disclosed that their AI doesn’t make money at all; in fact, it’s costing too much. Of course he’s freaking out.

  • SGforce@lemmy.ca · 8 months ago (+30)

    It’s the same tech. It would have to be bigger or chew through “reasoning” tokens to beat benchmarks. So yeah, of course it is.

  • cecilkorik@lemmy.ca · 8 months ago (+14/-2)

    So like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because AI training and the instantaneous global adoption of it are using energy like it’s going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who-gives-a-fuck-about-the-climate-and-environment sort of way, doesn’t it?

    • BillyTheKid@lemmy.ca · 8 months ago (+7/-1)

      I mean, AI is using like 1-2% of human energy and that’s fucking wild.

      My take away is we need more clean energy generation. Good thing we’ve got countries like China leading the way in nuclear and renewables!!

      • ayyy@sh.itjust.works · 8 months ago (+5/-1)

        Yes, China is producing a lot of solar panels (a good thing!) but the percentage of renewables is actually going down. They are adding coal faster than solar.

      • Womble@piefed.world · 8 months ago (+4/-3)

        Do you have a source for that? Because given that a ChatGPT query takes a similar amount of energy as running a hair dryer for a few seconds, I find it hard to believe.

        • Rimu@piefed.social · 8 months ago (+6)

          a similar amount of energy to running a hair dryer

          We see a lot of those kinds of comparisons. Thing is, you run a hair dryer once per day at most. Or it’s compared to a google search, often. Again, most people will do a handful of searches each day. A ChatGPT conversation can be hundreds of messages back and forth. A Claude Code session can go for hours and involve millions of tokens. An individual AI inference might be pretty tame but the quantity of them is another level.

          If it were so efficient, they wouldn’t be building Manhattan-sized datacenters.

          • Womble@piefed.world · 8 months ago · edited (+3/-4)

            Ok, but running a hairdryer for 5 minutes is well up into the hundreds of queries, which is more than the vast majority of people will use in a week. The post I replied to was talking about it being 1-2% of energy usage, so that includes transport, heating, and heavy industry. It just doesn’t pass the smell test to me that something whose weekly usage is exceeded by a person drying their hair once is comparable with such vast users of energy.
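The hair-dryer framing can be made concrete; both the dryer wattage and the per-query figure below are assumptions (per-query energy estimates in particular vary widely and are disputed in this very thread):

```python
# How many queries equal 5 minutes of hair drying, under assumed figures.
dryer_w = 1800                     # a typical hair dryer draws ~1.8 kW
dryer_wh = dryer_w * 5 / 60        # energy for 5 minutes: 150 Wh
wh_per_query = 0.3                 # one commonly cited (contested) per-query estimate
print(round(dryer_wh / wh_per_query))  # 500 queries
```

Swapping in a higher per-query estimate (e.g. for long chain-of-thought sessions) shrinks that 500 considerably, which is the crux of the disagreement above.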

      • cecilkorik@lemmy.ca · 8 months ago (+6/-1)

        All I know is that I’m getting real tired of this Matrix / Idiocracy Mash-up Movie we’re living in.

    • Tollana1234567@lemmy.today · 8 months ago (+2)

      It’s like crypto: they wanted to make money off VC funds, and that’s probably running dry right now, and the investors are probably going to demand returns very soon. Why do you think the massive layoffs started in 2023?

    • Tollana1234567@lemmy.today · 8 months ago (+5/-2)

      Those are his lying/making-things-up hand gestures. It’s the same thing Trump does with his hands when he’s lying or exaggerating; he does the weird accordion hands.

  • Optional@lemmy.world · 8 months ago (+19)

    Photographer1: Sam, could you give us a goofier face?

    *click* *click*

    Photographer2: Goofier!!

    *click* *click* *click* *click*

    • cenzorrll@piefed.ca · 8 months ago (+8/-1)

      He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.

  • kescusay@lemmy.world · 8 months ago (+40/-1)

    I have to test it with Copilot for work. So far, in my experience its “enhanced capabilities” mostly involve doing things I didn’t ask it to do extremely quickly. For example, it massively fucked up the CSS in an experimental project when I instructed it to extract a React element into its own file.

    That’s literally all I wanted it to do, yet it took it upon itself to make all sorts of changes to styling for the entire application. I ended up reverting all of its changes and extracting the element myself.

    Suffice to say, I will not be recommending GPT 5 going forward.

        • kescusay@lemmy.world · 8 months ago (+5)

          I’ve tried threats in prompt files, with results that are… OK. Honestly, I can’t tell if they made a difference or not.

          The only thing I’ve found that consistently works is writing good old fashioned scripts to look for common errors by LLMs and then have them run those scripts after every action so they can somewhat clean up after themselves.

    • Squizzy@lemmy.world · 8 months ago (+14)

      We moved to m365 and were encouraged to try new elements. I gave copilot an excel sheet, told it to add 5% to each percent in column B and not to go over 100%. It spat out jumbled up data all reading 6000%.
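For contrast, the task Copilot fumbled above is a one-liner when done deterministically: add 5 percentage points to each value and cap at 100. The sample values here are made up for illustration.

```python
# Column B percentages: add 5 points each, never exceeding 100.
column_b = [12.0, 97.5, 88.0]                  # hypothetical sample data
adjusted = [min(v + 5, 100) for v in column_b]
print(adjusted)  # [17.0, 100, 93.0]
```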

    • GenChadT@programming.dev · 8 months ago (+19/-1)

      That’s my problem with “AI” in general. It’s seemingly impossible to “engineer” a complete piece of software when using LLMs in any capacity that isn’t editing a line or two inside singular functions. Too many times I’ve asked GPT/Gemini to make a small change to a file and had to revert the request because it’d take it upon itself to re-engineer the architecture of my entire application.

      • hisao@ani.social · 8 months ago (+7/-2)

        I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make manual edits fast and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people; maybe it loves me. I only ever used 4.1 and possibly 4o in free mode in Copilot.

        • kescusay@lemmy.world · 8 months ago (+2)

          Are you using Copilot in agent mode? That’s where it breaks shit. If you’re using it in ask mode with the file you want to edit added to the chat context, then you’re probably going to be fine.

        • FauxLiving@lemmy.world · 8 months ago (+2/-2)

          It’s a lot of people not understanding the kinds of things it can do vs the things it can’t do.

          It was like when people tried to search early Google by typing plain language queries (“What is the best restaurant in town?”) and getting bad results. The search engine had limited capabilities and understanding language wasn’t one of them.

          If you ask a LLM to write a function to print the sum of two numbers, it can do that with a high success rate. If you ask it to create a new operating system, it will produce hilariously bad results.

            • iopq@lemmy.world · 8 months ago (+3/-2)

              It is replacing entire humans. The thing is, it’s replacing the people you should have fired a long time ago

            • FauxLiving@lemmy.world · 8 months ago (+4/-3)

              I can blame the user for believing the marketing over their direct experiences.

              If you use these tools for any amount of time it’s easy to see that there are some tasks they’re bad at and some that they are good at. You can learn how big of a project they can handle and when you need to break it up into smaller pieces.

              I can’t imagine any sane person who lives their life guided by marketing hype instead of direct knowledge and experience.

              • ErmahgherdDavid@lemmy.dbzer0.com · 8 months ago (+1)

                I can’t imagine any sane person who lives their life guided by marketing hype instead of direct knowledge and experience.

                I mean fair enough but also… That makes the vast majority of managers, MBAs, salespeople and “normies” like your grandma and Uncle Bob insane.

                Actually questioning stuff that sales people tell you and using critical thinking is a pretty rare skill in this day and age.

                • AlfredoJohn@sh.itjust.works · 8 months ago (+2)

                  That makes the vast majority of managers, MBAs, salespeople and “normies” like your grandma and Uncle Bob insane.

                  Correct, most of these people are insane; the average person is so fucking dumb and insane today it’s mind-numbing.

        • GenChadT@programming.dev · 8 months ago (+4)

          It’s an issue of scope. People often give the AI too much to handle at once, myself (admittedly) included.

    • Vanilla_PuddinFudge@infosec.pub · 8 months ago (+2)

      AI assumes too fucking much. I’d used it to set up a new 3D printer with Klipper to save some searching.

      Half the shit it pulled down was Marlin-oriented then it had the gall to blame the config it gave me for it like I wrote it.

      “motherfucker, listen here…”

  • redsunrise@programming.dev · 8 months ago (+291/-2)

    Obviously it’s higher. If it was any lower, they would’ve made a huge announcement out of it to prove they’re better than the competition.

    • Chaotic Entropy@feddit.uk · 8 months ago (+25/-2)

      I get the distinct impression that most of the focus for GPT5 was making it easier to divert their overflowing volume of queries to less expensive routes.

    • T156@lemmy.world · 8 months ago (+4/-2)

      Unless it wasn’t as low as they wanted it. It’s at least cheap enough to run that they can afford to drop the pricing on the API compared to their older models.

    • morrowind@lemmy.ml · 8 months ago (+2/-8)

      It’s cheaper though, so very likely it’s more efficient somehow.

        • Sl00k@programming.dev · 8 months ago · edited (+2)

          The only accessible data come from Mistral; most other AI devs are not exactly happy to share the inner workings of their tools.

          Important to point out this is really only valid towards Western AI companies. Chinese AI models have mostly been open source with open papers.

    • Ugurcan@lemmy.world · 8 months ago · edited (+42/-9)

      I’m thinking otherwise. I think GPT5 is a much smaller model - with some fallback to previous models if required.

      Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates into inferior quality in American Investor Language (AIL).

      And 2025’s investors don’t give a flying fuck about energy efficiency.

  • C1pher@lemmy.world · 8 months ago (+1)

    “Just a few more trillion dollars bro, then it’ll be ready…” Like a junkie.