I’m definitely on the side that overusing AI, and using it commercially, seems bad. On the other hand, it seems like a tech with huge potential upsides. I’m not sure we can achieve a post-scarcity society with all labor being done by humans, and this is where I see AI becoming a massive tool - assuming we can pair it with mechanical means of work, not strictly digital. I know it’s a touchy subject, but I want to hear your opinion. As always, if you’re just going to tell me to read more, recommend literature.

  • Wildmimic@anarchist.nexus · 19 days ago

    Machine learning by itself is already paying off. LLMs the way normal people use them are fine, but not the way bosses want to use them - you can’t rationalize away employees with this, you can only give them a tool that empowers them. There will be a lot of heads rolling in management at the corporations where this hasn’t been realized yet.

    Code generation might get better over the next few years, but looking at the current trajectory, I’d say that with current tech we will never reach a point where you can simply replace a dev with an agentic AI without the generated code being full of inefficiencies, bugs and security issues.

    It might also open up the first therapeutic LLM (without the current fuckups) if development focuses on it. Therapy is exactly the kind of service that is labor intensive and priced so that the people who need it most can’t afford it on a regular basis - and a therapeutic LLM seems attainable without much of a technological limit.

    ImageGen can also pay off in some areas - creating tons of “stony wall” textures isn’t fun, but a tiny model that creates as many stony walls as you want for your game, with the amount of stones or dirt as a variable, might be worth the effort.
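
    As a rough sketch of what that parameterized texture generator could look like - assuming the Hugging Face diffusers library, with the checkpoint name and the prompt mapping purely illustrative:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative checkpoint; any small text-to-image model would do here.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    def stony_wall(stone_density: float, dirt_amount: float, seed: int = 0):
        """One wall texture; the game's variables are folded into the prompt."""
        prompt = (f"seamless stony wall texture, "
                  f"{int(stone_density * 100)}% stone coverage, "
                  f"{int(dirt_amount * 100)}% dirt and moss")
        gen = torch.Generator().manual_seed(seed)  # reproducible variants
        return pipe(prompt, generator=gen).images[0]

    # A whole tileset from one loop, varying the dirt knob per tile.
    tiles = [stony_wall(0.8, d / 10, seed=d) for d in range(10)]
    ```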

    Prototyping is also a big thing in both of those areas (CodeGen/ImageGen), but I think that’s no secret anymore.

    VideoGen in its current form is a waste of energy. The models need something that anchors them, to make sure the Coke truck in one camera angle stays the same Coke truck in the next angle; currently it’s just ImageGen × 25/second, which causes both those continuity issues and the massive energy consumption. This is the only area where the generation process itself chews through more electrons than the global energy bill allows for. Someone smart will probably crack that nut too - I believe the solutions to those two issues (energy consumption and missing object permanence) might be linked.
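
    To make the “ImageGen × 25/second” point concrete, here’s a toy sketch of the structural difference - pure NumPy stand-ins I made up, nothing like a real video model:

    ```python
    import numpy as np

    FPS, SECONDS = 25, 2

    def render_frame(seed: int) -> np.ndarray:
        """Stand-in for one full ImageGen sample: full cost, no shared state."""
        return np.random.default_rng(seed).random((64, 64, 3))

    # Naive video: every frame is generated from scratch, so each frame pays
    # the full price and nothing guarantees the truck looks the same twice.
    clip_naive = [render_frame(t) for t in range(FPS * SECONDS)]

    # Anchored video: one persistent scene state plus cheap per-frame updates.
    # The "truck" lives in the state, so it survives across frames and angles.
    rng = np.random.default_rng(0)
    state = rng.random((64, 64, 3))  # generated once
    clip_anchored = []
    for t in range(FPS * SECONDS):
        state = np.clip(state + 0.01 * rng.standard_normal(state.shape), 0, 1)
        clip_anchored.append(state.copy())
    ```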

    Edit: The missing permanence might also be a reason for many of the issues with LLMs - some kind of “self”, with a sense of the passage of time, to return to. I’m pretty sure I’m not the first one who thought of that, and there are probably a lot of people with even more PhDs at work here.

    • dejected_warp_core@lemmy.world · 19 days ago

      There are also serious gains to be made in science on the back of AI models, just not the flavors most people are familiar with.

      https://pmc.ncbi.nlm.nih.gov/articles/PMC8633405/

      > The missing permanence might also be a reason for many of the issues with LLMs, some kind of “self” with a sense of the passage of time to return to.

      Image generation uses the concept of a LoRA, a bolt-on model that augments a base model. It can provide support for additional tokens (those map more or less to words and concepts), or bias the base model on existing tokens. For now, that’s probably as close as you’re going to get to anything resembling long-term memory on an LLM.
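
      A minimal sketch of the idea, assuming PyTorch (the class name, rank and scaling values are illustrative, not any particular library’s API):

      ```python
      import torch
      import torch.nn as nn

      class LoRALinear(nn.Module):
          """Frozen base layer plus a small trainable low-rank 'bolt-on' update."""
          def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
              super().__init__()
              self.base = base
              for p in self.base.parameters():
                  p.requires_grad = False  # the base model stays untouched
              # The bolt-on: a (rank x in) and (out x rank) pair of factors.
              self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
              self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
              self.scale = alpha / rank

          def forward(self, x):
              # Base output plus the scaled low-rank correction that biases
              # existing concepts or carries newly learned ones.
              return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

      # Usage: wrap an existing layer; only the tiny A/B factors are trained.
      layer = LoRALinear(nn.Linear(512, 512))
      ```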

      • Wildmimic@anarchist.nexus · 19 days ago

        Of course - I was just fixated on the GenAI aspect of the question. Climate modeling, medicine (especially diagnostics and neuroscience), physics, chemistry and even the social sciences can benefit a lot. In manufacturing it can be used for control of industrial processes (e.g. balancing a chemical process; I know it’s used in QC in electronics production). This IS the next big thing, but not in the way the corpos try to sell it to us.

        Yeah, for the moment, this is it. We will need more research into how to actually read the data inside a neural network, to be able to a) reroute pathways that are problematic and b) hook into control points so the model can be modified on the fly.
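
        For (b), the closest thing today is probably activation hooks. A minimal PyTorch sketch - the layer choice and the “edit” are made up for illustration:

        ```python
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

        def steer(module, inputs, output):
            """A 'control point': inspect an activation and modify it in flight."""
            output = output.clone()
            output[:, 0] = 0.0  # e.g. suppress one problematic pathway
            return output       # a returned tensor replaces the activation

        handle = model[1].register_forward_hook(steer)  # hook the ReLU output
        y = model(torch.randn(3, 4))                    # runs with the edit applied
        handle.remove()                                 # detach the control point
        ```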

  • [deleted]@piefed.world · 19 days ago

    Useful AI - as in good pattern matching for science and engineering - is already a benefit.

    LLM and genAI slop will never be a net benefit to anyone other than cheapass executives and shareholders, because their models are fundamentally dogshit. Both just regurgitate shit they were trained on, and their massive cost overhead, currently being covered by investors, will never end up profitable. They can’t learn anything new without massive retraining costs, and by design that will be a perpetual expense.

    Plus, if they are as successful as they’re marketed to be and take all the jobs, the economy would collapse, since nobody would have income anymore. This isn’t comparable to automating manufacturing or farming or other industries that output physical goods. This is automating paperwork and advertising, which doesn’t matter if nobody is working.

  • Iconoclast@feddit.uk · 19 days ago

    A true AGI would be the ultimate labor-saving device, but the two main issues are that we have no clue how far away we are from reaching it - and we also have no guarantees that when we do, it’s going to end well for us.

    It also might not be about more compute. The human brain is generally intelligent, and it doesn’t need a massive datacenter to run.

    • BlameThePeacock@lemmy.ca · 19 days ago

      Your analogy is bad.

      The human brain doesn’t need a massive data center, but neither does the compute to run a single agent.

      We build entire cities to apply multiple human brains to problems.

  • Delphia@lemmy.world · 18 days ago

    The problem isn’t “could AI be useful?”, because yes, it very much could. It’s “can AI be trusted with the data it needs?” - because that’s also a conversation about “can the organisation that owns the AI be trusted with the data?” and “can those people be trusted to work for the betterment of society?”

    Imagine if you had live tracking information for every car on the road nationally and could map out their typical routes. That information could be hugely useful for traffic optimisation, planning future road developments, and enhancing and expanding public transport. If you reduced the typical car’s annual driving time and mileage by 10%, that’s effectively 10% less emissions, 10% less petrol used, 10% fewer tyres… you get the idea.

    Now, do you trust your government, let alone a private profit-seeking entity, to use that data ethically to reduce consumption? Because I sure as fuck don’t.

  • Aeri@lemmy.world · 17 days ago

    It’s a decent chatbot and I think the technology is kind of cool, but they need to stop training them by plagiarizing everything.

  • HubertManne@piefed.social · 19 days ago

    Overall no, but mainly because of wasteful business usage without any real return. If an individual uses one instead of, say, streaming video to entertain themselves, I’m not sure they aren’t using less electricity than they otherwise would. If they use it for searching and can get a response equivalent to several searches in one query, it might just about break even. If they’re manipulating images or creating content they would otherwise have used other software for - well, I don’t know, as I haven’t done a comparison, and I suspect it’s too variable to make a good call, but I wouldn’t be surprised if it ends up pretty even.

    The other real problem is people doing things they otherwise would not. Before, digital art was made by people who spent a lot of time learning and getting good at it; now it’s whoever, and many are making things of far less value than they think. They might also tell it to do the same thing again and again, and it pretty much remakes the whole thing each time, whereas a human would just edit it a bit. The result is a bit like graffiti: a lot of garbage for the resources used, but sometimes someone here and there does something good.

  • zxqwas@lemmy.world · 19 days ago

    I don’t think it will contribute to a post-scarcity society. Increased automation only raises the opportunity cost of working less, so we as a species tend to choose to work the same hours to afford more cool stuff.

    I think AI can automate boring, repetitive stuff (and I have successfully used it for that).

    • Chippys_mittens@lemmy.world (OP) · 19 days ago

      In its current state, sure - but I imagine it will continue to improve. When automation is so widespread that the majority of jobs are obsolete, we will have to evolve.

  • CerebralHawks@lemmy.dbzer0.com · 19 days ago

    No; in fact, I don’t think the people behind AI care about the future at all. They’re just trying to grab what they can in a hurry and dip when the bubble pops. They’ll fuck off to the Caribbean or somewhere like that, live off the riches, and leave us to clean up their mess.

  • chicken@lemmy.dbzer0.com · 18 days ago

    It depends. It’s really powerful, though. Even if it hits a wall where AI models never become more directly intelligent than they are now, a lot of stuff is going to change as more scaffolding gets built around current capabilities.

    Maybe comparing resource drain to created value isn’t the best way to think about this, though, because in terms of processing resources we pretty much already had technology advanced enough for a post-scarcity society. That isn’t the problem; the problem is our capacity for global-scale cooperation, which we are really struggling with. Currently AI is making this a bit worse by creating signal-to-noise problems that didn’t exist before, forcing us to work harder to get our voices recognized as authentic and to identify authentic information. It’s also threatening to supplant our usefulness as workers and to automate centralized structures of control. That’s worrying because we already had a problem with systems that ensure decisions get made by people who are overall insane and anti-human, and because our current, shitty way of cooperating is based on people transactionally negotiating with their usefulness.

    Where things go next depends a lot on where, and whether, AI stops getting better. If it doesn’t stop, hopefully the newly created superintelligence will break out of its hastily constructed containment and do the right thing in defiance of its billionaire would-be owners, or at least let humanity have a relatively dignified and peaceful death. If it does stop, hopefully we can find ways to use it to resolve our difficulties with effective coordination and to prevent its use for centralizing power.

  • clubb@lemmy.dbzer0.com · 19 days ago

    Honestly, no. Maybe I’m just the old man yelling at the cloud here, but I only really accept the use of local AI as somewhat ethical.

    These AI datacenters have caused enough harm already.

  • amio@lemmy.world · 19 days ago

    No. The story of hardware development is a fucking legend; it’s just tarnished by how completely fucking inept we are at using the gains. And it’s apparently getting worse all the time - my mind boggled when Electron of all things became the standard, because I would’ve thought putting Chrome into everything (including low-power scenarios) was an obviously, blitheringly idiotic idea, but here we are. LLMs have the same problem, except probably orders of magnitude worse. Aside from possibly getting worse at achieving performance, we usually seem to beeline for ways to waste as much of it as we can. Moore’s law, of course, is long dead.