Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • mesa@piefed.social · 2 months ago · +4

    I think it's important to figure out what you mean by "AI."

    I'm thinking the majority of people here are talking about LLMs, but there are other AIs that have been quietly worked on that are finally making huge strides.

    AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there are a couple of GitHub repos out there). With the above we're dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in: open source everything (or as much as possible), make it so it's trained on all company-specific info too, and let anyone run it. I have a feeling we can't put the genie back in the bottle.

    If we're talking pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to scrape into AI training sets. Something like an onion network that only accepts real nodes/people when ingesting data.

  • Pulptastic@midwest.social · 2 months ago · +10/-1

    Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning natural gas to make fake pictures that everyone hates is just the worst.

  • mad_djinn@lemmy.world · 2 months ago · +6

    force companies to pay for the data they scraped from copyrighted works. break up the largest tech conglomerates so they cannot leverage their monopolistic market positions to further their goals, which includes the investment in A.I. products.

    ultimately, replace the free market (cringe) with a centralized computer system to manage resource needs of a socialist state

    also force Elon Musk to receive a neuralink implant and force him to hallucinate the ghostly impressions of spongebob squarepants laughing for the rest of his life (in prison)

  • traches@sh.itjust.works · 2 months ago · +20

    I just want my coworkers to stop dumping ai slop in my inbox and expecting me to take it seriously.

    • Riskable@programming.dev · 2 months ago · +2

      I dunno. It’s better than their old, non-AI slop 🤷

      Before, I didn’t really understand what they were trying to communicate. Now—thanks to AI—I know they weren’t really trying to communicate anything at all. They were just checking off a box 👍

  • BananaTrifleViolin@lemmy.world · 2 months ago · +3

    I’m not against AI itself—it’s the hype and misinformation that frustrate me. LLMs aren’t true AI (or at least not AGI, given how the meaning of "AI" has drifted), but they’ve been branded that way to fuel tech and stock-market bubbles. While LLMs can be useful, they’re still early-stage software, causing harm through misinformation and widespread copyright issues. They’re being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

    Real AI lies in advanced neural networks, which are still a long way off. I wish tech companies would stop misleading the public, but the bubble will burst eventually—though not before doing considerable harm.

  • Bwaz@lemmy.world · 2 months ago · +17/-1

    I’d like there to be a web-wide expectation that any AI-generated text, comment, story, or image be clearly marked as AI. People should feel incensed and angry when it isn’t labeled, rather than wondering whether a person with a soul produced the content, or losing faith that real information can be found online at all.

  • Soapbox1858@lemm.ee · 2 months ago · +6

    I think many comments have already nailed it.

    I would add that while I hate the use of LLMs to completely generate artwork, I don’t have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It’s not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it’s great. These use cases streamline otherwise tedious bullshit work that photographers usually don’t want to do.

    I also think it’s great hearing how the tech is improving scientific endeavors, like helping to spot cancers. As long as it’s done ethically, these are great uses for it.

  • banshee@lemmy.world · 2 months ago · +13

    I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

    I would much rather see the field advanced by scientific and academic interests. Let’s focus on solving problems that help everyone instead of temporarily boosting profit margins.

    I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

  • Jumi@lemmy.world · 2 months ago · +1/-1

    Shut it off until they figure out how to use a reasonable amount of energy and develop serious rules around it.

  • AsyncTheYeen@lemmy.world · 2 months ago · +13/-1

    People have negative sentiments towards AI under a capitalist system, where the most successful means the most profitable, and that does not translate into the most useful for humanity.

    We have the technology to feed everyone, and yet we don’t.
    We have the technology to house everyone, and yet we don’t.
    We have the technology to teach everyone, and yet we don’t.

    Capitalist democracy is not real democracy.

    • Randomgal@lemmy.ca · 2 months ago · +2

      This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.