• Truscape@lemmy.blahaj.zone · +56/-12 · edited · 4 months ago

    Distributed platform, owned by no one, founded by people who support individual control of data and content access

    Majority of users are proponents of owning what one makes and supporting those who create art and entertainment

    AI industry shits on above comments by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit

    Buddy, do you know what you’re here for?

    EDIT: removed bot accusation, forgot to check user history

    • dactylotheca@suppo.fi · +38/-7 · 4 months ago

      Or are you yet another bot lost in the shuffle?

      Yes, good job, anybody with opinions you don’t like is a bot.

      It’s not like this was even a pro-AI post; it was just pointing out that even the most facile “AI bad, applause please” stuff will get massively upvoted.

        • dactylotheca@suppo.fi · +11/-4 · 4 months ago

          HaVe YoU ConSiDeReD thE PoSSiBiLiTY that I’m not pro-AI and I understand the downsides, and can still point out that people flock like lemmings (*badum tss*) to any “AI bad” post regardless of whether it’s actually good or not?

          • Doll_Tow_Jet-ski@fedia.io · +5/-1 · 4 months ago

            Ok, so your point is: Look! People massively agree with an idea that makes sense and is true.

            Color me surprised…

          • grrgyle@slrpnk.net · +3/-2 · 4 months ago

            Why would a post need to be good? It just needs a good point. Like this post is good enough, even if I don’t agree that we have enough facile “ai = bad” posts.

            Depends on the community, but for most of them pointing out ways that ai is bad is probably relevant, welcome, and typical.

        • Voyajer@lemmy.world · +2 · 4 months ago

          Why would you lend any credence to the weakest appeal to the masses presented on the site?

      • Truscape@lemmy.blahaj.zone · +8/-5 · 4 months ago

        Yeah, I guess that was a bit too far, posted before I checked the user history or really gave it time to sit in my head.

        Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.

  • Brotha_Jaufrey@lemmy.world · +22 · 4 months ago

    Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc, that in many people’s eyes, it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far right billionaires.

    • Swedneck@discuss.tchncs.de · +1 · 3 months ago

      you do realize mechanized looms were used to put people out of jobs and very very very clearly harm them, right? this isn’t an argument in favour of AI, it’s an argument against it.

    • Pup Biru@aussie.zone · +4/-1 · edited · 4 months ago

      i’m pro-AI (with huuuuge caveats) but i disagree with this… AI reduces certain jobs in a similar way, but it also enables large scale manipulation and fucks with our thought processes on a large scale

      i’d say it’s like if a mechanised weaving loom also invented the concept of disinformation and propaganda

      … but also, the mechanised weaving loom affected a single industry: modern ML has the potential to affect the large majority of people: it’s on a different scale than the disruption of the textile industry

      • naught101@lemmy.world · +1 · 4 months ago

        Agree it’s on a different scale (everything is relative to 200 years ago).

        One of the main “benefits” of mechanised factory machinery in the early 1800s was that it shifted the demand side of labour, such that capitalists had far more control over it. I reckon that counts as a kind of large scale manipulation (but yeah, probably not as pervasive in other domains of life).

  • ssillyssadass@lemmy.world · +10/-20 · 4 months ago

    I find it very funny how just a mere mention of the two letters A and I will cause some people to seethe and fume, and go on rants about how much they hate AI, like a conservative upon seeing the word “pronouns.”

  • qyron@sopuli.xyz · +2/-4 · 4 months ago

    True. Now shut up and take my upvote! No need for arguments; all has already been said.

    • grrgyle@slrpnk.net · +6/-1 · 4 months ago

      I prefer the fine vintage of a M$ = bad post, myself.

      Or perhaps even a spicy little Ubuntu = bad post.

  • chunes@lemmy.world · +8/-15 · 4 months ago

    I’m a lot more sick of the word ‘slop’ than I am of AI. Please, when you criticize AI, form an original thought next time.

  • Rose@slrpnk.net · +47/-3 · 4 months ago

    The currently hot LLM technology is very interesting and I believe it has legitimate use cases, if we develop it into tools that assist work. (For example, I’m very intrigued by the stuff that’s happening in the accessibility field.)

    I mostly have a problem with the AI business. Ludicrous use cases (shoving AI into places where it has no business being). Sheer arrogance about the sociopolitics in general. Environmental impact. LLMs aren’t good enough for “real” work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.

    And of course, the social impact was just not what we were ready for. “Move fast and break things” may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.

    I believe the AI business and the tech hype cycle are ultimately harming the field. Usually, AI technologies just got gradually developed and integrated into software where they served a purpose. Now, the field is marred with controversy for decades to come.

    • frog@feddit.uk · +2/-1 · 4 months ago

      Why the hell are you being downvoted? You are completely right.

      People will look back at this and “hoverboards” and think, “were they stupid!?”

      Mislabeling a product isn’t great marketing, it’s false advertisement.

      • Grimy@lemmy.world · +3/-1 · 4 months ago

        AI is an umbrella term that covers many things. We have been referring to simple pathfinding algorithms in video games as AI for two decades; LLMs are AI.

        • occultist8128@infosec.pub · +2 · 4 months ago

          Yes, LLMs are AI; anyone who doesn’t agree with this is stupid, sorry. But saying AI means only LLMs is wrong. Kindly take a look at my reasoning for why I am always against language misuse here: https://infosec.pub/comment/17417999. You English speakers sometimes make things harder to understand by misusing terms like ‘literally’ or using ‘AI’ to mean only LLMs. Language is meant to clarify, not confuse, and this shift in meaning can lead to misunderstandings, especially when talking about technical concepts.

        • frog@feddit.uk · +1 · 4 months ago

          There is a distinction between video game AI and computer science AI. People know that video game AI isn’t really AI. Marketing LLMs with terms like “super intelligence” is deception.

          No one is typing out prompts to an NPC asking if dogs can eat chocolate.

          • Grimy@lemmy.world · +2 · 4 months ago

            Calling an LLM an AI isn’t saying it’s super intelligent, and I don’t know of any company that is marketing it like that. There aren’t multiple definitions of AI depending on the industry you are in.

            Just read the wiki; it is pretty clear. Something does not have to be “intelligent” to be considered AI, just like a shooting star isn’t actually a star. It’s an umbrella term that covers many things, including video game pathfinding, LLMs, recommendation systems, autonomous driving solutions, etc.
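
(For what it’s worth, the “game AI” in that list really is just search. Here’s a minimal sketch of the kind of pathfinding that’s been shipped as “AI” in games for decades; `find_path` and the grid encoding are hypothetical, not taken from any real engine: 0 = walkable tile, 1 = wall.)

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a tile grid: classic 'video game AI'.

    grid is a list of rows, 0 = walkable, 1 = wall (hypothetical encoding).
    Returns the shortest list of (row, col) steps from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # parent links; also serves as the visited set
    while queue:
        cur = queue.popleft()
        if cur == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and nxt not in came_from:
                came_from[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable
```

No learning, no “intelligence”, just a queue; and it was still sold on the box as AI long before LLMs existed.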

      • occultist8128@infosec.pub · +3/-1 · 4 months ago

        IDK LMAO, that’s what I really hate about Reddit/Lemmy: the voting system. People downvote but don’t say where I’m wrong in their opinion. I mean, at least argue; state your (supposedly harmless) opinion out loud. I even added a disclaimer there that I don’t promote LLMs and such. I don’t really care either; I stand with correctness and do what I can to correct what is wrong. I totally agree with @sentient_loom@sh.itjust.works tho.

      • occultist8128@infosec.pub · +4/-6 · edited · 4 months ago

        The LLM shills have made “AI” refer exclusively to LLMs.

        Yes, I agree, and it’s unacceptable to me. Now most people here are falling into the same hole. I’m not here to promote, support, or stand with LLMs or gen-AI; I want to correct what is wrong. You can hate something, but please be objective and rational.

        • PeriodicallyPedantic@lemmy.ca · +3/-3 · 4 months ago

          Language is descriptive not prescriptive.

          If people use the term “AI” to refer to LLMs, then it’s correct by definition.

              • Sentient Loom@sh.itjust.works · +1 · 4 months ago

                So are you saying that a slur (for Black people, for example) is linguistically “correct by definition”? And that it actually describes members of the demographic?

                • PeriodicallyPedantic@lemmy.ca · +1 · 4 months ago

                  A slur is still a word.
                  I know you’re trying to trap me in some stupid gotcha, but idk what you think that’d prove.

                  What would you consider “linguistically correct” if not “follows grammar rules and conveys the intended meaning”?

                  If I say something absolutely heinous about your mother, does it stop being valid English just because it is morally reprehensible and fallacious? Of course not.

          • occultist8128@infosec.pub · +2/-2 · 4 months ago

            It’s partially correct, but “AI” doesn’t always mean LLM. Etymology is important here. Don’t normalize illiteracy.

            • PeriodicallyPedantic@lemmy.ca · +3/-2 · 4 months ago

              This is how etymology works.

              Do you think all the words we use today meant exactly the same thing 300 years ago?
              No: people used them “incorrectly”, that usage gained popularity, and that made it correct.

              What you call illiteracy is literally how etymology works.

              • occultist8128@infosec.pub · +3/-1 · 4 months ago

                Just to clarify, do you personally agree that LLMs are a subset of AI, with AI being the broader category that includes other technologies beyond LLMs?

                I come from a technical background and have worked in AI to help people and small businesses whether it’s for farming, business decisions, and more. I can’t agree with the view that AI is inherently bad; it’s a valuable tool for many. What’s causing confusion is that ‘AI’ is often used to mean LLMs, which is inaccurate from a technical perspective. My goal is simply to encourage precise language use to avoid misunderstandings. People often misuse words in ways that stray far from their original etymology. For example, in Indonesia, we use the word ‘literally’ as it’s meant — in a literal sense, not figuratively, as it’s often misused in English nowadays. The word ‘literally’ in Indonesian would be translated as ‘secara harfiah,’ and when used, it means exactly as stated. Just like ‘literally,’ words should stay connected to their roots, whether Latin, Greek, or otherwise, as their original meanings give them their true value and purpose.

                • PeriodicallyPedantic@lemmy.ca · +1 · 4 months ago

                  Depending on context, jargon and terminology change.
                  In this context, I’d agree that LLMs are a subset tech under the umbrella term “AI”. But in common English discourse, LLM and AI are often used interchangeably. That’s not wrong because correctness is defined by the actual real usage of native speakers of the language.

                  I also come from a tech background. I’m a developer with 15 years experience, and I work for a large company, and my job is currently integrating LLMs and more traditional ML models into our products, because our shareholders think we need to.
                  Specificity is useful in technical contexts, but in these public contexts, almost everyone knows what we’re talking about, so the way we’re using language is fine.

                  You know it’s bad when someone with my username thinks you’re being too pedantic lol. Don’t be a language prescriptivist.

          • Sentient Loom@sh.itjust.works · +3 · 4 months ago

            Not really, since “AI” is a pre-existing and MUCH more general term which has been intentionally commandeered by bad actors to mean a particular type of AI.

            AI remains a broader field of study.

            • PeriodicallyPedantic@lemmy.ca · +1/-1 · 4 months ago

              It doesn’t matter what you want, I’m just describing how language works.

              If everyone says a word means a thing, then it means that thing. Words can have multiple meanings.

              • Sentient Loom@sh.itjust.works · +1 · 4 months ago

                AI remains a broader field of study, an active field of study which tons of people are invested in, and they use AI to refer to the broader field of study in which they’re professionally invested.

                I’m just describing how language works.

                No you’re not. And you’re not as smart as you think you are.

                If everyone says a word means a thing

                It’s not literally everybody, and you know it, and you also know that LLMs are not the entire actual category of AI.

                • PeriodicallyPedantic@lemmy.ca · +2 · 4 months ago

                  That is beyond pedantry.

                  That is how language works. Word definitions are literally just informal consensus agreement. Dictionaries are just descriptions of observed usage. Not literally everyone needs to agree on it.
                  This isn’t some kind of independent conclusion I came to on my own; I used to think like you appear to, but then I watched some explanations from authors and from professional linguists, and they changed my mind about language prescriptivism.

                  If you say “AI” in most contexts, more people will know what you mean than if you say “LLM”. If your goal is communication, then by that measure “AI” is “more correct” (but again, correctness isn’t even applicable here)

            • occultist8128@infosec.pub · +2 · 4 months ago

              I completely agree. Using AI to refer specifically to LLMs does reflect the influence of marketing from companies that may not fully represent the broader field of artificial intelligence. Ironically, those who oppose LLM usage might end up sounding like the very bad actors they criticize if they also use the same misleading terms.

              • PeriodicallyPedantic@lemmy.ca · +1 · 4 months ago

                I don’t get to decide if the marketing terms used by the companies I hate end up becoming the common terms.

                If I stubbornly refuse to use the common terms and instead only use the technical terms, then I’m only limiting the reach of my message.

                OpenAI marketing has successfully made LLM one of the definitions of the term AI, and the most common term used to refer to the tech, in public spaces.

                • occultist8128@infosec.pub · +1 · 4 months ago

                  If I stubbornly refuse to use the common terms and instead only use the technical terms …

                  That’s where your role comes in as someone who knows the correct term. I myself often teach my close ones about tech and its terms in my field. I don’t want to normalize using wrong terms in a technical discussion. It’s up to us to either teach what’s right or get comfortable with what is already wrong and do nothing about it. Activists are educators as much as they are advocates.

                • occultist8128@infosec.pub · +2 · 4 months ago

                  As someone whose main language isn’t English, DeepL is useful for my local community (and for me). It’s all in how it’s implemented. Still being open-minded; yeah, the extensive resource usage is bad for the earth tho, wishing there would be optimization.

  • Mostly_Gaming@lemmy.world · +39/-8 · 4 months ago

    I personally think of AI as a tool, what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone’s skull in with it. But no one’s putting the hammer in jail.

    • oppy1984@lemdro.id · +25/-12 · edited · 4 months ago

      Seriously, the AI hate gets old fast. Like you said, it’s a tool; get over it, people.

        • oppy1984@lemdro.id · +5/-2 · 4 months ago

          Edited. That’s what I get for trying to type fast while my dog is heading for the door after doing her business.

    • PeriodicallyPedantic@lemmy.ca · +20/-2 · 4 months ago

      Yeah, except it’s a tool that most people don’t know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.

      So… It’s not a good tool for the average layperson to be using.

      • Randomgal@lemmy.ca · +5/-5 · 4 months ago

        Stop drinking the Kool-Aid, bro. Think about these statements critically for a second. Environmental harm? Sure. I hope you’re a vegan as well.

        Loss of media literacy: What does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead to that person choosing to cut corners; the problem is the demand for maximum efficiency at any cost, for top numbers. AI is making a problem evident, not causing it. If you’re home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.

        Do you feel you’ve become more stupid? Do you think it’s possible? Why would other people, who are just like you, be these puppets to be brainwashed by the evil machine?

        Ask yourself: how are people measuring intelligence? Creativity? How many people were in these studies, and who funded them? If we had the measuring instrument needed to actually make categorizations like “people are losing intelligence,” psychologists wouldn’t still be arguing over the exact definition of intelligence.

        Stop thinking of AI as a boogeyman inside people’s heads. It is a machine. People are using the machine to achieve mundane goals; that doesn’t mean the machine created the goal or is responsible for everything wrong with humanity.

        Huge increase in inequality? What? Brother, AI is a machine. It is the robber barons that are exploiting you and all of the working class to get obscenely rich. AI is the tool they’re using. AI can’t be held accountable. AI has no will. AI is a tool. It is people that are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages us to look at the evil machine instead. And don’t even use it; the less you know, the better. If you never engage with AI technology, you’ll believe everything I say about how evil it is.

        • PeriodicallyPedantic@lemmy.ca · +1 · 4 months ago

          That’s some real “guns don’t kill people, people kill people” apologist speak.
          The only way to stop a bad robber baron using AI is a good robber baron using AI? C’mon.
          I know that’s not exactly what you said, but it’s applicable.

          I work with these tools every day, both as a tool my employer wants me to use and because I’m part of the problem: I integrate LLMs into my company’s products, to make them “smart”. I’m familiar with the tech. This isn’t coming from a place of ignorance where I’ve just been swayed by Luddites due to my lack of exposure.

          When I use these tools I absolutely become temporarily stupider. I get into the rhythm of using them for everything instead of using them selectively.
          But I’m middle-aged, which means both that I’ll never be as good with them and that it’s harder for them to affect me long term; I’ve already largely finished developing my brain. For my generation I only worry that they’ll be a brand new source of misinformation, but I worry that (with the escalating attacks on our school system) they’ll result in generations of kids who grow up without having developed certain mental skills related to problem solving, because they’ll have always relied on these tools to solve their problems.

          I know it’s not the tool’s fault, but when a tool can so easily cause massive accidental harm, it’s easiest to just regulate the tool to curb the harm.

        • petrol_sniff_king@lemmy.blahaj.zone · +4/-1 · 4 months ago

          Literacy what? Just let me watch something funny.

          This is like the most pro-illiteracy thing I’ve ever read.

          Do you feel you’ve become more stupid?

          My muscles were weaker until I started training. As it turns out, the modern convenience that allows me to sit around all day doesn’t actually make me stronger by itself.

          It is people that are increasing inequality.

          Yes, what if the billionaires simply chose not to, hm? Have I ever thought of that? Probably not, I’m very stupid.

      • Ceedoestrees@lemmy.world · +1 · 4 months ago

        And neither does AI? The massive data centers are having negative impacts on local economies, resources and the environment.

        Just like a massive hammer factory, mines for the metals, logging for handles and manufacturing for all the chemicals, paints and varnishes have a negative environmental impact.

        Saying something kills the planet just by existing is extreme hyperbole.

    • kibiz0r@midwest.social · +12/-12 · edited · 4 months ago

      “Guns don’t kill people, people kill people”

      Edit:

      Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)

      We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. “Guns don’t kill people, people do.” But some philosophers have argued that technology can have values built into it that we may not realise.

      The philosopher Don Ihde says tech can open or close possibilities. It’s not just about its function or who controls it. He says technology can provide a framework for action.

      Martin Heidegger was a student of Husserl’s, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don’t even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.

      Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you’re typing on the screen. It’s only when it breaks or it doesn’t do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it’s just the medium through which we experience the world.

      Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don’t experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.

      Now some of you are looking at me like “Bull sh*t. A person using a hammer is just a person using a hammer!” But there might actually be some evidence from neurology to support this.

      If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there’s a visual stimulus near its hand start firing when there’s a stimulus near the end of the rake, too! The monkey’s brain extends its sense of the monkey body to include the tool!

      And now here’s the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

      A person using a hammer is actually a new subject with its own way of seeing - ‘hammerman.’ That’s how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.

      You think guns don’t kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!

      So if we’re onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.

      I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.

      Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.

      But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.

      • Grimy@lemmy.world · +7/-2 · 4 months ago

        Bad faith comparison.

        The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That’s literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.

        AI is a tool, not a weapon. This is kind of melodramatic.

          • Ifera@lemmy.world · +3/-6 · 4 months ago

            GenAI is a great tool for devouring text and making practice questions, study guides, and summaries; it has been used as a marvelous tool for education and research. Hell, if set up properly, you can get it to give you references and markers into your original data showing where to find the answers to the questions on the study guide it made you.

            It is also really good for translation and simplification of complex text. It has its uses.

            But the oversimplification and massively broad scope LLMs have taken on, plus the lack of proper training for users, are part of the problem Capitalism is capitalizing on. They don’t care about the consumer’s best interest; they just care about a few extra pennies, even if those are coated in the blood of the innocent. But a lot of people just foam at the mouth when they hear “AI”.

            • considerealization@lemmy.ca · +3/-2 · 4 months ago

              Those are not valuable use cases. “Devouring text” and generating images is not something that benefits from automation. Nor is summarization of text. These do not add value to human life and they don’t improve productivity. They are a complete red herring.

              • Ifera@lemmy.world
                link
                fedilink
                arrow-up
                3
                arrow-down
                1
                ·
                4 months ago

                Who talked about image generation? That one is pretty much useless; for anything that needs to be generated on the fly like that, a stick figure would do.

                Devouring text like that has been instrumental in learning for my students, especially the ones who have English as a Second Language (ESL), so its usability in teaching would be interesting to discuss.

                Do I think general open LLMs are the future? Fuck no. Do I think they are useless and unjustifiable? Neither. I think, in their current state, they are a brilliant beta test on the dangers and virtues of large language models: how they interact with the human psyche, and how they can help minorities, especially immigrants and other oppressed groups (hence why I advocated for a class on how to use them appropriately for my ESL students), bridge gaps in understanding, realize their potential, and have a better future.

                However, we need to solve, or at least reduce, the grip Capitalism has on that technology. As long as it is fueled by Capitalism, enshittification, dark patterns, and many other evils will strip it of its virtues and sell them for parts.

          • Pup Biru@aussie.zone
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            6
            ·
            4 months ago

            then you have little understanding of how genai works… the social impact of genai is horrific, but arguing the tool is wholly bad conveys a complete or purposeful misunderstanding of the context

            • considerealization@lemmy.ca
              link
              fedilink
              arrow-up
              4
              arrow-down
              1
              ·
              4 months ago

              I’m not an expert in AI systems, but here is my current thinking:

              Insofar as ‘GenAI’ is defined as

              AI systems that can generate new content, including text, images, audio, and video, in response to prompts or inputs

              I think this is genuinely bad tech. In my analysis, there are no good use cases for automating this kind of creative activity in the way that the current technology works. I do not mean that all machine assisted generation of content is bad, but just the current tech we are calling GenAI, which is of the nature of “stochastic parrots”.

              I do not think every application of ML is trash. E.g., AI systems like AlphaFold are clearly valuable and important, and in general the application of deep learning to solve particular problems in limited domains is valuable

              Also, if we first had a genuinely sapient AI, then its creations would be of a different kind, and I think they would not be inherently degenerative. But that is not the technology under discussion. Applications of symbolic AI to assist in exploring problem spaces, or of ML to solve classification problems, also seem genuinely useful.

              But, indeed, all the current tech that falls under GenAI is genuinely bad, IMO.

              • Pup Biru@aussie.zone
                link
                fedilink
                English
                arrow-up
                2
                arrow-down
                1
                ·
                4 months ago

                things like the “patch x out of an image” allows people to express themselves with their own creative works more fully

                text-based genai has myriad purposes that don’t involve wholesale generation of entirely new creative works:

                using it as a natural language parser in low-stakes situations (think: you’re browsing a webpage and want to add an event to your calendar, but it just has a paragraph of text that says “next wednesday at xyz”)

                the generative part makes them generically more useful than specialist models (though certainly less accurate most of the time), and people can use them to build novel things on top of rather than being limited to the original intent of the model creator

                everything genai is used for should be low-stakes: things that humans can check quickly, or where it doesn’t matter if it’s wrong… because it will be wrong some of the time
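
                the calendar-event idea above can be sketched in a few lines. this is a hypothetical illustration, not a real API: `call_model` stands in for whatever LLM endpoint you’d actually use, and is stubbed here so the snippet is self-contained. the point is the low-stakes failure mode — if the model emits garbage, you just fall back to doing it by hand.

                ```python
                import json

                def call_model(prompt: str) -> str:
                    # Stand-in for a real LLM call; stubbed with a canned
                    # response so the sketch runs on its own.
                    return '{"title": "Team sync", "day": "Wednesday", "time": "15:00"}'

                def extract_event(page_text: str) -> dict:
                    """Use the model as a natural-language parser: free-form
                    text in, a structured calendar entry out."""
                    prompt = (
                        'Extract the event from the text below as JSON with '
                        'keys "title", "day", "time". Text:\n' + page_text
                    )
                    try:
                        return json.loads(call_model(prompt))
                    except json.JSONDecodeError:
                        # Low-stakes: if the output isn't valid JSON, give up
                        # gracefully and let the user add the event manually.
                        return {}

                event = extract_event("Team sync next Wednesday at 3pm")
                ```

                the human glances at the parsed entry before saving it, which is exactly the “quick to check, cheap to be wrong” property the comment describes.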

      • Ignotum@lemmy.world
        link
        fedilink
        arrow-up
        12
        arrow-down
        3
        ·
        4 months ago

        My skull-crushing hammer that is made to crush skulls and nothing else doesn’t crush skulls; people crush skulls.
        In fact, if more people had skull-crushing hammers in their homes, I’m sure that would lead to a reduction in the number of skull-crushings. The only thing that can stop a bad guy with a skull-crushing hammer is a good guy with a skull-crushing hammer.

        • Pup Biru@aussie.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          4 months ago

          you’re absolutely right!

          the ban on guns in australia has been disastrous! the number of good guys with guns has dropped dramatically and … well, so has the number of bad guys … but that’s a mirage! ignore our near 0 gun deaths… that’s a statistical anomaly!

        • Pup Biru@aussie.zone
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          4 months ago

          as an aussie, yeah, then you should stop people from having guns

          i honestly wouldn’t be surprised if the total number of gun deaths in australia since we banned guns (1996) was less than the number of gun deaths in the US THIS WEEK

          the reason is irrelevant: the cause is obvious… and i’d have bought the “to stop a tyrannical government” argument a few years ago, but ffs there’s all these kids dying in schools and none of that stops the tyrant, so maybe that’s a fucking awful argument and we have it right down under

          • Kintarian@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            4 months ago

            I’ve never understood how a redneck prepper thinks he’s going to protect himself with a bunch of guns from a government that has millions of soldiers, tanks, machine guns, Sidewinder missiles, and nuclear weapons.

      • imetators@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 months ago

        We once played this game with friends where you get a word stuck on your forehead and you have to guess what you are.

        One guy got C4 (as in the explosive) to guess, and he failed. I remember we had to agree with each other on whether C4 is or is not a weapon. The main idea was that explosives are comparatively rarely used for actual killing; they’re mostly used in mining and such. A parallel question was: is a knife a weapon?

        But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.

        Take guns away, and people will kill with literally anything else. But give easy access to guns, and people will kill with them. A gun is not a tool; it is a weapon by design.

  • rustydrd@sh.itjust.works
    link
    fedilink
    arrow-up
    38
    arrow-down
    8
    ·
    4 months ago

    Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we’ve got now is just dumb.