Source (Bluesky)

Transcript

recently my friend’s comics professor told her that it’s acceptable to use gen AI for script-writing but not for art, since a machine can’t generate meaningful artistic work. meanwhile, my sister’s screenwriting professor said that they can use gen AI for concept art and visualization, but that it won’t be able to generate a script that’s any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It’s only ever the jobs we’re unfamiliar with that we assume can be replaced with automation. The more attuned we are to certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don’t.

  • raspberriesareyummy@lemmy.world · 2 months ago

Orrr… hear me out, this is gonna sound wild… maybe this isn’t even a debate we need to have until we have actual fucking AI, which machine learning slop IS NOT. And seeing the kind of morons hyping “AI”, chances are mankind will never develop true AI, because the funding goes to the morons screaming loudest instead of actual experts slash scientists.

  • boogiebored@lemmy.world · 2 months ago

I just focus on the parts of what I do that AI can help me with, instead of trying to say AI can replace other people but not me. That’s some dumb shit.

  • sp3ctr4l@lemmy.dbzer0.com · 2 months ago

    So basically all these teachers are myopic assholes, is what I’m reading.

AI is just… it’s a broken mirror, a poisoned forbidden fruit.

    It just brings out the worst in everyone and everything.

  • Endymion_Mallorn@kbin.melroy.org · 2 months ago

    Being honest, I don’t like using AI for much of anything. I have been encouraged to use it at work, but aside from rubber-ducking with it to plan out my own strategies, it’s useless.

At home, it’s a chatbot. I initially used it the way I use random name or location generators for writing and RPGs. Now, I stick to Donjon and a few others. The biggest thing I ever had it successfully do was help me construct puns I couldn’t quite figure out.

    • leftzero@lemmy.dbzer0.com · 2 months ago

      Well, they do have the one job that actually can be replaced by “AI” (though in most cases it’d be more beneficial to just eliminate it altogether).

      • fartographer@lemmy.world · 2 months ago

Which is acting like they know everything about everyone else’s jobs while making up wholly inaccurate assumptions.

  • BotsRuinedEverything@lemmy.world · 2 months ago

    I am 100% positive ai cannot take my job or replace me. In related news, I’m the only person in the world who makes a very specific thing.

  • llama@lemmy.zip · 2 months ago

AI absolutely can be used for the work they know best; it’s just that the individual using it will be the only one who knows how to use it correctly, and everyone else will just be making slop.

    • mirshafie@europe.pub · 2 months ago

Exactly. There’s a Dunning-Kruger effect here: if you don’t know what you’re doing, it looks like AI just miraculously does what you would have wanted it to do if you were smart enough to craft a good prompt.

      But if you know what you’re doing, you know what tasks it is worth using AI for, and then you craft a good prompt, get a lot of valuable processing done by the AI, and then review, fine-tune and polish the rest.

      • Jankatarch@lemmy.world · 2 months ago

Funnily enough, even that second part involves a lot of chaos due to the Dunning-Kruger effect yet again.

I had 4 different friends say, with some paraphrasing, “I am experienced enough in programming, I can use it responsibly” during group projects at my uni.

For context, those friends ranged from juniors to freshmen. We were pretty skilled, but NOWHERE near that experienced.

      • brianary@lemmy.zip · 2 months ago

It’s Dunning-Kruger, so when you understand the complexities within your area of expertise, you’ll doubt the likelihood of effective automation using statistical brute force.

        • mirshafie@europe.pub · 2 months ago

          I’m not going to argue that take on “Fuck AI” but come on man, that’s just not reasonable.

          • brianary@lemmy.zip · 2 months ago

What’s unreasonable about doubting automation for topics a person knows well enough to doubt their own mastery of?

  • AeonFelis@lemmy.world · 2 months ago

Hot take: it’s reasonable for a comics student to use AI for script-writing and for a screenwriting student to use AI for concept art, not because machines can generate meaningful artistic work in those fields, but because those are not the fields they are trying to learn.

In a way, this can be used to level the field. The comics professor can use the same LLM to generate scripts for all their students. It’ll be a slop script, but the slop will be of uniform quality, so no student will have the advantage of better writing, and it’d be easier to judge their work on the drawing alone.

And even if AI could generate true art in some field, why would it be acceptable for a student to use it for the very field they are studying and need to polish their own skills in?

    • jj4211@lemmy.world · 2 months ago

Yeah, the comics professor is there to grade the visuals, and the text is filler; it could be lorem ipsum for all they care. Similarly, a screenwriter using AI to storyboard seems fine, since it’s not the core product.

The ideal would be cross-discipline projects bringing students together, similar to how they’d be expected to work in the real world. But when individual assignments call for ‘filler’ content to stand in for one of those other disciplines, I think I could accept an LLM as a reasonable compromise. I would expect some assignments to ask students to go beyond their core discipline for some perspective, and an LLM would be bad for that, but I could see a place for skipping the irrelevant complementary pieces of a good chunk of assignments.

  • orbitz@lemmy.ca · 2 months ago

I’m a programmer, and I think both are an art that can’t be replicated well by AI. Sure, you get an acceptable pic, and you may get something written well (okay, stretching more here; I haven’t read anything by AI that makes me think that, but my exposure has been minimal, so I’m giving some leeway), but human art is its own quality.

Just reminded me of a bit from the Dune novels: people were putting rocks out to be sandblasted by a dust storm and selling them as art. I guess I agree with Duncan on that.

Wait, art needs the emotion bit, huh; it probably has to mean more than generated stuff. Another realization, but good to understand. I do think the human component is necessary… until it isn’t, but not today.

  • Zachariah@lemmy.world · 2 months ago

And for all the things we aren’t experts in, we’re unqualified to evaluate the AI’s output.

    • morto@piefed.social · 2 months ago

And that’s exactly why AI is not as useful as people think. If you don’t know something, you can’t evaluate the correctness of the output, and if you do know it, why bother using AI?

    • purplemonkeymad@programming.dev · 2 months ago

Generalists can be really good at getting stuff done. They can quickly identify the experts needed when something is beyond their scope. Unfortunately, overconfident generalists tend not to bring the experts in to help.

      • wabasso@lemmy.ca · 2 months ago

        This makes a lot of sense. A good lesson even outside the context of AI.

  • GreenKnight23@lemmy.world · 2 months ago

    let’s not confuse LLMs, AI, and automation.

    AI flies planes when the pilots are unconscious.

    automation does menial repetitive tasks.

    LLMs support fascism and destroy economies, ecologies, and societies.

    • ricecake@sh.itjust.works · 2 months ago

      I’d even go a step further and say your last point is about generative LLMs, since text classification and sentiment analysis are also pretty benign.

      It’s tricky because we’re having a social conversation about something that’s been mislabeled, and the label has been misused dozens of times as well.

      It’s like trying to talk about knife safety when you only have the word “pointy”.

      • GreenKnight23@lemmy.world · 2 months ago

        It’s like trying to talk about knife safety when you only have the word “pointy”.

        holy shit yes! it’s almost like the corpos did it that way so they can just move the goalposts when the bubble pops.

        • ricecake@sh.itjust.works · 2 months ago

I generally assume the shallower intent when it’s just as explanatory. It’s the same reason home appliances occasionally get a burst of AI labeling: “artificial intelligence” sounds better in advertising than “interpolated multi-variable lookup table”. It’s a type of simple AI (measure water filth from an initial rinse, plus dry weight, soaked weight, and post-spin weight, then pick the settings from preprogrammed values), but it’s still AI.

Biggest reason I think it’s advertising instead of something more deliberate is that this has happened before. There’s some advance in the field, people think AI has allure again, and so everything gets labeled that way. Eventually people realize it’s not the be-all and end-all and decide that it’s not AI, it’s “just” a pile of math that helps you do something. Then it becomes ubiquitous, and people think the notion of calling autocorrect AI is laughable.

    • Javi@feddit.uk · 2 months ago

My favourite ‘will one day be pub trivia’ snippet from this whole LLM mess is that society had to create a new term for AI (AGI) because LLMs muddied the once-accurate term.

      • jj4211@lemmy.world · 2 months ago

To be fair, AI was still underwhelming compared to what people imagined AI to be; it’s just that the LLM crowd essentially swore up and down that this was the AI everyone had been waiting for, and that moved the goalposts so that ‘AGI’ had to be named specifically.

  • Tattorack@lemmy.world · 2 months ago

The only good AI I’ve come across is the one I use for denoising Cycles renders in Blender3D, as that’s something a human cannot reasonably do.

That’s the only scenario where something like AI has any use as a “tool”: doing things humans cannot reasonably do.

      • Tattorack@lemmy.world · 2 months ago

        Here’s another for you:

        Play around with sample sizes and render tile sizes in the performance menu (same place where you find the denoising options).

Depending on your set-up, you can see a drastic improvement in render times by choosing smaller tile sizes. Samples are also counted per render tile, so you could get away with very low sample counts and still end up with a completed render whose overall combined sample count is higher.

        • Tom Arrr@lemmy.world · 2 months ago

Did not know that about the tile size. I never worried too much about performance, as my previous laptop had plenty of grunt. But that blew up, and now I’m on a 10-year-old machine that was tired when it was new. I need everything I can get 😆

    • technocrit@lemmy.dbzer0.com · 2 months ago

Good news: there’s absolutely zero “intelligence” involved in computer functions doing math. No “AI” needed or detected, as usual.

      • Tattorack@lemmy.world · 2 months ago

        Yes, I know. It’s just a neural net. Still a calculator, just with more steps than a human can sift through. And they call that intelligence.