• hayvan@feddit.nl · 1 day ago

    AI even ruined AI. Up until this insane hype train, ML models were specialized tools to achieve their tasks. Now the whole field is dominated by LLMs and slopgen bullshit.

    • chronicledmonocle@lemmy.world · 1 day ago

      Yeah, that’s the annoying thing. Generative AI is actually really useful…in SPECIFIC situations. Discovering new battery tech, new medicines, etc. are all good use cases, because it’s basically a parrot and a blender combined, and most of those discoveries are rehashes of existing technologies in new and novel ways.

      It is not a fucking good replacement for a search engine when you ask “Why do farts smell?”. It uses way too much energy for that, and it hallucinates bullshit.

      • wewbull@feddit.uk · 22 hours ago

        It’s good for optimisation problems, where you have a complex high-dimensional space to search and you’re solving for some measurable quality.
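        The pattern described above — searching a high-dimensional space for some measurable quality — can be sketched in a few lines. This is a hypothetical toy example: `quality` stands in for whatever measurable score you care about (battery chemistry, hyperparameters, etc.), and plain random search stands in for the fancier ML-driven searches actually used in practice.

```python
import random

# Toy "measurable quality": peaks when every coordinate is 0.5.
# (Hypothetical stand-in for a real scoring function.)
def quality(x):
    return -sum((xi - 0.5) ** 2 for xi in x)

def random_search(dim=20, iters=5000, seed=0):
    """Sample candidate points in a dim-dimensional unit cube,
    keep the one with the best quality score."""
    rng = random.Random(seed)
    best_x, best_q = None, float("-inf")
    for _ in range(iters):
        x = [rng.random() for _ in range(dim)]
        q = quality(x)
        if q > best_q:
            best_x, best_q = x, q
    return best_x, best_q

best_x, best_q = random_search()
print(best_q)  # closer to 0 is better; a perfect score is 0
```

        The point is only the shape of the problem: a candidate generator, a scorer, and a loop that keeps the best candidate. Real systems swap random sampling for learned models that propose promising candidates.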

      • chiliedogg@lemmy.world · 1 day ago

        Yeah. They solved protein folding with ML a few years back. And I like using it for things like noise removal in Lightroom.

        But so much of it has been focused on useless (at best) bullshit that I just want the bubble to burst already.

        • piconaut@lemmy.ca · 1 day ago

          I agree with the general sentiment here, but just wanted to clarify that they definitely didn’t “solve protein folding” yet. AlphaFold is a significant improvement in structure prediction, and it generated a lot of hype, but some of the structures I’ve seen it put out are total nonsense.

    • BillyTheKid@lemmy.ca · 10 hours ago

      A lot of top researchers have already moved on from transformers.

      Yann LeCun, Meta’s longtime chief AI scientist, quit and called LLMs a “dead end” because scaling text-only models can’t produce real intelligence, and he’s not the only one who thinks so. Lots of engineers understand the limitations of LLMs.