• kadu@scribe.disroot.org

    There’s not a single world where LLMs cure cancer, even if we decided to give the entirety of our energy output and water to a massive server using every GPU ever made to crunch away for months.

    • 🍉 Albert 🍉@lemmy.world

      Which fucking sucks, because AI was actually getting good: it could detect tumours, it could figure things out fast, it could recognise images as a tool for the visually impaired…

      But LLMs are none of those things. All they can do is look like text.

      LLMs are an impressive technology, but so far they’re nearly useless and mostly a nuisance.

      • BilSabab@lemmy.world

        Down in Ukraine we have a dozen or so image analysis projects that can’t catch a break, because all investors can think about is either swarm drones (quite understandably) or LLM nothingburgers that burn through money and dissipate every nine months. Meanwhile those image analysis projects keep making progress on what are basically scraps and leftovers.

        • 🍉 Albert 🍉@lemmy.world

          The problem is that technical people can understand the value of different AI tools. But try telling an executive with a business major how mind-blowing it is that the same lab whose programs mastered Go and StarCraft went on to solve protein folding (I studied biology in 2010, and they kept repeating how impossible solving protein structures in silico was).

          But a chatbot that tells the executive how smart and special they are?

          That’s the winner.

          • kromem@lemmy.world

            That’s not…

            sigh

            Ok, so just real quick top level…

            Transformers (what LLMs are) build world models from their training data (Google “Othello-GPT” for the associated research).

            This happens because the model has to combine a lot of different pieces of information in a coherent way inside what’s called the “latent space” (there’s a rough sketch of how researchers probe for this at the end of this comment).

            The process is medium-agnostic. Given text it will do it with text, given photos it will do it with photos, and given both it will do it with both, specifically fitting the intersection of the two together.

            The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for a picture of an apple with the answer to a text question carved into it, and get one.

            There’s a pretty big difference in the UI/UX of code written by multimodal models vs. text-only models, for example, or in the utility of sharing a photo and saying what needs to be changed.

            The idea that an old-school NN would be better than modern multimodal transformers in any slightly generalized situation is… certainly a position. Just not one that seems particularly in touch with reality.
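
            A minimal sketch of that probing idea (my own illustration, not from the Othello-GPT paper): take hidden states from a pretrained transformer and fit a linear classifier to predict some property of the input. Othello-GPT probes for board state; here GPT-2 is probed for a toy property (whether the sentence mentions a colour) just to show the mechanics. The model choice, tiny dataset, and mean-pooling step are all assumptions for illustration.

```python
# Toy linear-probe sketch: does GPT-2's final hidden layer encode whether a
# sentence mentions a colour? (Stand-in for probing board state in Othello-GPT.)
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

# Tiny hypothetical dataset: label 1 if the sentence mentions a colour.
sentences = [
    ("the red car stopped", 1), ("a blue bird sang", 1), ("green leaves fell", 1),
    ("the car stopped", 0), ("a bird sang", 0), ("leaves fell", 0),
]

features, labels = [], []
with torch.no_grad():
    for text, label in sentences:
        inputs = tok(text, return_tensors="pt")
        out = model(**inputs, output_hidden_states=True)
        # Mean-pool the final layer's hidden states into one vector per sentence.
        features.append(out.hidden_states[-1].mean(dim=1).squeeze(0).numpy())
        labels.append(label)

probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("probe training accuracy:", probe.score(features, labels))
```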

    • Quetzalcutlass@lemmy.world

      And it’s clear we’re nowhere near achieving true AI, because those chasing it have made no moves to define the rights of an artificial intelligence.

      Which means that either they know they’ll never achieve one by following the current path, or that they’re evil sociopaths who are comfortable enslaving a sentient being for profit.

    • HereIAm@lemmy.world

      Not strictly LLMs, but neural nets are really good at protein folding, something that directly helps with understanding cancer, among other things. I know an answer doesn’t magically pop out, but it’s important to recognise the use cases where NNs actually do work well (rough sketch below).
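
      As a hedged illustration (mine, not from the comment): a folding model like ESMFold can be driven with a few lines of Python. The public ESM Atlas endpoint and the toy sequence below are assumptions; the service may be rate-limited or offline, and local AlphaFold/ESMFold inference is the heavier but equivalent route.

```python
# Sketch: ask ESMFold (via the ESM Atlas web API, if available) to predict a
# 3D structure for a short amino-acid sequence. The endpoint URL and sequence
# are illustrative assumptions, not something from the thread.
import requests

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # toy sequence

resp = requests.post(
    "https://api.esmatlas.com/foldSequence/v1/pdb/",  # assumed public endpoint
    data=sequence,
    timeout=120,
)
resp.raise_for_status()

# The response body is a PDB file with the predicted atomic coordinates.
with open("prediction.pdb", "w") as f:
    f.write(resp.text)
print(resp.text.splitlines()[0])
```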

      • merc@sh.itjust.works

        I’m trying to guess what industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called “GPUs” that can no longer even do graphics. They don’t even do floating point calculations anymore, and I’ve heard their integer matrix calculations are lossy. So, basically useless for almost everything other than AI.

        One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.

        • MotoAsh@piefed.social

          Integer calculations are “lossy” only in the sense that they’re integers; there’s nothing extra going on. Those GPUs have plenty of uses.

          • merc@sh.itjust.works

            I don’t know too much about it, but according to people who do, these things are ultra-specialized and essentially worthless for anything other than AI-type work (see the quick throughput sketch after the quote):

            anything post-Volta is literally worse than worthless for any workload that isn’t lossy low-precision matrix bullshit. H200’s can’t achieve the claimed 30TF at FP64, which is a less than 5% gain over the H100. FP32 gains are similarly abysmal. The B100 and B200? <30TF FP64.

            Contrast with AMD Instinct MI200 @ 22TF FP64, and MI325X at 81.72TF for both FP32 and FP64. But 653.7TF for FP16 lossy matrix. More usable by far, but still BAD numbers. VERY bad.

            https://weird.autos/@rootwyrm/115361368946190474
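
            For context, here’s a rough way to see that gap yourself (my own sketch, not from the linked post): time the same matrix multiply at FP64, FP32, and FP16 on an NVIDIA datacenter GPU with PyTorch. On recent parts the FP16 tensor-core path is dramatically faster, which is the “lossy low-precision matrix” work being complained about. Matrix size and iteration count are arbitrary.

```python
# Rough matmul throughput comparison across precisions on a CUDA GPU.
import time
import torch

assert torch.cuda.is_available(), "needs a CUDA GPU"

def matmul_tflops(dtype, n=4096, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    # One n-by-n matmul is roughly 2*n^3 floating-point operations.
    return 2 * n**3 * iters / elapsed / 1e12

for dtype in (torch.float64, torch.float32, torch.float16):
    print(dtype, f"{matmul_tflops(dtype):.1f} TFLOP/s")
```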

            • MotoAsh@piefed.social

              AI isn’t even the first or the twentieth use case for those operations.

              All the “FP” figures quoted are about floating-point precision, which matters more for training and for finely detailed models, especially FP64. Integer-based matrix math comes up plenty often in optimized (quantized) inference, which is becoming more and more the norm, especially with China’s research on shrinking models while retaining accuracy. A tiny sketch of that kind of integer math is below.
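
              For a concrete picture (my own sketch, nothing from the thread): int8 inference scales values into 8-bit integers, does the matrix multiply exactly in integer arithmetic, then rescales. The only loss is the rounding into 8 bits; the numbers and shapes here are arbitrary.

```python
# Minimal int8 quantized matmul vs. the float32 reference.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # "weights"
x = rng.normal(size=(4,)).astype(np.float32)    # "activations"

def quantize(t):
    """Symmetric int8 quantization: return int8 values plus the scale factor."""
    scale = np.abs(t).max() / 127.0
    return np.clip(np.round(t / scale), -127, 127).astype(np.int8), scale

wq, w_scale = quantize(w)
xq, x_scale = quantize(x)

# The integer multiply-accumulate itself is exact (done in int32);
# all the "loss" happened in the rounding step above.
y_int8 = (wq.astype(np.int32) @ xq.astype(np.int32)) * (w_scale * x_scale)
y_fp32 = w @ x

print("fp32:", y_fp32)
print("int8:", y_int8)
print("max abs error:", np.abs(y_fp32 - y_int8).max())
```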