• ruan@lemmy.eco.br · 9 days ago

    LLMs are a pretty good tool for summarizing a subject, or for presenting it in different words or from a different angle… They are statistical word predictors, after all.

    So yeah, if you understand that LLMs:

    • don’t possess intelligence;
    • only reproduce patterns from their training material;
    • can’t possibly contain ALL the “knowledge” from that material;
    • are directly influenced by the context they are given;

    Then I’d say LLMs can be a very good facilitator for learning about almost any subject that has already been documented in written form, in almost any language.
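
    To make the “statistical word predictor” point concrete, here is a deliberately tiny, hypothetical sketch (plain bigram counts, nothing like a real transformer): the “model” picks the next word purely from probabilities observed in its training text, which is also why it can’t hold everything from that text and why the preceding context steers the output.

    ```python
    # Toy "word predictor": a bigram model that samples the next word purely
    # from probabilities counted in its training text. A real LLM works at an
    # enormously larger scale (tokens plus a learned neural network), but the
    # core idea is the same: predict the next token from context statistics.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # "Training": count which word follows which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Sample the next word in proportion to how often it followed `word`."""
        counts = follows[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # "the" is followed by "cat" half the time and by "mat"/"fish" a quarter
    # each, so the answer is a probability draw, not retrieved "knowledge".
    print(predict_next("the"))
    ```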

    • RawHex@lemmy.ml · 9 days ago

      LLMs are a pretty good tool to summarize any subject

      They’re not even good at that.

      • xep@discuss.online · 9 days ago

        It feels like the models got worse at this as they were iterated on. I wonder if it’s because of all the guard-railing, internal censorship, etc.

        • RawHex@lemmy.ml · 9 days ago

          It’s just a messy process; it’s purely based on probability. I don’t think it’ll ever get good, and people were simply being lied to. Don’t forget that they relied on the users’ “confirmation bias” and “selection bias” as well.