• Antsan@lemmy.world
    6 months ago

    One indication that drawing the boundary is hard is how badly current LLMs hallucinate. An LLM almost never states “I don’t know” or “I am unsure”, at least not in a meaningful fashion. Ask it about anything that’s known to be an unsolved problem and it’ll tell you so, but ask it about anything obscure and it’ll come up with some plausible-sounding bullshit.

    And I think that’s a failure to recognize the boundary of what it knows vs what it doesn’t.

    • dynomight@lemmy.worldOPM
      6 months ago

      I think this is a fair argument. Current AIs are quite bad at “knowing if they know”. I think it’s likely that we can/will solve this problem, but I don’t have any particularly compelling reason to believe that, and I agree that my argument fails if it never gets solved.