Over the past few weeks, several US banks have pulled back from lending to Oracle to fund the expansion of its AI data centres, according to a report.

  • CileTheSane@lemmy.ca · 2 days ago

    An LLM has no knowledge.

    My calculator does not “know” that 2+2=4; it runs the code it was programmed with, which tells it to output 4. It has no knowledge or understanding of what it’s being asked to do; it just does what it is programmed to do.

    An LLM is programmed to guess what a human would say if asked who the 4th president of the United States was. It runs code shaped by its training data to output the most likely response. Is it true? Doesn’t matter. All that matters is that it sounds like something a human would say.
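
    A rough sketch of the distinction being drawn (a caricature: the prompt, the candidate answers, and the probabilities below are all invented for illustration):

    ```python
    # Toy contrast between rule-following and likelihood-guessing.
    # The "learned" probabilities are made up for illustration.

    def calculator(a: float, op: str, b: float) -> float:
        """Deterministic: same input, same output, no notion of truth."""
        ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y}
        return ops[op](a, b)

    def llm_next_answer(prompt: str) -> str:
        """Caricature of an LLM: emit the most probable continuation, true or not."""
        learned = {
            "The 4th president of the United States was": {
                "James Madison": 0.92,     # happens to be true
                "Thomas Jefferson": 0.05,  # sounds just as plausible
                "George Washington": 0.03,
            },
        }
        dist = learned[prompt]
        return max(dist, key=dist.get)  # most likely, not verified

    print(calculator(2, "+", 2))  # 4, every time
    print(llm_next_answer("The 4th president of the United States was"))
    ```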

    I trust the knowledge of my calculator more, because it was designed to give factually correct responses.

    • Not_mikey@lemmy.dbzer0.com · 1 day ago

      How do you know that George Washington was the first president? You weren’t around in 1789; you have no experiential knowledge of it, only declarative knowledge: you read it in a book or heard it from enough people to repeat the fact when asked. You are guessing what your history teacher would have said in elementary school. Declarative knowledge is just memory and repetition, and an LLM can do memory and repetition.

      Whether an LLM can determine truth depends on your definition of truth. If truth can only be obtained from experience and reasoning from first principles, then an LLM can’t determine truth. But then a statement like “George Washington was the first president” can’t be true either, because you can’t derive it from experience or first principles: you weren’t there, and no one alive was. That statement derives its validity and truth from the consensus of trustworthy people who say it’s true. An LLM can derive this sort of truth by determining the consensus of its training data, assuming that data comes from trustworthy sources, or that the more trustworthy sources are more strongly reinforced.
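
      As a toy illustration of that last point (the claims, sources, and trust weights are hypothetical, and this is not how any real LLM is actually trained):

      ```python
      # Weighted-consensus toy: "truth" is whatever answer has the most
      # trust-weighted support. Sources and weights are hypothetical.
      from collections import defaultdict

      def consensus(claims: list[tuple[str, float]]) -> str:
          scores: dict[str, float] = defaultdict(float)
          for answer, trust in claims:
              scores[answer] += trust
          return max(scores, key=scores.get)

      claims = [
          ("George Washington", 0.9),  # e.g. a history textbook
          ("George Washington", 0.8),  # e.g. an encyclopedia
          ("John Hanson", 0.1),        # fringe claim from a low-trust source
      ]
      print(consensus(claims))  # George Washington
      ```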

      • CileTheSane@lemmy.ca · 1 day ago

        Whether an LLM can determine truth depends on your definition of truth

        Of course someone who doesn’t believe “truth” exists thinks LLMs are just fine. You have to not believe things can be true in order to find their output acceptable.

        An LLM can derive this sort of truth by determining the consensus of its training data, assuming that data comes from trustworthy sources, or that the more trustworthy sources are more strongly reinforced.

        Every week I see a new post of an LLM being blatantly wrong. An LLM said to add glue to pizza to make the cheese stick together.

        “They have improved the models since then…” Last week the American military used “AI” and it targeted a school as a military structure. The models are full of shit; they just manually remove the blatantly incorrect shit whenever it makes the rounds, and there’s always more blatantly incorrect shit to be found.

        • Not_mikey@lemmy.dbzer0.com · 18 hours ago

          I never said I don’t believe in truth; I said there are different definitions and different kinds of truth. The study of this is called epistemology, and I’d encourage you to look into it to better understand truth. I believe in truth derived from experience and from reasoning from first principles: 2+2=4 is true, and “I had coffee this morning” is true. For things outside my direct experience, or that can’t be reasoned out, I accept that truth can be derived from trustworthy external sources. Therefore “Washington was the first president” is true, because I’ve heard it many times from multiple trustworthy sources.

          The question is whether you believe truth can be derived from external sources, or whether you’re a Cartesian skeptic. It doesn’t seem like you are, because that sort of worldview is very limiting. So the question remains: how do you know that Washington was the first president? Or even better, how do you know that an LLM said to put glue on pizza? You never experienced it giving that answer; you got the idea from another source, maybe a picture that could easily have been edited. The truth of that idea can only be derived from the trustworthiness of that source.

          LLMs can’t know everything. Again, they have good declarative knowledge, but they completely lack experiential knowledge and struggle with reasoning. Knowing not to put glue on pizza is knowledge gained from experience (glue tastes bad and is usually inedible) and reasoning (therefore adding glue to pizza will make it taste bad and be inedible).

          Every day you also probably see a new post of humans being blatantly wrong, does that mean humans can’t know things? No, it just means humans have limited areas of knowledge. Same with an LLM: it can know that Washington was the first president while not knowing not to put glue on pizza, so you have to be careful what you ask it, just like when you ask a human something outside their area of expertise.

          • CileTheSane@lemmy.ca · 17 hours ago

            they have good declarative knowledge

            No. They don’t. They are good at making declarative statements.
            That’s not the same thing.

            Every day you also probably see a new post of humans being blatantly wrong, does that mean humans can’t know things?

            I fully agree that asking a random human for help with something is just as effective as asking an LLM to help with something.

            If I need to know something (like who was the first president of the United States) I will not go outside and ask a random human; I will ask a trustworthy source.
            If I need some code written I won’t have a random human do it; I will interview people to find someone capable.
            If I need someone to interact with customers I won’t let some random human come in and do it.

            • Not_mikey@lemmy.dbzer0.com · 16 hours ago

              They are good at making declarative statements.
              That’s not the same thing.

              What’s the difference between making correct declarative statements and having declarative knowledge? If I am able to accurately state every president of the US, wouldn’t you say I have knowledge of the list of US presidents? The only way you can judge my declarative knowledge of something is by my ability to make accurate declarative statements; that’s what a test is. If making accurate declarative statements is not the measure of declarative knowledge, then what is?

              An LLM will give more accurate declarative statements on more questions than any human can; would that not mean that an LLM has more declarative knowledge than any human? So is it not more trustworthy for giving declarative statements than any random human? Would you not trust an LLM’s answer on who the 4th president was over a random human’s?

              • CileTheSane@lemmy.ca · 16 hours ago

                An LLM will give more accurate declarative statements on more questions than any human can

                Not if you include “I don’t know” as an accurate statement or penalize the score for incorrect declarative statements.
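
                A sketch of the scoring rule I mean, with made-up tallies to show how the penalty can flip the ranking:

                ```python
                # +1 for a correct answer, 0 for "I don't know", -1 for a
                # confidently wrong answer. All tallies are hypothetical.
                def score(correct: int, idk: int, wrong: int) -> int:
                    return correct - wrong  # "I don't know" scores zero

                # A human hedges when unsure; the LLM always answers.
                print(score(correct=60, idk=38, wrong=2))   # human: 58
                print(score(correct=70, idk=0,  wrong=30))  # LLM:   40
                # Raw accuracy favours the LLM (70 vs 60); the penalty flips it.
                ```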

                So is it not more trustworthy for giving declarative statements than any random human? Would you not trust an LLM’s answer on who the 4th president was over a random human’s?

                I would absolutely trust the random human more, because they’re not going to make shit up if they don’t know. They’ll say either “I don’t know” or “I would guess”, making it clear they aren’t confident. The LLM will give me a declarative answer, but I have no fucking clue if it’s accurate or a “hallucination” (lie). I’ll need to do what I should have done in the first place and ask a search engine to make sure.

                • Not_mikey@lemmy.dbzer0.com · 8 hours ago

                  I think you are underestimating how accurate LLMs are because you probably don’t use them much and only see their mistakes posted for memes. No one’s going to post the 99 times an LLM gives the correct answer, but the one time it says to put glue on pizza it’s going to go viral. So if your only view of LLM output is from posts, you’re going to think it’s way worse than it is.

                  Even if you mark it down for incorrect answers, it’s still going to beat most people. An LLM can score in the 90th percentile on the SAT and around the 80th percentile on the LSAT. If you take into account that people taking those tests are more prepared for them than the general population, it’s probably in the 99th percentile overall. Marking wrong answers negative doesn’t matter much if the LLM gets 95% of the answers right while the average person gets 50% right.

                  People guess things too, and will also state things confidently that they don’t completely know. If a person has a little bit of knowledge on a subject, they are likely to give confidently wrong answers due to the Dunning-Kruger effect. If you pick a random person, you’re probably just as likely to get one of those people as you are to catch the LLM being wrong. So which is more useful: asking something that has a 95% chance of being correct and a 5% chance of being confidently wrong, or asking a person who has a 50% chance of being correct (including lucky guesses), a 5% chance of being confidently wrong, and a 45% chance of saying “I don’t know”?
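
                  Running your own +1/0/-1 scoring from upthread over those probabilities (which are asserted, not measured):

                  ```python
                  # Expected score under +1 correct / 0 "I don't know" / -1 wrong,
                  # using the probabilities claimed above (illustrative, not data).
                  def expected_score(p_correct: float, p_wrong: float, p_idk: float = 0.0) -> float:
                      assert abs(p_correct + p_wrong + p_idk - 1.0) < 1e-9
                      return p_correct - p_wrong  # "I don't know" contributes zero

                  print(expected_score(0.95, 0.05))        # LLM:          0.90
                  print(expected_score(0.50, 0.05, 0.45))  # random human: 0.45
                  ```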

                  If you’re doubting my percentages on the accuracy of LLMs, I’d encourage you to test them yourself. See if you can stump one on declarative knowledge; it’s harder than the posts make it seem.