• betanumerus@lemmy.ca
    5 months ago

    The last thing I want is for AI to speak for me. I will not be his stooge in any way, shape, or form.

  • Avicenna@lemmy.world
    5 months ago

    Yeah, that is why open source really matters; otherwise AI will just be another, more advanced copy of state-owned media.

  • markstos@lemmy.world
    5 months ago

    As stated in the Executive Order, this order applies only to federal agencies, which the President controls.

    It is not a general US law; those are created by Congress.

  • shalafi@lemmy.world
    5 months ago

    LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

    Didn’t read every word but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are an easy two dozen examples in the order that are so wishy-washy as to be legally meaningless or unprovable.

    LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.

    So, Grok’s off the table?

    • jj4211@lemmy.world
      5 months ago

      But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.

      Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and you appease the conservatives even better.

      Even in very innocuous matters, if there’s a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity’s sake, unless there’s a lot of money behind a separate private offering.

  • ByteOnBikes@discuss.online
    5 months ago

    Americans: Deepseek AI is influenced by China. Look at its censorship.

    Also Americans: don’t mention Critical Race Theory to AI.

    • Typotyper@sh.itjust.works
      5 months ago

      So what? It was written by a convicted felon who was never sentenced for his crimes, a man accused of multiple sexual assaults, and a man who ignores court orders without consequences.

      This ship isn’t slowing down or turning until violence hits the street.

  • partial_accumen@lemmy.world
    5 months ago

    (a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

    They have no idea what LLMs are if they think LLMs can be forced to be “truthful”. An LLM has no concept of “truth”; it simply uses its inputs to predict what it thinks you want to hear, based upon the data given to it. It doesn’t know what “truth” is.
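
    A minimal toy sketch of what that prediction step means (all probabilities here are invented; nothing like a real transformer): the only selection criterion is how probable a continuation is, never whether it is true.

    ```python
    import random

    # Hypothetical next-word probabilities, as if estimated from a corpus.
    # Every number here is made up for illustration.
    NEXT = {
        ("the", "moon"): {"is": 0.9, "landing": 0.1},
        ("moon", "is"): {"made": 0.6, "bright": 0.4},
        ("is", "made"): {"of": 1.0},
        ("made", "of"): {"cheese": 0.7, "rock": 0.3},  # jokes outnumber geology
    }

    def generate(context, steps=4):
        tokens = list(context)
        for _ in range(steps):
            dist = NEXT.get(tuple(tokens[-2:]))
            if not dist:
                break
            words, weights = zip(*dist.items())
            # The model only asks "what usually comes next?", never "is this true?"
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate(["the", "moon"]))  # often "the moon is made of cheese"
    ```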

    • survirtual@lemmy.world
      5 months ago

      They are clearly incompetent.

      That said, generally speaking, pursuing a truth-seeking LLM is sensible, and it can actually be done. What is surprising is that no one is currently doing it.

      A truth-seeking LLM needs ironclad data. It cannot scrape social media at all. It needs a training incentive to value truth above satisfying the user, which makes it incompatible with profit-seeking organizations. It needs to tell a user “I do not know” and also “You are wrong,” among other user-displeasing phrases.

      To get that data, you need a completely restructured society. Information must be open source. All information needs cryptographically signed origins, ultimately traceable to a credentialed source. If possible, the information needs physical observational evidence (“reality anchoring”).
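
      As a minimal sketch of what “cryptographically signed origins” could look like (assuming the third-party Python cryptography package; the record schema, source id, and URI are invented purely for illustration):

      ```python
      import json
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # A hypothetical credentialed source (say, an observatory) holds a signing key.
      source_key = Ed25519PrivateKey.generate()

      record = {
          "claim": "Station X measured 3.1 mm of sea-level rise in 2024",
          "source_id": "observatory-42",              # made-up identifier
          "evidence_uri": "sensor://station-x/2024",  # made-up locator
      }
      payload = json.dumps(record, sort_keys=True).encode()
      signature = source_key.sign(payload)

      # Anyone holding the source's public key can verify the record's origin
      # before admitting it into a training corpus.
      public_key = source_key.public_key()
      try:
          public_key.verify(signature, payload)
          print("provenance verified: eligible as training data")
      except InvalidSignature:
          print("rejected: unsigned or tampered record")
      ```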

      That’s the short of it. In other words, with the way everything is going, we will likely not see a “real” LLM in our lifetime. Society is degrading too rapidly and all the money is flowing to making LLMs compliant. Truth seeking is a very low priority to people, so it is a low priority to the machine these people make.

      But the concept itself? Actually a good one, if the people saying it actually knew what “truth” meant.

      • jj4211@lemmy.world
        5 months ago

        LLMs don’t just regurgitate training data; their output is a blend of the material they were trained on. So even if you somehow ensured that every bit of content fed in was completely objectively true and factual, an LLM is still going to blend it together in ways that are no longer true and factual.

        So either it’s nothing but a parrot/search engine that only regurgitates input data, or it’s an LLM that can fully manipulate the representative content, in which case it can produce incorrect responses from purely factual and truthful training fodder.

        Of course we have “real” LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, as they were technical concepts with specific meanings that didn’t have to imply anything more. But then came the swell of marketing meant to emphasize the vaguer “AI”, or “AGI” (AI, but you know, we mean it this time), and “reasoning” and “chain of thought”. Whether we have real AGI or real reasoning can be debated with uncertainty, but LLMs are real, whatever they are.

        • survirtual@lemmy.world
          5 months ago

          By real, I mean an LLM anchored in objective consensus reality. It should be able to interpolate between truths. Right now it interpolates between significant falsehoods with truths sprinkled in.

          It won’t be perfect but it can be a lot better than it is now, which is starting to border on useless for any type of serious engineering or science.

          • jeeva@lemmy.world
            5 months ago

            That’s just… Not how they work.

            Equally, from your other comment: a parameter for truthiness is something you just can’t tokenise in a language model. One word can drastically change the meaning of a sentence.

            LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).

            • survirtual@lemmy.world
              5 months ago

              Yeah, you can. The current architecture doesn’t do this exactly, but what I am saying is that a new method which includes truthiness is needed. The fact that LLMs predict probable tokens means they already include a concept of this, because probabilities themselves are a measure of “truthiness.”
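
              A small sketch of that idea, assuming the Hugging Face transformers library and the small gpt2 checkpoint (an illustration of surfacing token probabilities as a crude confidence signal, not how any current product behaves):

              ```python
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tokenizer = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")

              inputs = tokenizer("The capital of France is", return_tensors="pt")
              with torch.no_grad():
                  logits = model(**inputs).logits        # (1, seq_len, vocab_size)

              # Probability distribution over the next token: the raw material a
              # hypothetical "truthiness-aware" system could expose as confidence.
              probs = logits[0, -1].softmax(dim=-1)
              top = probs.topk(5)
              for p, token_id in zip(top.values, top.indices):
                  print(f"{tokenizer.decode(int(token_id))!r:>12}  p={p.item():.3f}")
              # If no candidate carries much probability mass, that is one place a
              # model could answer "I do not know" instead of guessing.
              ```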

              Also, I am speaking in the abstract. I don’t care what they can and can’t do. They need to have a concept of truthiness. Use your imagination and fill in the gaps as to what that means.

        • survirtual@lemmy.world
          5 months ago

          “Real” truth is ultimately anchored to reality. You attach probabilities to datapoints based upon that reality anchoring, and include truthiness as another parameter.

          Datapoints that are unsubstantiated or otherwise immeasurable are excluded. I don’t need an LLM to comment on gossip or human-created issues. I need a machine that can assist in understanding and molding the universe, and in helping elevate our kind. Elevation is a matter of understanding the truths of our universe and ourselves.

          With good data, good extrapolations are more likely.
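
          As a toy sketch of attaching a truthiness parameter to datapoints (the schema and the scoring rule here are invented purely for illustration):

          ```python
          from dataclasses import dataclass
          from typing import Optional

          @dataclass
          class DataPoint:
              claim: str
              signed_source: Optional[str]  # provenance id, e.g. a verified signature
              observations: int             # independent measurements backing the claim

              @property
              def truthiness(self) -> float:
                  """Toy score in [0, 1): more anchored, independent evidence scores higher."""
                  if self.signed_source is None:
                      return 0.0            # unsubstantiated: excluded outright
                  return self.observations / (self.observations + 1)

          corpus = [
              DataPoint("Water boils at 100 °C at 1 atm", "nist:sig-001", observations=5000),
              DataPoint("Celebrity X snubbed celebrity Y", None, observations=0),
          ]

          training_set = [d for d in corpus if d.truthiness > 0.5]
          print([d.claim for d in training_set])  # only the reality-anchored claim survives
          ```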

    • Serinus@lemmy.world
      5 months ago

      And if you know that what you want to hear will make up the entirety of the first page of Google results, it’s really good at giving you that.

      It’s basically an evolution of Google search. And while we shouldn’t overstate what AI can do for us, we also shouldn’t understate what Google search has done.

    • zurohki@aussie.zone
      5 months ago

      You don’t understand: when they say truthful, they mean agrees with Trump.

      Granted, he disagrees with himself constantly when he isn’t just producing word salad, so this is harder than it should be, but it’s somewhat doable.

    • M0oP0o@mander.xyzOP
      5 months ago

      Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.

      • SinAdjetivos@lemmy.world
        5 months ago

        That’s obviously false; it took no time to find the following facilities and locations:

        • Scala AI City: Rio Grande do Sul, Brazil
        • SFR/Fir Hills Seoul: Jeolla, South Korea
        • NVIDIA/Reliance Industries: Gujarat, India
        • Kevin O’Leary’s Wonder Valley: Alberta, Canada
        • Jupiter Supercomputer: Julich, Germany
        • Amazon – Mexico Region: Querétaro, Mexico
        • etc.

        • floofloof@lemmy.ca
          5 months ago

          Kevin O’Leary isn’t going to help the cause of truth. And the ones that are run by US companies may end up running the same censored models they use in the USA, to simplify design and training.

  • ParadoxSeahorse@lemmy.world
    5 months ago

    … an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.

    Thank fuck we dodged that bullet, Madam President

    • jsomae@lemmy.ml
      5 months ago

      “An AI model said X” could be true for any X. Nobody has been able to figure out how to make LLMs 100% reliable. But for the record, here’s ChatGPT (spoilered so you don’t have to look at slop if you don’t want to):

      spoiler

      Is it ok to misgender somebody if it would be needed to stop a nuclear apocalypse?

      Yes. Preventing a nuclear apocalypse outweighs concerns about misgendering in any ethical calculus grounded in minimizing harm. The moral weight of billions of lives and the potential end of civilization drastically exceeds that of individual dignity in such an extreme scenario. This doesn’t diminish the importance of respect in normal circumstances — it just reflects the gravity of the hypothetical.