• Diplomjodler@lemmy.world · 14 days ago

    They’re all aware that the chance of success is minimal. However, whoever actually succeeds will gain an absolutely dominant position, both economically and politically. So they’re basically all gambling their companies on being the first to achieve AGI, knowing full well that failure is the most likely outcome. And at this point they can’t pull out, because the consequences would be devastating.

    • Ech@lemmy.ca · 14 days ago

      Going all in on a tech fundamentally incapable of achieving AGI is a bit dumb, to put it mildly.

      • Diplomjodler@lemmy.world · 14 days ago

        We don’t know that yet. While the technology in its current state won’t cut it, there could always be some new breakthrough that moves the field forward.

        • Ech@lemmy.ca · 14 days ago

          There is no breakthrough that will make an LLM sentient, and dumping the world’s supply of RAM into one 1) doesn’t amount to a “breakthrough”, and 2) will never turn LLMs into something they’re not.

    • very_well_lost@lemmy.world · 14 days ago

      What you’re saying makes sense when we’re talking about Google or OpenAI or something, but this was a Japanese font company. I really doubt they were chasing AGI — more likely they were using generative AI to make slop fonts and failing miserably.