• inclementimmigrant@lemmy.world · 2 days ago

    This is absolutely 3dfx-level screwing over of consumers, and it's all about just faking frames to get their “performance” numbers.

    • Breve@pawb.social · 2 days ago

      They aren’t making graphics cards anymore; they’re making AI processors that happen to do graphics using AI.

        • Breve@pawb.social · 2 days ago

          Oh yeah, for sure. I’ve run Llama 3.2 on my RTX 4080, and it struggles, but it’s not obnoxiously slow. I think they are betting more software will ship with integrated LLMs that run locally on users’ PCs instead of relying on cloud compute.

      • daddy32@lemmy.world · 2 days ago

        Except you cannot use them for AI commercially, or at least not in a data-center setting.

        • Breve@pawb.social · 2 days ago

          Data centres want the even beefier cards anyhow, but I think Nvidia envisions everyone running local LLMs on their PCs, because LLMs will be integrated into software instead of relying on cloud compute. My RTX 4080 can struggle through Llama 3.2.