• Ilixtze@lemmy.ml · 11 hours ago

    As if decentralized models aren’t made by the same corporations and don’t follow the same logic. It’s the typical “disruption” model: they give you a piece of the vending machine for free, and once you are completely de-skilled they will regulate the open-source models and charge 500% more to use their product.

    • TheLeadenSea@sh.itjust.works · 11 hours ago

      I don’t fully understand what you mean. How are they going to make me pay to use models that are already downloaded onto my computer and run using open source code?

      • orca@orcas.enjoying.yachts · 10 hours ago

        Are you training those models? Because that requires hardware that probably costs $10-12k minimum. Every single LLM self-hosting story I’ve heard involves a tiny purpose-driven model that can do only one or two things. If you want Claude or ChatGPT levels of knowledge, you’re running hardware that costs $10k+ minimum. I’ve been in tech 20 years and I don’t own a single piece of hardware that will run any model well. Even my M3 MBA absolutely chokes on Qwen, for example.
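A back-of-the-envelope check supports the hardware point: the memory needed just to hold a model’s weights scales with parameter count and precision. A rough sketch (the overhead factor for KV cache and activations is an assumption, not a measured figure):

```python
def vram_gb(n_params_b: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold a model's weights for inference.

    n_params_b: parameter count in billions (e.g. 70 for a 70B model).
    bits_per_weight: precision (16 for fp16, 4 for 4-bit quantization).
    overhead: fudge factor for KV cache and activations (assumption).
    """
    bytes_total = n_params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 70B model at fp16 needs on the order of 168 GB -- multi-GPU territory.
print(round(vram_gb(70, 16)))   # 168
# The same model 4-bit quantized: ~42 GB, still beyond most consumer cards.
print(round(vram_gb(70, 4)))    # 42
# A small 7B model at 4-bit fits in ~4 GB, hence the "tiny purpose-driven" pattern.
print(round(vram_gb(7, 4), 1))  # 4.2
```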

        If you want pre-trained models like Llama, Mistral, or Gemma, you’re circling back to corporate lock-in from Meta, former Meta employees, or Google. Suddenly it’s not open source anymore.

      • Ilixtze@lemmy.ml · 11 hours ago

        The same way they are trying to regulate and limit the 3D printers we already have in our homes. The same way they buy software, kill it, and then discontinue it for future hardware. But I feel the death of open-source models will be perpetrated by laws, gradual operating-system surveillance, and denying customers the hardware. There is already manufacture of consent toward that goal: on one side, Chinese open models framed as a “security risk”; on the other, the generation of illegal material by uncensored open-source models. And then, what good is open source when there is no ecosystem to sustain it?

        But to me that doesn’t matter at all, because anything made by gen AI is useless trash anyway.

        • orca@orcas.enjoying.yachts · 10 hours ago

          Also, training models requires expensive hardware. So now you have a bunch of folks who can run LLMs on their systems, but over time those models need to keep learning. The hardware requirements for training are far more intense; it’s cost-prohibitive across the board. That’s when you have people waiting for some angel with the hardware to get the models up to speed, and that’s often some corporate entity (the one that shaved off a smidge for open source and is now putting the leash around your neck). Zero of this is sustainable. AI never gets cheaper.

          • Riskable@programming.dev · 1 hour ago

            training models requires expensive hardware

            Right now. In ten to twenty years this won’t be the case. Also consider the diminishing returns from adding more hardware to the problem of training AI: despite AI companies monopolizing the entire world’s supply of DRAM, models are only gaining marginal improvements.

            The curve of hardware expense against model capabilities is moving toward an intersection unless something drastic changes. Big AI needs a huge breakthrough to stay ahead of that curve. I don’t see that happening, because all the big breakthroughs happening now concern efficiency, which makes things worse for them by making training cheaper and faster.
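The diminishing-returns point can be illustrated with a toy power-law scaling curve (the constants here are invented; only the shape matters): if loss falls as a power of compute, each doubling of hardware buys a smaller absolute gain.

```python
# Hypothetical power-law scaling: L(C) = a * C^(-alpha).
# The constants a and alpha are made up for illustration.
def loss(compute, a=10.0, alpha=0.3):
    return a * compute ** -alpha

prev = loss(1)
for doubling in range(1, 5):
    cur = loss(2 ** doubling)
    # Each doubling of compute buys a smaller absolute improvement.
    print(f"doubling {doubling}: loss {cur:.3f}, gain {prev - cur:.3f}")
    prev = cur
```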

          • Hazel@piefed.blahaj.zone · 10 hours ago

            Most people have PCs in their homes, 99% of whose computational power goes unused. Is there a reason training models couldn’t be done p2p?
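The p2p idea can be sketched as decentralized gradient averaging, the mechanism behind federated-style training (projects such as Hivemind work in this spirit). This toy example uses a made-up one-parameter linear model, not a real protocol:

```python
# Toy sketch of p2p training: each peer computes a gradient on its own
# data, then all peers average gradients (an all-reduce) before stepping.
import random

def local_gradient(w, data):
    """One peer's gradient for the squared loss (w*x - y)^2 on its shard."""
    g = 0.0
    for x, y in data:
        g += 2 * (w * x - y) * x
    return g / len(data)

def p2p_step(w, peers, lr=0.1):
    """One round: every peer computes locally, then all average and step."""
    grads = [local_gradient(w, data) for data in peers]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Three "peers", each holding a shard of data generated from y = 3x.
random.seed(0)
peers = [[(x, 3 * x) for x in (random.random() for _ in range(20))]
         for _ in range(3)]
w = 0.0
for _ in range(2000):
    w = p2p_step(w, peers)
print(round(w, 2))  # converges to the true slope, 3.0
```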

            • orca@orcas.enjoying.yachts · 10 hours ago

              It’s an interesting concept, and there are tools that achieve it on a local network, but it’s not Folding@home. It would be a massive security hellscape IMO, and it’s not worth the insane power requirements for what folks are doing with it. You’d be hard-pressed to find people willing to lend power, bandwidth, and hardware to that, especially when it doesn’t have a meaningful, focused purpose.
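The bandwidth objection can be made concrete: naive gradient exchange scales with model size. Rough numbers, assuming uncompressed fp16 gradients and a generous home uplink (real systems compress, but the order of magnitude is the point):

```python
def sync_traffic_gb_per_step(n_params_b: float, bytes_per_grad: int = 2) -> float:
    """Data each peer must ship per optimizer step to exchange full gradients.

    Assumes a naive all-reduce of fp16 gradients (2 bytes each).
    """
    return n_params_b * 1e9 * bytes_per_grad / 1e9

# A 7B model: ~14 GB of gradient traffic per peer, per step.
print(sync_traffic_gb_per_step(7))  # 14.0
# At a generous 100 MB/s home uplink, that is over two minutes per step,
# versus microseconds over NVLink inside a datacenter.
print(round(sync_traffic_gb_per_step(7) * 1e9 / 100e6))  # 140 (seconds)
```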

          • Ilixtze@lemmy.ml · 10 hours ago

            Also, LLMs are built on diminishing returns and constant hype cycles; this is part of the business model. Even in the open-source LLM scene, folks are constantly looking for the next release, and I often hear them get tired of an LLM repeating the same patterns over and over again. And then there is the public, who are starting to recognize the speech patterns of a certain model, or when something is made with an LLM. It is already affecting the credibility of some magazines and individual journalists.

            Open-source image generation depends on buckets of LoRAs and refinement: tons of training for the vending machine. So the idea of having your favorite ol’ isolated model on your PC for years of personal use sounds incompatible with the logic of this technology.