• CanadaPlus@lemmy.sdf.org
    25 days ago

    Biological neurons are actually more digital than artificial neural nets are. They either fire at full intensity or don’t fire at all (that much, at least, is well understood). Meanwhile, a node in your LLM has an approximately continuous range of activations.
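    The all-or-nothing vs. continuous contrast can be sketched in a few lines (illustrative Python; the function names are hypothetical, not from any real library):

    ```python
    import math

    def biological_neuron(stimulus, threshold=1.0):
        # All-or-nothing: the neuron fires at full intensity or not at all.
        return 1.0 if stimulus >= threshold else 0.0

    def ann_node(weighted_input):
        # Artificial node: a continuous range of activations (sigmoid here).
        return 1.0 / (1.0 + math.exp(-weighted_input))
    ```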

    They’re just tracking weighted averages about what word comes next.

    That’s leaving out most of the actual complexity. There are gigabytes or terabytes of mysterious numbers playing off of each other to decide the probabilities of each word in an LLM, and it’s looking at quite a bit of previous context. A human author also has to repeatedly decide which word to type next, so that framing doesn’t really preclude much.

    If you just go word-by-word or few-words-by-few-words straightforwardly, that’s called a Markov chain, and Markov chains rarely get basic grammar right.
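    A minimal word-by-word Markov chain, just to show how little machinery is involved compared to an LLM (illustrative Python; the helper names are made up for this sketch):

    ```python
    import random
    from collections import defaultdict

    def build_chain(text):
        # Record which words were observed to follow each word.
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=10, seed=0):
        # Walk the chain, picking a random observed follower each step.
        rng = random.Random(seed)
        out = [start]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(rng.choice(followers))
        return " ".join(out)
    ```

    The only "knowledge" here is the follower table, which is why the output drifts into ungrammatical word salad almost immediately.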

    Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.

    Sure, we agree on that. Where we maybe disagree is on whether humans experience the same kind of tradeoff. And then we got a bit into unrelated philosophy of mind.
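    The stochasticity knob being discussed is essentially temperature sampling; a rough sketch of how it trades consistency for variety (illustrative Python, standard softmax-with-temperature, names made up):

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature, seed=None):
        # Scale the logits: low temperature sharpens the distribution toward
        # the most likely token; high temperature flattens it toward noise.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one index from the resulting distribution.
        rng = random.Random(seed)
        r = rng.random()
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if r <= cumulative:
                return i
        return len(probs) - 1
    ```

    Crank the temperature up for "originality" and you get the infinite sea of garbage; turn it down and the output collapses onto the most probable continuation.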

    and you can literally program an LLM inside a fax machine if you wanted to.

    Absolutely, although it’d have to be more of an SLM to fit. You don’t think the exact hardware used is important though, do you? Our own brains don’t exactly look like much.

    • yeahiknow3@lemmy.dbzer0.com
      11 days ago

      Biological neurons are actually more digital than artificial neural nets are.

      There are three types of computers.

      1. Digital
      2. Analog
      3. Quantum

      Digital means reducible to a Turing machine. Analog, which includes things like flowers and cats, means irreducible by definition. (Otherwise, they would be digital.)

      Brains are analog computers (maybe with some quantum components we don’t understand).

      Making a mathematical model of an analog computer is like taking a digital picture of a flower. That picture is not the same as the flower. It won’t work the same way. It will not produce nectar, for instance, or perform photosynthesis.

      Everything about how a neuron works is completely undigitizable. There’s integration at the axon hillock; there are gooey vesicles full of neurotransmitters whose expression is chemically mediated, dumped into a synaptic cleft of constantly varying width and subject to Brownian motion, to activate receptors whose binding affinity isn’t even consistent. The best we can do is build mathematical models that sort of predict what happens next on average.

      These crude neural maps are not themselves engaged in brain activity — the map is not the territory.

      Idk where you got the idea that neurons can be digitized, but someone lied to you.