Just as accurate as the leading brand!

  • maria [she/her]@lemmy.blahaj.zone · 14 days ago

woag - hmmmm but that looks like a random probability distribution over just a few tokens… hmm… best case, we have it walk thru an octree of all the tokens of our own tokenizer and pick the correct ones progressively.
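
    like - a rough python sketch of the walk i mean (the die faces, vocab size, n roll_die() r all made-up stand-ins, not the actual ball):

    ```python
    import math
    import random

    # assumptions: a 6-sided die and a made-up 30,000-token vocabulary
    FACES = 6
    VOCAB = [f"token_{i}" for i in range(30_000)]

    def roll_die() -> int:
        # stand-in for reading the physical die; returns a face 0..5
        return random.randrange(FACES)

    def pick_token() -> str:
        # walk the implicit tree: each roll narrows the vocab range
        # by a factor of 6 until exactly one token is left
        lo, hi = 0, len(VOCAB)
        while hi - lo > 1:
            step = math.ceil((hi - lo) / FACES)
            lo = min(lo + roll_die() * step, hi - 1)
            hi = min(lo + step, hi)
        return VOCAB[lo]

    print(pick_token())  # one progressively chosen token
    ```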

orrrr… i dunno… ummmm… hmm…

see - we cant really make… agents happen with this, u see? if we wanted random samples, wed use some pseudo-random number generator and seed it with the current time… LMs r time-independent and generate deterministic outcomes from a given text prompt and seed - but your model doesnt have that… i guess ur zero-electricity point is quite interesting tho… hmm-
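
    like, a toy sketch of what "deterministic given prompt n seed" means (fake_lm is obviously made up, the seed behavior is the point):

    ```python
    import random
    import time

    def fake_lm(prompt: str, seed: int) -> str:
        # toy stand-in for an LM sampler: same prompt + same seed
        # -> exactly the same "generated" text, every run
        rng = random.Random(seed)
        words = ["ball", "die", "token", "hmm", "woag"]
        return prompt + " " + " ".join(rng.choice(words) for _ in range(5))

    # time-independent: identical output no matter when u run it
    assert fake_lm("hello", seed=42) == fake_lm("hello", seed=42)

    # the "hook it up to the current time" version changes every run
    print(fake_lm("hello", seed=int(time.time())))
    ```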

we couuuuuld have it hooked up to a visual interpreter, which sees the ball's outcome and then picks from a set of predefined sentence blocks, to then create new sentences based off of that. now we r at the problem from earlier tho: we only got so many output states with the die inside.
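
    smth like this, sketch-wise (camera part hand-waved, the sentence blocks r invented - note theres only 6 of them):

    ```python
    # hypothetical canned sentence blocks, one per die face
    SENTENCE_BLOCKS = {
        1: "it is certain.",
        2: "ask again later.",
        3: "very doubtful.",
        4: "signs point to yes.",
        5: "cannot predict now.",
        6: "my sources say no.",
    }

    def interpret(die_face: int) -> str:
        # die_face would come from the visual interpreter watching the ball;
        # the earlier problem in one line: only 6 possible output states
        return SENTENCE_BLOCKS[die_face]

    print(interpret(4))  # "signs point to yes."
    ```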

so we could - theoretically - have this model navigate a given environment using code output patterns instead of sentences - so… maybe this could be interesting…
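
    maybe smth like this (the grid world n the face-to-command mapping r pure invention):

    ```python
    import random

    # hypothetical mapping: each die face -> a movement command, not text
    COMMANDS = {1: (0, 1), 2: (0, -1), 3: (1, 0), 4: (-1, 0), 5: (0, 0), 6: (0, 0)}

    def step(pos: tuple[int, int], die_face: int) -> tuple[int, int]:
        dx, dy = COMMANDS[die_face]
        return (pos[0] + dx, pos[1] + dy)

    # a zero-electricity random-walk "agent": the ball drives every move
    pos = (0, 0)
    for _ in range(10):
        pos = step(pos, random.randint(1, 6))  # stand-in for reading the die
    print(pos)
    ```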

      • maria [she/her]@lemmy.blahaj.zone · 3 days ago

yea - an octree - a tree structure where u walk forward and find “more detail” along the path. LMs essentially do the same, but each branching point is a token and has about 30,000 different possibilities for branches, one per token (thats the models “token vocabulary”) - tho strictly an octree only has 8 branches per node… so yea-… “octrees”… sure exist.
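
        quick math on that btw (the d6 n the ~30k vocab r just the assumptions from upthread):

        ```python
        import math

        VOCAB_SIZE = 30_000  # ballpark token vocabulary from above
        FACES = 6            # assuming a plain d6 inside the ball

        # rolls needed per token so the tree walk can reach every token
        rolls_per_token = math.ceil(math.log(VOCAB_SIZE, FACES))
        print(rolls_per_token)  # 6, since 6**6 == 46656 >= 30000
        ```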

        anyway - have a nice day ~ ~ ~<3