Sjmarf@sh.itjust.works to Lemmy Shitpost@lemmy.world · 2 months ago
How to clean a rescued pigeon
Rivalarrival@lemmy.today · 2 months ago
You say this like human “figuring” isn’t some “autocomplete bullshit”.
HighlyRegardedArtist@lemmy.world · 2 months ago
You can play with words all you like, but that’s not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.
Rivalarrival@lemmy.today · edited · 2 months ago
My point wasn’t that LLMs are capable of reasoning. My point was that the human capacity for reasoning is grossly overrated.

The core of human reasoning is simple pattern matching: regurgitating what we have previously observed. That’s what LLMs do well.

LLMs are basically at the toddler stage of development, but with an extraordinary vocabulary.
Here we go…