irelephant [he/him]@programming.dev to iiiiiiitttttttttttt@programming.dev · English · 5 months ago
We put the Thing That Can't Do Numbers™ in your spreadsheets (programming.dev)
cross-posted to: fuck_ai@lemmy.world
The Ramen Dutchman@ttrpg.network · edited · 5 months ago
"It's less accurate and doesn't risk hallucinating"
I might be mistaken, but don't these two lines mean the exact opposite in this context? Is AI more often right, or more often wrong?
T156@lemmy.world · 5 months ago
Both, because the ways it's right and wrong are different. Sentiment analysis might misclassify some of the data, but it doesn't risk making things up wholesale like an LLM would.
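The distinction T156 is drawing can be sketched in code: a classical sentiment classifier's output is constrained to a fixed label set, so the worst it can do is pick the wrong label, whereas an LLM generates free text and can fabricate content outright. Here is a minimal lexicon-based sketch (the word lists are made up for illustration, not taken from any real sentiment library):

```python
# Minimal lexicon-based sentiment classifier. The word lists below are
# hypothetical examples, not a real library's lexicon. The key property:
# output is restricted to {"positive", "negative", "neutral"}, so the
# classifier can be wrong, but it cannot invent new content.

POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def classify(text: str) -> str:
    words = text.lower().split()
    # Score = count of positive words minus count of negative words.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("I love this great spreadsheet"))  # positive
print(classify("this is a terrible idea"))        # negative
```

A sentence full of sarcasm would be misclassified by a scorer this naive, but the failure mode stays bounded: one of three labels. An LLM asked the same question returns arbitrary text, which is where hallucination enters.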