sabreW4K3@lazysoci.al to Microblog Memes@lemmy.world · English · 7 months ago
Save The Planet (lazysoci.al, image)
cross-posted to: lemmydirectory@lemmy.dbzer0.com, fuck_ai@lemmy.world
Jakeroxs@sh.itjust.works · 7 months ago
My assertion would be that they want the majority of the new chips for training models, not for running the existing ones. Two different use cases.
FooBarrington@lemmy.world · 7 months ago
Sure, and that's why many cloud providers - even ones that don't train their own models - are only slowly onboarding new customers onto bigger models. Sure. Makes total sense.
Jakeroxs@sh.itjust.works · 7 months ago
I mean, do you actually know, or are you just assuming?
I know.