themachinestops@lemmy.dbzer0.com to Technology@lemmy.world · English · edited, 19 days ago
Dell admits consumers don’t care about AI PCs. Dell is now shifting its focus this year away from being ‘all about the AI PC.’
www.theverge.com
cross-posted to: technology@lemmy.zip
RobotToaster@mander.xyz · English · 19 days ago
Do NPUs/TPUs even work with ComfyUI? That’s the only “AI PC” I’m interested in.
Fermiverse@gehirneimer.de · 19 days ago
https://github.com/patientx/ComfyUI-Zluda
Works with the 395+ (AMD Ryzen AI Max+ 395).
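For anyone wanting to try it, a rough sketch of the setup; the script names follow the repo's Windows batch layout and are an assumption here, so check the README before running:

```
git clone https://github.com/patientx/ComfyUI-Zluda
cd ComfyUI-Zluda
:: one-time setup: installs Python dependencies and the ZLUDA shim
install.bat
:: launches ComfyUI with CUDA calls translated to the AMD GPU via ZLUDA
comfyui.bat
```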
L_Acacia@lemmy.ml · English · 18 days ago
Support for custom nodes is poor, and NPUs are fairly slow compared to GPUs (expect 5x to 10x longer generation times than 30xx-series or newer GPUs in best-case scenarios). NPUs are good at running small models efficiently, not large LLM/image models.
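Worth confirming generation is hitting the GPU at all before comparing speeds, since a silent CPU fallback is slower still. A minimal check, using standard PyTorch calls from the ComfyUI-Zluda environment linked above:

```
:: quick sanity check: does torch see a "CUDA" device through ZLUDA?
python -c "import torch; print(torch.cuda.is_available())"
:: True means ZLUDA is mapping CUDA calls to the AMD GPU; False means CPU fallback
```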
SuspciousCarrot78@lemmy.world · English · edited, 18 days ago
NPUs yes, TPUs no (or not yet). Rumour has it that Hailo is meant to be releasing a plug-in NPU “soon” that accelerates LLMs.