What LLMs revealed is how many people in our industry don’t like to code.
It’s intriguing that they now claim and showcase what they “built with Claude”, when usually that means they generated a PoC.
It’s funny how people still focus on how they’re building, so it’s all about the code. And if that’s the message sent outward, paired with the claim that LLMs are already better than the “average coder Joe”, then the logical follow-up question is: why do we need those humans in the loop at all?


For what it’s worth, when they eventually raise the prices, you’d be the one not losing money on every line you write.
Except you can already download and run models on your local machine for free with ollama. A price hike might at least calm the AI craze among the normies, though. Probably not among developers who know how to run LLMs locally.
Output quality differs a lot for local models, but I also think local should be the way forward.
*scratches neck* …y’all got any more of them tokens?