Yeah. GPT models are in a good place for coding tbh, I use them every day to support my usual practice, and they definitely speed things up. They're particularly good for things like identifying niche Python packages & providing example use cases so I don't have to learn shitloads of syntax that I'll never use again.
In other words, it’s the new version of copying code from Stack Overflow without going to the trouble of properly understanding what it does.
I know how to write a tree traversal, but I don't need to, because there's a Python module that does it. This was already the case before LLMs. Honestly, I hardly ever need to do a tree traversal now, and I don't particularly want to go to the trouble of learning how this particular module needs me to format the input for the one time this year I've needed one. I'd rather just have something made for me so I can move on to my primary focus, which is not tree traversals. It's not about avoiding understanding, it's about avoiding unnecessary extra work. And I'm not talking about saving the years of work it takes to learn how to code, I'm talking about the 30 minutes it would take me to learn a module I might never use again. If I do use it again, or if there's a problem, I'll do it properly the second time, but why do it now when there's a tool that can do it for me with minimum fuss?
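To give a rough idea of what I mean (just a sketch, using networkx as one example of such a package, and the exact API details are precisely the sort of thing I'd rather not memorise):

    # sketch: let a library do the traversal instead of hand-rolling it
    import networkx as nx

    tree = nx.DiGraph()
    tree.add_edges_from([("root", "a"), ("root", "b"), ("a", "c"), ("a", "d")])

    # one call instead of a hand-written recursive traversal
    print(list(nx.dfs_preorder_nodes(tree, source="root")))
    # e.g. ['root', 'a', 'c', 'd', 'b']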
The usefulness of Stack Overflow or a GPT model completely depends on who is using it and how.
It also depends on who or what is answering the question, and I can’t tell you how many times someone new to SO has been scolded or castigated for needing/wanting help understanding something another user thinks is simple. For all of the faults of GPT models, at least they aren’t outright abusive to novices trying to learn something new for themselves.
I fully expect an LLM trained on Stack Overflow is quite capable of being just as much of an asshole as a Stack Overflow user.
Joking aside, whilst I can see that "not going to the trouble of understanding the code you got" is mostly agnostic as to whether the source is Stack Overflow or an LLM (Stack Overflow naturally has more context around a solution, including other possible solutions, whilst an LLM can be interrogated further for more detail), I think only time will tell whether heavy LLM use ultimately makes for less well-informed programmers than heavy Stack Overflow use.
What I do think is more certain is that figuring out a solution yourself is a much better way to learn that stuff than getting it from an LLM or Stack Overflow. I can understand that often the time isn't available for that slower method, and it's an investment that only pays off if you face similar problems in the future, so sometimes it's simply not worth it.
The broader point I made still stands: there is a class of programmers who are copy & paste coders (no idea if the poster I originally replied to is one or not) for whom an LLM is just a faster-to-query Stack Overflow.
There will always be a class of programmers/people who choose not to interrogate or seek to understand the information conveyed to them - that doesn't negate the value provided by tools like Stack Overflow or ChatGPT, and I think OP was expressing that value.
Pft, you must have read that wrong, it's clearly turning them into master programmers one query at a time.