Ohhh. I think we’re both defending different hills! I’m not against the use of generative AI for purposeful creation. What I’m against is the delegation of critical thinking.
It’s the difference between:
“Implement this specific feature this specific way. Never disable type checking or relax type strictness; never solve a problem by trial and error; consult documentation first; don’t make assumptions, and stop and ask for guidance if you’re unsure about anything”
“Paint me a photorealistic depiction of a galaxy spinning around the wick of a candle”
(That last one is admittedly my own guilty contribution to the slop soup, and my favourite desktop background for at least a whole year)
Versus:
“build me an e-shop”
“draw me a cat”.
The difference is oversight and vision. The first two are asking AI to execute well-defined tasks with explicit parameters and rules; the first example in particular offers the LLM an out if it finds itself at an impasse.
The latter examples are asking a prediction engine to predict a vague concept. Don’t expect originality or innovation from something that was forcibly constrained to pick from a soup of prior art and then locked down, because that’s essentially what gradient descent does to a neural network during training: it reduces the error margin by restricting the possible solutions for any given problem to what the training set contains, which is also known as plagiarism.
Edit: a slight elaboration on the last part:
Neural networks trained with gradient descent will do the absolute minimum to reach a solution. That’s the nature of the training process.
What this essentially means is that effort scales with prompt complexity! A simple, basic prompt gets you the most generic result possible, because it allows the network to slide along the shortest path from the input tokens to a very predictable result.
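To make the "absolute minimum to reach a solution" point concrete, here is a toy sketch of gradient descent: it only ever steps toward lower error on the data it is shown, and settles on the lowest-error fit to that data. This is plain Python with made-up numbers, an illustration of the training mechanic being described, not a claim about how any production model is trained.

```python
def train(data, steps=200, lr=0.05):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # dL/dw for L = mean((w*x - y)^2): each step moves w downhill,
        # i.e. toward whatever minimises error on *this* data and nothing else.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Toy training set that happens to follow y = 2x exactly.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # converges to 2.0, the minimum-error fit to this set
```

The optimiser never "invents" anything outside the data: give it different points and it slides to a different minimum, which is the sense in which the output is bounded by the training set.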
Right, because you don’t believe art has value or requires thinking. It’s essentially worthless to you.
What people with a critical eye for art can tell you is that art also has well-defined tasks with explicit parameters and rules, and generative AI produces uncreative slop that looks amateurish if you have an eye for art. It’s pretty typical for people to believe generative AI is better at tasks that they, personally, know nothing about - it’s a Dunning-Kruger machine.
Did you actually read anything I wrote or just skim every other line to confirm your biases and what you wanted to see?
Did you assume that coding is my only calling or something? I rap, write fiction, poetry, and I’m into philosophy. Wtf are you on about?
I wouldn’t be pissed about it if it meant nothing to me. Did you read what I actually wrote?
The first example I gave for an art prompt had an actual artistic premise: a galaxy spinning around the wick of a candle.
The thoughtless one I gave in contrast below that was “draw me a cat”
The only difference between the first two lines (in my original post) is the explicit behaviour constraint in the code assistant’s case, which you wouldn’t want for a freeform creative prompt, and the fact that the image generator can’t stop to ask for advice, afaik.
Sorry, but Wtf?
So what did I do wrong here?
Edit: clarity