I’ve recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the models themselves, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on learned probabilities.
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect would be private. I understand hate for purposely over-fitting the model on data to reproduce people’s “likeness.” I understand the hate for AI generated shit (because it is shit). I really don’t understand where all this hate for using public data for building a “statistical” model to “learn” general patterns is coming from.
I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.
This is not an opinion. You have made a statement of fact. And you are wrong.
At law, something being publicly available does not mean it may be used for any purpose. Copyright law still applies: in most countries, making a work publicly available does not disclaim the copyrights on it. You are still not permitted to, for example, repost it elsewhere without the copyright holder’s permission, and derivative works likewise require that permission. Some courts have gone further and ruled that everything an AI generates could be a derivative work of everything in its training data, and therefore copyright infringement.
Saying that statistical analysis is a derivative work is a massive stretch. Generative AI is just a way of representing statistical data, and not a particularly informative or faithful one (the output may be perturbed with random noise to create something new, for example). Calling it a derivative work in the same way that fan-fiction is derivative is disingenuous at best.
Tracing a picture to make an outline in pencil is a derivative work; there are plenty of court cases ruling on this.
A convolutional neural network applies a kernel over the input layer to (for example) detect edges and output to the next layer a digital equivalent of a tracing.
Why would the CNN not be a derivative work if tracing by hand is?
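For what it’s worth, the kernel operation being described can be sketched in a few lines. This is a minimal illustration, not how any production model works: the 3x3 Laplacian edge-detection kernel is hand-written here, whereas a real CNN would learn its kernel weights from training data.

```python
import numpy as np

# Toy 5x5 grayscale "image": a vertical bright stripe down the middle.
image = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

# A classic Laplacian edge-detection kernel; a trained CNN's first-layer
# kernels often end up looking like edge detectors such as this.
kernel = np.array([
    [ 0, -1,  0],
    [-1,  4, -1],
    [ 0, -1,  0],
], dtype=float)

def convolve2d(img, k):
    """Slide kernel k over img ("valid" mode: no padding at the borders)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise multiply the patch by the kernel and sum.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

edges = convolve2d(image, kernel)
print(edges)  # each row is [-1.  2. -1.]: the kernel responds at the stripe's edges
```

Whether that output layer counts as a “digital tracing” in the legal sense is exactly the question under dispute; mechanically, it is just weighted sums over local patches of the input.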
Tracing is fine if you use it to learn how to draw. It’s not fine if it ends up in the finished product. Determining whether it ends up in the finished product with AI means either finding the exact pattern in the AI’s output (which you will not), or clearly understanding how AI models use their training data (which we do not).