Most of what passes for AI in the current environment is really a set of very powerful inference engines. And understanding the limits of inference (see, for example, Hume’s Problem of Induction) is key to knowing where these tools are actually useful and where they’re actively misleading or dangerous.
So, take the example of filling in unknown details in a low-resolution image. We might double the number of pixels and make our best guesses about what belongs in the in-between pixels that weren’t in the original image. That’s probably a pretty good use of inference.
But guessing what lies off the edge of the picture is extrapolation rather than interpolation: a far less stable and predictable process, less grounded in what is probably true.
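A minimal sketch of the same point, using a toy 1-D signal instead of an image (none of this comes from the original post; the polynomial fit is just a stand-in for any inference engine):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Known" data: noisy samples of a signal on [0, 10]. The model only sees this range.
x_known = np.linspace(0, 10, 40)
y_known = np.sin(x_known) + rng.normal(0, 0.05, x_known.size)

# Fit a modest polynomial -- our stand-in inference engine.
model = np.poly1d(np.polyfit(x_known, y_known, deg=7))

# Interpolation: predict between known samples, inside [0, 10].
x_in = np.linspace(0.5, 9.5, 200)
err_in = np.abs(model(x_in) - np.sin(x_in)).max()

# Extrapolation: predict "off the edge of the picture", outside [0, 10].
x_out = np.linspace(10, 13, 200)
err_out = np.abs(model(x_out) - np.sin(x_out)).max()

print(f"max interpolation error:  {err_in:.3f}")   # typically small
print(f"max extrapolation error:  {err_out:.3f}")  # typically blows up
```

The same fitted model that does a decent job between its data points goes badly wrong just past the edge of what it was trained on, which is the distinction the next paragraph asks domain experts to draw.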
When we use these technologies, we need domain-specific expertise to distinguish the interstitial problems, where inference engines are good at filling things in, from the ones that venture beyond the frontier of what is known or proven and are therefore susceptible to “hallucination.”
That’s why the explosion of AI capabilities will likely improve some things and worsen others, and it’ll be up to us to know which is which.