We’re cooked.
I feel the bus one is actually quite easy to spot as fake. There’s no one with their head down looking at their phone.


Most of these images have really shitty resolution as well. Can’t they generate higher res stuff or would inconsistencies otherwise be more obvious?
Generating higher-res stuff directly requires way more compute. But there are plenty of AI upscalers out there, some better, some worse, and these are built into Photoshop now too. The difference between an AI image that is easy to spot and one that is hard to spot is using good models. The difference between one that is hard to spot and one that is nearly impossible to spot is another 20 min of work in post.
So how long until digital photos are inadmissible in court?
I mean, of course it is. It’s got basically infinite training data.
Nano Banana Pro’s built into Photoshop now, as is Flux Kontext Pro.
Seems like very thinly veiled advertising for a new version of Google’s AI image generation.
If AI is getting this good at imitating the things that signal a photo is real, then, guys: we are cooked.
“We are cooked, fellow kids!”
The author also pretty much says “all other AI was slop before this, right guys?”
Yeah, a more honest take would discuss the strengths and weaknesses of the model. Flux is still better at text than Nano Banana, for instance. There’s no “one model to rule them all,” as much as tech journalism seems to want to write like that.