In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy.
late sex offender Jeffrey Epstein
I’m so done with all the whitewashing. “Sex offender” sounds like I behaved wrong in consensual sex. What this prick was is a pedophile. A child rapist. A kid-abuser and -rapist. But surely no “late financier” or whatever else the media chose over the facts.
Also a slaver and child abductor.
And, it seems, murderer
Oh right, my bad 😐
Some liberal on BlueSky tried to use genAI to unmask ICE agents.
How do these AI models generate nude imagery of children without having been trained with data containing illegal images of nude children?
The datasets they are trained on do in fact include CSAM. These datasets are so huge that it easily slips through the cracks. It’s usually removed whenever it’s found, but I don’t know how that actually affects models that have already been trained on it; to my knowledge, it’s not possible to selectively “untrain” a model, so it would need to be retrained from scratch. Plus I occasionally see news stories about new CSAM being found in training data.
It’s one of the many, many problems with generative AI
Tbf it’s not needed. If it can draw children and it can draw nude adults, it can draw nude children.
Just like it doesn’t need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.
I don’t think so. Speaking as a parent.
What you don’t think?
Why does being a parent give any authority in this conversation?
I have changed diapers and can attest to the anatomical differences between child and adult, and therefore know AI cannot extrapolate that difference without accurate data clarifying these differences. AI would hallucinate something absurd or impossible without real image data trained in its model.
We have all been children, we all know the anatomical differences.
It’s not like children are alien, most differences are just “this is smaller and a slightly different shape in children”. Many of those differences can be seen on fully clothed children. And for the rest, there are non-CSAM images that happen to have nude children. As I said earlier, it is not uncommon for children to be fully nude at beaches.
We are human beings. AI is not. It never had that experience of being or caring for a child. It does not (or should not) have that data in its dataset.
That’s not exactly true. I don’t know about today, but I remember about a year ago reading an article about an image generation model not being able, with many attempts, to generate a wine glass full to the brim, because all the wine glasses the model was trained on were half-filled.
Did it have any full glasses of water? According to my theory, it has to have data for both “full” and “wine”.
Your theory is more or less incorrect. It can’t interpolate as broadly as you think it can.
The wine thing could prove me wrong if someone could answer my question.
But I don’t think my theory is that wild. LLMs can interpolate, and that is a fact. You can ask it to make a bear with duck hands and it will do it. I’ve seen images on the internet of things similar to that generated by LLMs.
Who is to say interpolating nude children from regular children+nude adults is too wild?
Furthermore, you don’t need CSAM for photos of nude children.
Children are nude at beaches all the time, there probably are many photos on the internet where there are nude children in the background of beach photos. That would probably help the LLM.
You are confusing LLMs with diffusion models. LLMs generate text, not images. They can be used as inputs to diffusion models and are thus usually intertwined, but they are not responsible for generating the images themselves. I am not completely refuting your point in general. Generative models are capable of generalising to an extent, so it is possible that such a system would be able to generate such images without having seen them. But how anatomically correct that would be is an entirely different question, and the way these companies vastly sweep through the internet makes it very possible that these images were part of the training data.
Well yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the image generator and the human text input. Well, just the tokenizer probably. But that’s beside the point. Both LLMs and image generators are generative AI. And have similar mechanisms. They both can create never-before seen content by mixing things it has “seen”.
I’m not claiming that they didn’t use CSAM to train their models. I’m just saying that this is not definitive proof of it.
It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.
That’s not true, a child and an adult are not the same, and AI cannot do such things without the training data. It’s the full wine glass problem. And the only reason THAT example was fixed after it was used to show the methodology problem with AI is because they literally trained it for that specific thing to cover it up.
I’m not saying it wasn’t trained on CSAM or defending any AI.
But your point isn’t correct
What prompts you use and how you request changes can get the same results. Clever prompts already circumvent many hard-wired protections. It’s a game of whack-a-mole, and every new iteration of an AI will require different methods to bypass those protections.
If you can ask it the right ways it will do whatever a prompt tells it to do
>!You can’t tell it to make a nude image of a child, I assume, but you can tell it to make the subject in the image of the last prompt 60% smaller and adjust it as necessary to make it believable.!< That probably shouldn’t work, but I don’t put anything past these assholes.
It doesn’t need actual images/data in the training if you can just tell it how to get the results you want by using different language that it hasn’t been told not to accept.
The AI doesn’t know what it is doing, it’s simply running points through its system and outputting the results.
It still seems pretty random. So when they say they fixed it so it won’t do something, all they likely did was reduce the probability, so we still get screenshots showing what it sometimes lets through.
Can’t ask them to sort that out. Are you anti-ai? That’s a crime! /s
Easy answer is, they don’t.
Though that’s just the one admitting to it.
A slightly more nuanced answer is, it probably depends; there’s likely to be some inference made between age ranges, but my guess is that it’d be sub-par, given that it sometimes struggles with reproducing images it has a tonne of actual data for.
Won’t work and if it does work, the resulting image has little to nothing to do with the original.
Source: I opened a badly taken .raw file a few thousand times and I know what focal length means, come at me.
Do you have a good way to remember which way fast and slow f-stops go? I always have to trial-and-error when adjusting camera settings to go the right direction, especially when listening to someone talk about aperture.
Wider open, you let in more light and can use a faster shutter speed; more closed, you get less light and need a longer shutter speed.
And f stops work backwards. Think of it as percent of sensor covered. The bigger the number the more covered it is and the smaller the hole/aperture.
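A rough way to put that trade-off into numbers (just my own sketch; the f-numbers and shutter speeds below are made-up examples, not from anyone’s actual settings):

```python
# Light reaching the sensor scales roughly with 1 / f_number**2, so keeping the
# exposure constant means shutter time has to scale with f_number**2.
def equivalent_shutter(base_shutter_s: float, base_f: float, new_f: float) -> float:
    """Shutter time at new_f that matches the exposure of base_f at base_shutter_s."""
    return base_shutter_s * (new_f / base_f) ** 2

# If f/2.8 needs 1/1000 s, stopping down to f/4 needs roughly twice as long:
print(equivalent_shutter(1 / 1000, 2.8, 4.0))  # ~0.002 s, i.e. about 1/500 s
```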
So Wide open = low coverage = small f stop -> lots of light -> “fast” shutter speed. And then the other way around. I think you finally worded it in a way it can stick in my brain! I like thinking about the f value as how much you’re covering the lens.
I like trying to simplify stuff to basic language and I am happy it was helpful
To add more specifics here for you, note that the f-stop is usually shown as a fraction, like f/2.8, f/4.0, etc.
So first of all, since the number is on the bottom of the fraction, that’s where you get smaller numbers = more light.
It’s also shown as a fraction because it’s a ratio, between your lens’s focal length (not focal distance to the subject) and the diameter of the aperture.
So if I’m taking a telephoto shot with my 70-200 @ 200 with the aperture wide open at f/2.8, that means the aperture should appear as 200/2.8 = 71.4mm. And that seems right to me! If you’re the subject looking into the lens the opening looks huge.
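If it helps, here’s that ratio as a tiny Python sketch (my own illustration; the 50mm comparison line is hypothetical, not from this thread):

```python
# f-number = focal length / aperture diameter, so the diameter falls out directly.
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture diameter implied by a focal length and f-number."""
    return focal_length_mm / f_number

print(aperture_diameter_mm(200, 2.8))  # ~71.4 mm, the 70-200 @ 200, f/2.8 example above
print(aperture_diameter_mm(50, 1.8))   # ~27.8 mm, a hypothetical 50mm prime for comparison
```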
What does focal length mean?
It’s the distance from the lens to the focal point, as in where the picture focuses on the sensor behind the lens. If you have a very long focal length like a telescope, you can see things further away but the range you can see is very small. With a short focal length you can’t see as far but you can see a much wider view. Check out this chart:

If you get very close to something with a short focal length or far away from it with a long focal length you can get essentially the same picture of a main subject (although what you can see in the background will be different), but even then a short lens will sort of taper your subject closer to a single point and a long lens will widen it. You can see this effect easily on faces: see this gif or this gif or this picture for an example.
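If you want to play with the numbers yourself, the usual approximation is angle of view = 2 * atan(sensor width / (2 * focal length)). A quick sketch (my own, assuming a full-frame 36mm-wide sensor and a rectilinear lens focused at infinity):

```python
import math

def horizontal_aov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Approximate horizontal angle of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (24, 50, 200):
    print(f"{f}mm lens: ~{horizontal_aov_deg(f):.0f} degrees horizontal")
# 24mm -> ~74 degrees, 50mm -> ~40 degrees, 200mm -> ~10 degrees:
# longer focal length means a narrower view, just like the chart shows.
```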
Wow, what an amazing reply, thank you very much. Those images help a lot.
Removed by mod
I’d love the ability to hide images by default on lemmy.
I’m glad we already have the option to block you, though.
So my company was involved in a lawsuit, and I was asked to help review files and redact information. They used specific software that all the files were loaded into, and the software performed the redactions and saved the redacted files. It really is mind-blowing that the government wouldn’t use a similar process.
These are the clowns that redacted the first files with MS black highlight, because DOGE cut their Adobe accounts.
I doubt any of these people are accessing X over Tor. Their accounts and IPs are known.
In a sane world, they’d be prosecuted.
In MAGAMERICA, they are protected by the Spirit of Epstein
What crime do you imagine they would be committing?
I don’t know what they hope to gain by seeing the kid’s face, unless they think they can match it up with an Epstein family member or something (seems unlikely to be their goal).
I am so glad I no longer interact with that dumpster fire of a social network. It’s like the Elon takeover and the monetization program brought every weirdo in the world out of the woodwork.
deleted by creator
unblur the face with 1000% accuracy
They have no idea how these models work :D
Though it is 2026. Who’s to say Elon didn’t feed the unredacted files into grok while out of his face on ket 🙃
Enhance!
Uncrop!
Or percentages
It’s the same energy as “don’t hallucinate and just say if you don’t know the answer”
and don’t forget “make no mistakes” :D

Barrett O’Brien
biblically accurate cw casting
It feels like being back on the playground
“nuh uh, my laser is 1000% more powerful”
“oh yea, mine is googolplex percent more powerful”
Wait, what? My son has been using “googleplex” when he wants a really big number. I thought it was a weird word he made up. I guess it’s a thing…
It is, with a slightly different spelling. A googol is 10^100, and a googolplex is 10^(googol) or, written conventionally, a one followed by a metric shit ton of zeros.
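Just to show the scale (my own throwaway example):

```python
googol = 10 ** 100
print(len(str(googol)) - 1)  # 100 zeros after the leading 1
# A googolplex is 10 ** googol: a one followed by a googol zeros,
# far too many digits to ever write out or store.
```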
I wondered if the word had something to do with a googol (I learned that word from World Book Encyclopedia kids books), but I figured my young son didn’t know that word yet and just invented some word using Google. Crazy how language can get around on the playground.
My son also uses it frequently, he learnt that word from a Captain Underpants book or one of the other works from Dav Pilkey, so maybe it’s from there.
We just started reading those books at bedtime so I’ll be able to report if I see that word in there.
Fun fact, Google was supposed to be named Googol, but the guy who was tasked with registering the domain name misunderstood. As history would tell, they just decided to stick with Google.
deleted by creator
People are so fucking sick.
Put all these creepy bastards on a publicly viewable list.
Didn’t they already do that in their public posts or whatever? They don’t care.
Maybe, but I never said anyone should reveal their identities and places of residence to the public. In fact, nobody should do that, because that would be illegal.
Of course they are. Who’s left on Twitter nowadays? Elon acolytes?
When I realized that tweets from paid accounts were always stuck at the top (really??), I immediately stopped using it.
Sounds about right for x users
And Grok, being trained on Elon’s web history, doesn’t need to be asked to find, let alone unblur, said images.