Even the best models fine-tuned for coding were still trained on both good and bad examples of human programming. And since it’s not AGI, just probability used to generate the code, you’re going to get crap programming logic depending on how often such things were used and suggested by humans to other humans. Googling for an answer on how to code something pulls up all sorts of answers from many sources, but reading through them, many are terrible. An LLM doesn’t know that; it just knows that humans liked some answers better than others, so GIGO.
Gorilla In Gorilla Out?
Giraffe In Giraffe Out
Gorilla In Giraffe Out
That would be the real trick.
Fantastic for building BaaS apps
Bullshit as a Service?
Bananas as a Service :)
bananas in pyjamas
Sounds like a good time
Almost definitely both were involved.
crack heads, meth heads, what’s the diff
What is the Tea hack?
An app called Tea™ was marketed as a safe space for women and used government-issued IDs to verify users.
4Chan users leaked all of the IDs onto the larger internet.
Wow what a fuckin shitshow. I have so many follow-up questions
So it essentially became a honey trap, either through malice or sheer incompetence.
Well, I get what you mean, but the “honey trap” idiom in English, also called a “honeypot scheme”, usually refers to using romantic connections to influence people into making decisions or releasing confidential information.
Honeypot is a common term in computing/cybersecurity, setting up fake important servers so bad actors invade and the security team can analyze what got in and how to deal with it.
Well it doesnt surprise me that the IT team doesn’t know about a sexual terminology, tbh.
They’re all over master-slave, tho 😏
I remember when a senior developer where I worked got tired of connecting to the servers to check their configuration, so they added a public-facing REST endpoint that just dumped the entire active config, including credentials and secrets.
That was a smaller slip-up than exposing a database like that (he just forgot that the config contained secrets), but still funny that it happened.
I would have put IP address access restrictions on that at the very least. I may have even done something like that more than once for various tools in the past.
That way it acts completely open to people (or other servers) in the right places and denies all knowledge to anything else.
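That allowlist idea can be sketched with nothing but Python’s standard library — the networks and function name here are made up purely for illustration:

```python
# Hypothetical sketch of restricting a sensitive endpoint by caller IP.
# The ranges below are example values, not from any real deployment.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # internal servers
    ipaddress.ip_network("203.0.113.0/24"),  # office egress range (example)
]

def is_allowed(remote_ip: str) -> bool:
    """Return True if the caller's IP falls inside any allowed network."""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```

Callers inside the listed ranges get the real config; everything else can be handed a 404 and denied all knowledge.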
That’s not a “senior developer.” That’s a developer that has just been around for too long.
Secrets shouldn’t be in configurations, and developers shouldn’t be mucking around in production, nor with production data.
Yeah, the whole config setup in that project was an eldritch horror of legacy code, too ingrained in both the services and the tooling to be modified without massive rewrites.
That’s just a senile developer
who’d have thought that javascript and client side programming was incredibly susceptible to security flaws and deeply unsafe
who’d have thought that being a shitty programmer was incredibly susceptible to security flaws and deeply unsafe, instead of javascript
No, it must be JavaScript that is the problem
principal_skinner.jpg.exe
Microsoft Defender identified malware in this executable.

Wow. It actually identified something?
It’s good enough for corporate (with multiple other lines of defense).
As much as I dislike JavaScript, it isn’t responsible for this. The person (or AI) and their stupidity is.
but it didn’t help; it was basically the gasoline
In what way?
When I tried making a website with the Gemini CLI, it deadass used string interpolation for SQL queries, so anything is possible.
Robert'); DROP TABLE Students;--
aw bobby
Peak Vibe Coding results.
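For anyone following along, the difference between an interpolated query and the safe version looks like this — a minimal sqlite3 sketch, with an invented table and Bobby Tables as the payload:

```python
import sqlite3

# In-memory DB and a throwaway table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE Students;--"

# UNSAFE (the vibe-coded pattern): splicing user input straight into SQL.
#   conn.executescript(f"INSERT INTO students (name) VALUES ('{name}')")

# Safe: a parameterized query treats the value as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

row = conn.execute("SELECT name FROM students").fetchone()
print(row[0])  # the payload is stored as an inert string
```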
while True:
Jesus Christ
You know that’s not the Tea code, but the downloader, right?
They’re also not using requests very efficiently, so who knows.
Other reports state the Tea backend was Vibe Coded: https://www.ainvest.com/news/tea-app-data-breach-exposes-72-000-users-ai-generated-code-security-lapse-2507/
Sure, it might be, I’m not saying it isn’t. All I’m saying is: the screenshot shows the code someone wrote to download the images. It’s not part of the Tea codebase.
There’s nothing wrong with manually breaking a loop.
There’s nothing wrong with eating a banana with a knife and fork, either.
Except living with the shame.
Most monkey-esque insult
Well these people probably don’t wash their hands so knife fork is the most sanitary way.
An infinite loop used to be such a rank code smell back when I was a junior, specifically because I was a noob: I’d write giant loops, like 50 lines long, invariably get the exit condition wrong, and then my computer would lock up and I’d have to hard power cycle.
But yeah, now it’s a totally acceptable little pattern imho.
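The pattern being defended, in toy form (nothing to do with the actual Tea code):

```python
def sum_until_sentinel(values, sentinel=None):
    """Classic while-True-with-break: the exit condition lives mid-loop,
    right where the data needed to check it becomes available."""
    total = 0
    it = iter(values)
    while True:
        v = next(it, sentinel)
        if v == sentinel:
            break  # the loop's single, explicit exit
        total += v
    return total

print(sum_until_sentinel([1, 2, 3]))  # 6
```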
eh
Disabling directory indexing and making the filenames UUIDs would make the directory inviolable even if the address were publicly available.
Sounds like a good case for brute forcing the filenames. Just do the proper thing and don’t leave your cloud storage publicly accessible.
While proper security is better, you’re not gonna brute force UUIDs.
As long as you’re not rate limited, you absolutely could.
A UUID v4 has 122 bits of randomness. Do you know how long that would take to brute-force, especially with network limitations?
Taking a long time doesn’t make it an impossibility. The fact that it’s limited to 122 bits, in and of itself, makes the possibility of a brute force a mathematical guarantee.
For all practical purposes, it’s impossible.
It’s not, though. And thinking that it is impossible is why DES, for example, was “translatable” by the NSA for decades. Never assume something is impossible just because it’s difficult.
By this logic, all crypto is bruteforcable, on a long enough timeline.
A 122-bit random number has 2^122 = 5316911983139663491615228241121378304 possible values. Even if it were possible to check 1 trillion records per second, it would take about 168598173000000000 years to check every UUID and get the info on all the users. Even if every human on Earth signed up for the app (~8 billion people) and you just wanted to find any one valid UUID, the odds of generating a UUID that happens to be valid in their DB are basically 0. You can do the math yourself with the Birthday Paradox to determine how many UUIDs you would need to guess before the probability that any one of them is valid against a population of the whole world exceeds 50%.
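The arithmetic is easy to sanity-check with Python’s arbitrary-precision integers (the attacker speed is a deliberately absurd assumption):

```python
# Rough brute-force budget for UUIDv4's 122 random bits.
keyspace = 2 ** 122                  # ~5.3e36 possible values
checks_per_second = 10 ** 12         # a wildly generous 1 trillion/sec
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace // checks_per_second // seconds_per_year
print(f"{years:.3e}")  # on the order of 1.7e17 years for the full space
```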
You should read up on the NSA’s Translator. Granted, it’s relatively outdated given shifting text algorithms, but for a very long time (about half a century), it was able to brute-force any key, regardless of length, in under an hour.
You cannot!
I cannot. But the bruteforce is a mathematical guarantee.
And has nothing to do with my proposition.
Can’t be done.
Bet you could keep/reuse UUIDs for someone/something that gets updated and fetch the new data even if you “shouldn’t”.
It could work in theory but in practice there are always a billion things that go wrong IMO.
Not really sure what you mean by reusing UUIDs, but there’s nothing bad about using UUIDs in URLs for content you don’t want scraped by bots. Sites like Google Photos already use UUIDs in the URL for photos, and don’t require any authentication to see the image as long as you have the URL; you can try this yourself by copying an image’s URL and opening it in a private browsing window. Every so often someone realizes the actual image URL is public and thinks they’ve found a serious issue, but it isn’t one, because of the massive key space a UUID provides: it would be infeasible to check every possible URL, even though they’re publicly accessible.
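The scheme amounts to using the URL itself as the secret. A sketch of it — the CDN base URL and function name are made up:

```python
import uuid

def make_object_name(extension: str = "jpg") -> str:
    """Name an upload with a v4 UUID so its URL is effectively unguessable."""
    return f"{uuid.uuid4().hex}.{extension}"

name = make_object_name()
url = f"https://cdn.example.com/photos/{name}"  # hypothetical base URL
print(url)
```

The URL only stays unguessable if it’s never leaked, though; share it once and the “authentication” travels with it.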
You point out the “vulnerability” yourself: sometimes (when it’s Google) it works as designed, but a less robust site could grant full access through a UUID, and then someone shares an image with it and bam, they have access to more than they should. History is littered with “bulletproof” things like this ending up being used wrongly.
Security through obscurity never works.
It’s not security through obscurity in this case. The filenames can’t be obtained or guessed through brute force. At least not with current technology or processing power…
Security through obscurity is when you hide implementation details.
Saying that my suggestion is security through obscurity is the same as telling that ASLR is security through obscurity…
Until the pseudo-random UUID generator can be reverse engineered. Makes me think of this video: https://youtu.be/o5IySpAkThg
Anyway, I think we’re on the same wavelength and both agree that the implementation as is isn’t production-ready to say the least ;)
Guess someone spilled the tea
I always get irrationally angry when I see Python code using os.path instead of pathlib. What is this, the nineties?
What big advantages does pathlib provide? os.path works just fine
- Everything is in one library which offers consistency for all operations.
- You can use forward slashes on Windows paths, which makes for much better readability.
- You can access all the parts of a pathlib object with attributes like .stem, .suffix or .parent.
- You can easily find the differences between paths with .relative_to()
- You can easily build up complex paths with the / operator (no string additions).
Just off the top of my head.
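To make the list concrete (paths invented; PurePosixPath keeps the output identical on every OS):

```python
from pathlib import PurePosixPath

# Build a path with the / operator -- no string concatenation.
p = PurePosixPath("data") / "reports" / "2024" / "summary.csv"

print(p.stem)                  # summary
print(p.suffix)                # .csv
print(p.parent)                # data/reports/2024
print(p.relative_to("data"))   # reports/2024/summary.csv
```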
if you don’t need those, why burden the program with another dependency?
It’s in the standard library, just like os or shutil.
I suppose os.path is simpler? It’s just strings and string operations.
Python is all about ‘attention efficiency,’ which there’s something to be said for. People taking the path of least resistance (instead of eating time learning the more complex/OOP pathlib) to bang out their script where they just need to move a file or something makes sense. I’m with you here, but it makes sense.
…Also, os.path has much better Google SEO, heh.
And what’s with the string addition? Never heard of f-strings or even .format()?
Make a PR
Be the change you want to see in the world.
deleted by creator
dev came from marketing. pictures wouldn’t show up with all that security enabled.
AI just enables the shit programmers to create a greater volume of shit
My favorite one I’ve seen so far was “AI can take a junior programmer and make them a 10x junior programmer.”
This reminds me of how I showed a friend and her company how to get databases from the BLS, and it’s basically all just text files with URLs. “What API did you call? How did you scrape the data?”
Nah man, it’s just… there. As government data should be. They called it a hack.
When getting data legitimately is beyond them…
ah yes, the forbidden curl hack
These people should serve jail time. I’m not kidding.
I’m no lawyer, but this seems like at least grounds for a class-action lawsuit, I would think. Privacy and security are implied (however ironic for an app like this) when you require users to upload their PII.
Also, I assume their privacy policy didn’t mention that they were just gonna publish their users’ PII.
Does anyone have a source for this?
The original article is paywalled (I mean, registration-walled?), this summary is not
404 Media reported that 4chan users claimed to be sharing personal data and selfies from Tea after discovering an exposed database.