If a goldfish can trade and turn a profit, anything with a randomizer can do so.
AI would be fine. Just as good as any full time trader.
For the young’uns: the people posting this stuff are the same people who posted all the same shit about crypto when it was $12,000. Be careful who you listen to just because it’s in a meme.
Damn, you managed to stuff a whole straw man into that non sequitur!
I’ve had this happen where I fed it some ebooks and the responses it pulled were nonsense. Eventually I pulled JUST the knowledge stack and queried it, only to find it spitting back garbage.
Turns out, EPUB processing had been broken for a while, but nobody noticed… and they still haven’t fixed it, so I have to convert them to txt first…
This is funny, but just to be clear: the firms doing automated trading have been using ML for decades, with high-powered computers running custom algorithms extremely close to trading centers (often inside them) to get the lowest latency possible.
No one who does not wear their pants on their head uses an LLM to make trades. An LLM is just a next-word-fragment guesser with a bunch of heuristics and tools attached, so it won’t be good at all for something that specialized.
> No one who does not wear their pants on their head uses an LLM to make trades
LLMs are better than other methods at context and nuance for sentiment analysis. They can legitimately form part of trade-signal generation.
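Roughly what that looks like in practice, as a toy sketch: the `llm_sentiment()` function here is a stand-in for whatever LLM call you’d actually make (stubbed with a keyword check so it runs), and the long/flat/short rule is purely illustrative, not a strategy:

```python
# Toy sketch: LLM sentiment as one input to a trade signal.
# llm_sentiment() is a placeholder for a real LLM API call;
# nothing here is an actual trading strategy.

def llm_sentiment(headline: str) -> float:
    """Pretend this asks an LLM to rate a headline from -1 (bearish)
    to +1 (bullish). Stubbed with a trivial keyword check."""
    bullish = ("beats", "record", "upgrade")
    bearish = ("misses", "probe", "downgrade")
    score = sum(w in headline.lower() for w in bullish)
    score -= sum(w in headline.lower() for w in bearish)
    return max(-1.0, min(1.0, score / 2))

def signal(headlines: list[str], threshold: float = 0.3) -> str:
    """Average per-headline sentiment into a crude long/flat/short call."""
    avg = sum(llm_sentiment(h) for h in headlines) / len(headlines)
    if avg > threshold:
        return "long"
    if avg < -threshold:
        return "short"
    return "flat"

print(signal(["ACME beats earnings, record quarter",
              "ACME upgrade at BigBank"]))  # -> long
```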
Eh… wdym? The algos that trade fight at the microsecond level. They adapt to each other and never stop changing. It’s exactly the same problem. Do you think an LLM is a unique kind of neural net? They all work the same. When you try to make it sound like ML is not the same as LLMs, or as if ML just means neural nets, you don’t help anyone understand any of those concepts, because you don’t understand them yourself.
That is a crazy amount of nonsensical word salad to use to try to call someone else out for lacking understanding.
I mean, just the flawed ideas that all trading algos are neural nets, or that all neural nets are the same, or that the rectangle of ML doesn’t include neural nets… these are all wildly erratic non sequiturs.
Nope, but you’d know it if you had some knowledge, so…
I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends, and it’s not nearly as power-hungry.
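For the flavor of it, here’s the kind of tiny, cheap model that phrase covers (the numbers are made up; the point is that it’s a least-squares fit, not a data center):

```python
# Fit a linear trend to a short series and extrapolate one step.
# No GPUs, no prompts; runs in milliseconds. Data is invented.
import numpy as np

history = np.array([102.0, 104.1, 103.8, 106.0, 107.2, 109.5])
t = np.arange(len(history))

# Ordinary least squares: history ≈ slope * t + intercept
slope, intercept = np.polyfit(t, history, deg=1)
next_value = slope * len(history) + intercept
print(f"trend: {slope:+.2f}/step, next point ≈ {next_value:.1f}")
```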
What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, that have always been lumped in with the term AI, are now getting hate because they’re “AI”.
What’s worse is that management conflates the two all the time, and whenever I give the outputs of my own ML algorithm, they think it’s LLM output. Then they ask me to just ask ChatGPT to do any damn thing that I would usually do myself or feed into my ML model to predict.
? If you make and work with ML, you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ML”, then that is exactly identical to an LLM output. They don’t conflate anything. ChatGPT is also the output of “ML”.
When I say the output of my ML, I mean I give the prediction and a confidence score. For instance, if there’s a process that has a high probability of being late based on the inputs, I’ll say it’ll be late, with the confidence. That’s completely different from feeding the figures into a GPT and repeating whatever the LLM says.
And when I say “ML” I mean a model I trained on specific data to do a very specific thing. There’s no prompting and no chat-like output. It’s not a language model.
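For illustration, here’s the shape of that “prediction plus confidence” output, using the late-process example above. The features and data are invented; a real model would be trained on actual process logs:

```python
# A small, purpose-trained model that reports a prediction and a
# confidence score. Features and labels are made up for the sketch.
from sklearn.linear_model import LogisticRegression

# features: [queue_length, staff_on_shift]; label: 1 = process was late
X = [[12, 2], [3, 4], [15, 1], [5, 3], [20, 2], [2, 5]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

p_late = model.predict_proba([[14, 2]])[0][1]
print(f"late: {'yes' if p_late > 0.5 else 'no'} (confidence {p_late:.0%})")
```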
Yeah, but there’s no fundamental difference; you could take any language-model stack and train it on the same data.
Nope, same tech
Which crayon color has the best flavor?
Green, but yellow is good, like, every third one.
Yeah, it’s especially funny how people forgot that even tiny models, like the 20-neuron ones used for primitive NPCs in 2D games, are called AI too and can literally run on a button phone (not a Nokia 3310, but something only slightly more powerful). These small specialized models have existed for decades. And the most interesting part is that relatively small models (a few thousand neurons) can work very well at predicting price trends, classifying objects by their parameters, calculating the odds of a specific disease from symptoms alone, etc. They generally work better than LLMs at those same tasks.
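To make that concrete, here’s roughly what a “few dozen neurons” classifier looks like. The symptoms, labels, and data are all invented for the sketch (a real one would be trained on actual records), but the model genuinely is this small:

```python
# A tiny neural net of the kind described above: symptoms in,
# diagnosis out. One hidden layer of 20 neurons.
from sklearn.neural_network import MLPClassifier

# features: [fever, cough, rash, joint_pain] as 0/1 flags
X = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0],
     [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]]
y = ["flu", "measles", "cold", "flu", "measles", "dengue"]

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X, y)
print(net.predict([[1, 1, 0, 1]]))  # e.g. ['flu']
```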
They are the same.
Do you have an example of some games that use small neural networks for their NPC AIs? I was under the impression that most video game AIs used expert systems, at least for built-in ones.
Black & White used machine learning, if I recall. Absolutely a classic of a game; highly recommend a play if you never have. Dota 2 has a machine-learning-based AI agent for its bots, though I’m unsure if those are actually in the standard game or not.
Forza and a few other racing games throughout the years have used ML to various degrees.
And Hello Neighbor was a rather infamously bad indie game that used it.
For a topical example, Arc Raiders used machine learning to train its AI during development, though it doesn’t keep updating on the live servers.
For LLM examples, Where Winds Meet is using small LLMs for its AI dialogue interactions, which makes for very fun RP minigames.
I’m sure there are more examples, but these are what I could think of and find with a quick Google.
Well, from what I know, modern chess engines are relatively small AI models that work by taking the current state of the board as input and predicting the next best move, like Stockfish. Also, there’s a game called Supreme Commander 2 that is confirmed to use small neural models to run NPCs. And, as someone somewhat involved in game development, I can say that the indie game engine libgdx provides an AI module that can be tuned to whatever level you need for NPC decisions, and it scales any way you want.
As I understand it, chess AIs are more like brute-force searchers: they take the current board and generate a tree of all possible moves from that position, then iterate on those new positions up to a certain depth (which is what the “depth” of an engine refers to). Some use other algorithms to “score” each position and keep the search to the interesting branches, though that can introduce bias and make the engine miss moves that look bad but actually set up a better position. Ultimately, though, they need some way to compare different ending positions if the depth doesn’t bring them to checkmate along every path.
So it chooses the most intelligent move it can find, but does it by essentially playing out every possible game, kinda like Dr. Strange in Infinity War, except chess has a more finite set of states to search through.
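The skeleton of that search is depth-limited minimax. A bare-bones sketch follows; real engines add alpha-beta pruning, move ordering, and a tuned evaluation function, and to keep this self-contained the “game” here is a hand-built tree whose leaves are already-evaluated scores:

```python
# Depth-limited minimax: maximize on our turns, minimize on the
# opponent's, scoring leaves with an evaluation heuristic. Here the
# tree is hard-coded and the leaves are pre-scored positions.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf = evaluated position
        return node
    scores = (minimax(child, not maximizing) for child in node)
    return max(scores) if maximizing else min(scores)

# Two plies: our three candidate moves, then the opponent's best reply.
tree = [[3, -2], [5, 1], [-4, 8]]
print(minimax(tree, maximizing=True))  # -> 1, the least-bad guaranteed line
```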
Maybe. I haven’t studied modern chess engines that deeply. All I know is that you can either use the brute-force method, which recursively calculates every possible move, or train an AI model on existing brute-force engines so it simply guesses the best move without actually recalculating every possibility. Both approaches work, each with its own benefits and downsides.
But all of this is according to my knowledge, which may be incomplete, so I recommend double-checking this info.
LLMs are great for interactive NPCs in video games. They are bad at basically everything else.
The best use I’ve gotten out of GPT is troubleshooting Rimworld mod-list errors. Often I’ll slap the error in and it’ll tell me exactly which mod is the issue; even when it can’t, the info I get back narrows it down to 4 or 5 suspects.
The investors must be very proud.
I know, right? Billions of dollars for Rimworld tech help. Though it understands that far better than the time I tried to see if it knew GURPS; it was hilariously bad at the mechanics. It did give me an interesting skill idea I hadn’t considered for my isekai’d wizard, though. Turns out the Teaching skill is really important when the game becomes about starting a wizard school.
Where are AI ‘agents’ at now?
Does “I couldn’t open the file” actually have anything to do with the instance of the program you ran on your computer last week, or is it just the most likely written response to “did you even read the data”, based on its training set?
Real horror.
I mean, you’d have to be pretty good to lose that hard… or buy penny stocks or something.
There’s a lot of ink spilled on “AI safety”, but I think the most basic regulation that could be implemented is that no model is allowed to output the word “I”, and if it does, the model designer owes their local government the equivalent of the median annual income for each violation. There is no “I” for an LLM.
It’s this kind of knee-jerk reactionary opinion that I think will ultimately let the worst of the worst AI companies win.
Whether an LLM says “I” or not literally does not matter at all. It’s not relevant to any of the problems with LLMs/generative AI.
It doesn’t even approach discussing/satirizing a relevant issue with them.
It’s basically satire of a strawman who thinks LLMs are closer to being people than anyone, even the most AI-bro AI bro, actually thinks they are.
No, it’s pretty much the opposite. As it stands, one of the biggest problems with “AI” is that people perceive it as an entity saying something that has meaning. Phrasing LLM output as “I think…” or “I am…” makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. That framing is part of the trick the AI bros are pulling. Making the outputs less able to keep up the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.
> No, it’s pretty much the opposite. As it stands, one of the biggest problems with “AI” is that people perceive it as an entity saying something that has meaning.
This has to be the least informed take I have seen on anything, ever. It literally dismisses all the most important issues with AI and pretends that the “real” problem (as if there were only one that matters) is people misunderstanding it in a way I see no one actually doing.
It’s clear to me you must be so deep in an anti-AI bubble that you have no idea how people who use AI think about it, how it’s used, why it’s used, or what the problems with it are.
What do you think the most important issues with AI are? I see a lot of ‘you’re wrong’ but no indication as to how or why.
Why would I need to give you a list to point out what is wrong with your statement?
They’re obvious, though:

- Copyright issues with the sale of AI services
- Worker displacement without proper social systems to manage it
- Unclear biases within black-box systems
- The requirement to change education based on their existence
- The environmental damage caused by the energy used in training

The list is long, quite frankly. Longer than this, even.
Why…?
Because why bother saying anything if you aren’t going to say anything? Offering correct information gives the other person a chance to correct and improve. Just saying “WRONG!” is a slap in the face that only serves to let you feel superior: masturbatory pretense.
As for the rest, those are all clearly issues, but none of them are of a sort where handling the one I raised and handling them are mutually exclusive. And at least the second item actually follows from the one I mentioned: people being tricked into thinking LLMs are capable of thought contributes to decision-makers believing that people can simply be replaced. Viewing the systems as intelligent is also a big part of what makes people trust them enough to blindly accept the biases in their results.

Ideally, I’d say AI should be kept purely in the realm of research until it’s developed enough for isolated use as a tool, but good luck getting that to happen. Post hoc adjustments are probably the best we can hope for, and my little suggestion is a fun way to at least try to mitigate some of the effects. It’s certainly more likely to address some element of the issues than just saying “WRONG!”
The fun part is, while the issues you mentioned all have the potential to create broad, hard-to-define harm if left unchecked, there are already examples of direct harm coming from people treating LLM outputs as meaningful.
“Hmm… I’m good with statistics, scripting, and I have some extra cash on hand…”
“I can just mix all these into the cauldron, stir it up a lil bit, aaand…”
“oh my god it’s gone. it’s all gone. i owe money now…”
“Guhh”
Average r/WSB thread
they don’t need AI to lose 99%
This thing is broken. It keeps telling me to just dollar cost average and not do chart astrology at all!
Sorry 😜, I was trying to generate a seahorse emoji.
🐬 There we go, a seahorse!
Wait, that’s wrong. Sorry 😜, I was trying to generate a seahorse emoji.
🐳 Haha, got it, it’s a seahorse!
Oh no, not again. Wait, that’s wrong. Sorry 😜, I was trying to generate a seahorse emoji.
🐙 I finally did it! Seahorse achieved!
No, what’s wrong with me? Why can’t I do anything right? Oh no, not again. Wait, that’s wrong. Sorry 😜, I was trying to generate a seahorse emoji.
I tried to get one to write an interface to a simple API, and gave it a link to the documentation. Mostly because it was actually really good documentation for a change. About half a dozen endpoints.
It did. A few tweaks here and there and it even compiled.
But it was not for the API I gave it. Wouldn’t tell me which API it was for either. I guess neither of us will ever know.
I’ve actually used ChatGPT (or was it Cursor? I don’t remember now) to help write a script for a program with a very convoluted (to me, a non-programmer) but decently well-documented API.
It only got a few things right, but the key was that it got enough right for me to go and fix the rest. This was for a task I’d been trying to do on and off for a few years. Was nice to finally have it done.
But damn, does “AI” ever suck at writing the code I want it to. Or maybe I just suck at giving prompts, idk. One of my bosses uses it quite a bit to program stuff, and he claims to be quite successful with it. However, I know that he barely validates the result before claiming success, so… “look at this output!” — “okay, but do those numbers mean anything?” — “idk, but look at it! it’s gotta be close!”
Cry for help; it was trying to get you to interface with its own API, to either fix it or end it.
You’re absolutely right. I’ve now read your CSV data, and made new trade recommendations. By coincidence, they are the same as the last recommendations, but this time they are totally valid.
Ma! I need you to withdraw your retirement fund.
I read that in Cliff Clavin’s voice.