Ah, you used logic. That’s the issue. They don’t do that.
I like referring to LLMs as VI (Virtual Intelligence from Mass Effect) since they merely give the impression of intelligence but are little more than search engines. In the end, all they’re doing is displaying expected results based on a popularity algorithm. However, they do this inconsistently due to bad data in and limited caching.
AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.
Something marketed as AGI should be treated as AGI when proving it isn’t AGI.
I don’t think AI is being marketed as awesome at everything. It’s got obvious flaws. Right now it’s not good for stuff like chess, probably not even tic-tac-toe. It’s a language model; it’s hard for it to calculate the playing field. But AI is in development, and it might not need much to start playing chess.
Marketing does not mean functionality. AI is absolutely being sold to the public and enterprises as something that can solve everything. Obviously it can’t, but it’s being sold that way. I would bet the average person would be surprised by this headline based solely on what they’ve heard about the capabilities of AI.
I don’t think anyone is so stupid as to believe current AI can solve everything.
And honestly, I didn’t see any marketing material that would claim that.
You are both completely overestimating the intelligence level of “anyone” and not living in the same AI-marketed universe as the rest of us. People are stupid. Really stupid.
I don’t understand why this is so important; marketing is all about exaggerating, so why expect something different here?
It’s not important. You said AI isn’t being marketed to be able to do everything. I said yes it is. That’s it.
The CEO of Zoom, that is the video calling software, wanted to train AIs on your work emails and chat messages to create AI personalities you could send to the meetings you’re paid to sit through, while you drink a Corona on the beach and receive a “summary” later.
The CEO of Zoom, that is the video calling software, seems like a pretty stupid guy?
Yeah. Yeah, he really does. Really… fuckin’… dumb.
Same genius who forced all his own employees back into the office. An incomprehensibly stupid maneuver by an organization that literally owes its success to people working from home.
What the tech is being marketed as and what it’s capable of are not the same, and likely never will be. In fact, things are very rarely marketed as they truly behave, and that’s intentional.
Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the “illusion of reasoning”. If it takes a scientific paper to understand what these models are and are not capable of, I assure you they’ll be selling snake oil for years after we fully understand every nuance of their capabilities.
TL;DR Rich folks want them to be everything, so they’ll be sold as capable of everything until we repeatedly demonstrate they aren’t.
I think in many cases people intentionally or unintentionally disregard the time component here. AI is in development. I think what is being marketed here, just like in the stock market, is a piece of the future. I don’t expect the models I use to be perfect and never make mistakes, so I use them accordingly. They are useful for what I use them for, and I wouldn’t use them for chess. I don’t expect laundry detergent to be as perfect as in the commercial either.
Really then why are they cramming AI into every app and every device and replacing jobs with it and claiming they’re saving so much time and money and they’re the best now the hardest working most efficient company and this is the future and they have a director of AI vision that’s right a director of AI vision a true visionary to lead us into the promised land where we will make money automatically please bro just let this be the automatic money cheat oh god I’m about to
Those are two different things.
- they are cramming AI everywhere because nobody wants to miss the boat and because it plays well in the stock market.
- the people claiming it’s awesome and that they are doing I don’t know what with it, replacing people, are mostly influencers and a few deluded people.
AI can help people in many different roles today, so it makes sense to use it. Even in roles where it’s not particularly useful, it makes sense to prepare for when it is.
it makes sense to prepare for when it is.
Pfft, okay.
Not to help the AI companies, but why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff? It’s obvious they’re shit at it, so why do they answer anyway? It’s because they’re programmed by know-it-all programmers, isn’t it?
…or a simple counter to count the r’s in strawberry. Because that’s more difficult for an LLM than one might think, and they are starting to do this now.
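For scale, the counter itself is a one-liner in ordinary code; something like this (a trivial sketch, not anything the vendors have published):

```python
# Counting letters is trivial in code; LLMs struggle because they
# see tokens, not individual characters.
print("strawberry".count("r"))  # 3
```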
I think they’re trying to do that. But AI can still fail at that lol
why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?
Because the AI doesn’t know what it’s being asked; it’s just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.
“Why doesn’t the man in the Chinese room just use a calculator for math questions?”
why don’t they program them
AI models aren’t programmed traditionally. They’re generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model’s calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
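As a toy sketch of that loop (numbers made up; real training adjusts billions of weights via gradient descent):

```python
# Toy version of the train-on-rated-answers loop described above:
# produce an answer, rate it, adjust the weight so the rating improves.
weight, lr = 0.0, 0.01
dataset = [(2.0, 4.0), (3.0, 6.0)]   # (test prompt, expected answer) pairs

for _ in range(1000):                # "repeat a few billion times"
    for prompt, expected in dataset:
        answer = weight * prompt           # the model's answer
        error = answer - expected          # how far off the rating says it is
        weight -= lr * 2 * error * prompt  # nudge toward a better score

print(weight)  # ~2.0: now scores high on the test prompts it was tuned on
```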
Then someone asks it how many R’s are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process which takes an enormous amount of time and computational power each time it’s done, only for people to once again quickly find some kind of prompt it doesn’t answer well.
There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue. It’s trying to get one model to be good at absolutely everything.
If you pay for ChatGPT you can connect it with WolframAlpha and it relays the math to it.
I don’t pay for ChatGPT and just used the Wolfram GPT. They made the custom GPTs non-paid at some point.
This is where MCP comes in. It’s a protocol for LLMs to call standard tools. Basically the LLM would figure out the tool to use from the context, then figure out the parameters from those the MCP server says are available, send the JSON, and parse the response.
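Roughly like this; MCP rides on JSON-RPC 2.0, though the tool name and arguments below are made up for illustration:

```python
import json

# Hypothetical MCP "tools/call" request: the LLM picked the tool and
# filled in the arguments; the host sends it to the MCP server and
# hands the JSON result back to the LLM to parse.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "best_move",  # made-up chess tool
        "arguments": {"fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"},
    },
}
print(json.dumps(request, indent=2))
```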
From a technology standpoint, nothing is stopping them. From a business standpoint: hubris.
To put time and effort into creating traditional logic-based algorithms to compensate for this generic math model would be to admit what mathematicians and scientists have known for centuries: models are good at finding patterns, but they do not explain why a relationship exists (if it exists at all). The technology is fundamentally flawed for the use cases OpenAI is trying to claim it can be used in, and programming around it would be to acknowledge that.
Because the LLMs are now being used to vibe code themselves.
Because they’re fucking terrible at designing tools to solve problems, and they are obviously less and less able to keep pretending this is an omnitool that can do everything with perfect coherency (and if it isn’t working right, it’s because you’re not believing or paying hard enough).
Or they keep telling you that you just have to wait it out. It’s going to get better and better!
why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?
They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to calculate math using python for me, but that doesn’t seem to happen anymore, probably due to security issues with letting the AI run arbitrary python code.
ChatGPT however was not designed to play chess, so I don’t see why OpenAI should invest resources into connecting it to a chess API.
I think especially since adding custom GPTs, adding this kind of stuff has become kind of unnecessary for base ChatGPT. If you want a chess engine, get a GPT which implements a Stockfish API (there seem to be several GPTs that do). For math, get the Wolfram GPT which uses Wolfram Alpha’s API, or a different powerful math GPT.
They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
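The code-for-math pattern boils down to something like this (the model output is canned here, and a real host would sandbox the execution properly):

```python
# Asked "what is 1234 * 5678?", the model emits code instead of
# guessing digits token by token; the host runs it and returns the result.
model_generated_code = "result = 1234 * 5678"

namespace = {}
exec(model_generated_code, {"__builtins__": {}}, namespace)  # real systems use a proper sandbox
print(namespace["result"])  # 7006652
```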
Articles like this are good because they expose the flaws of AI and show that it can’t be trusted with complex multi-step tasks.
It helps people who think AI is close to human-level see that it’s not, and that it’s missing critical functionality.
People already think ChatGPT is a general AI. We need more articles like this showing its ineffectiveness at being intelligent. Besides, it helps find the limitations of this technology so that we can hopefully use them to argue against shoving it into every single place.
In all fairness, machine learning in chess engines is actually pretty strong.
AlphaZero was developed by the artificial intelligence and research company DeepMind, which was acquired by Google. It is a computer program that reached a virtually unthinkable level of play using only reinforcement learning and self-play in order to train its neural networks. In other words, it was only given the rules of the game and then played against itself many millions of times (44 million games in the first nine hours, according to DeepMind).
Sure, but machine learning like that is very different to how LLMs are trained and their output.
I mean, OpenAI seems to forget it isn’t.
Well, so much hype has been generated around ChatGPT being close to AGI that now it makes sense to ask questions like “can ChatGPT prove the Riemann hypothesis”.
Even the models that pretend to be AGI are not. It’s been proven.
I think that’s generally the point: most people think ChatGPT is this sentient thing that knows everything and… no.
Do they though? No one I talked to, not my coworkers that use it for work, not my friends, not my 72-year-old mother, thinks it’s sentient.
Okay, I maybe exaggerated a bit, but a lot of people think it actually knows things, or is actually smart. Which… it’s not… at all. It’s just pattern recognition. Which I assume was the point of showing it can’t even beat the goddamn Atari: it cannot think or reason, it’s all just copypasta and pattern recognition.
You’re not wrong, but keep in mind ChatGPT advocates, including the company itself are referring to it as AI, including in marketing. They’re saying it’s a complete, self-learning, constantly-evolving Artificial Intelligence that has been improving itself since release… And it loses to a 4KB video game program from 1979 that can only “think” 2 moves ahead.
I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information back, and a lot of chess is memorization of historical games and types, it might actually perform well. No, it can’t think, but it can remember everything, so at some point that might tip the results in its favor.
Regurgitating an impression of, not regurgitating verbatim, that’s the problem here.
Chess is 100% deterministic, so it falls flat.
I’m guessing it’s not even hard to get it to “confidently” violate the rules.
Most people do. It’s just called AI in the media everywhere and marketing works. I think online folks forget that something as simple as getting a Lemmy account by yourself puts you into the top quintile of tech literacy.
Yet even on Lemmy people can’t seem to make sense of these terms and are saying things like “LLM’s are not AI”
OpenAI has been talking about AGI for years, implying that they are getting closer to it with their products.
https://openai.com/index/planning-for-agi-and-beyond/
https://openai.com/index/elon-musk-wanted-an-openai-for-profit/
Not to even mention all the hype created by the techbros around it.
Hardly surprising. LLMs aren’t -thinking-, they’re just shitting out the next token for any given input of tokens.
That’s exactly what thinking is, though.
An LLM is an ordered series of parameterized / weighted nodes which are fed a bunch of tokens, and millions of calculations later it generates the next token to append, then repeats the process. It’s like turning a handle on some complex Babbage-esque machine. LLMs use a tiny bit of randomness (“temperature”) when choosing the next token so the responses are not identical each time.
But it is not thinking. Not even remotely so. It’s a simulacrum. If you want to see this, run ollama with the temperature set to 0, e.g.:
```
ollama run gemma3:4b
>>> /set parameter temperature 0
>>> what is a leaf
```
You will get the same answer every single time.
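A minimal sketch of what temperature does when picking the next token (toy numbers, not ollama’s internals):

```python
import math, random

# Toy next-token distribution: logits for three candidate tokens.
logits = {"green": 2.0, "flat": 1.5, "banana": 0.1}

def next_token(temperature):
    if temperature == 0:
        # Greedy: always the single most likely token, hence the
        # identical answer every run.
        return max(logits, key=logits.get)
    # Otherwise: softmax over temperature-scaled logits, then sample.
    scaled = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(scaled.values())
    return random.choices(list(scaled), weights=[v / total for v in scaled.values()])[0]

print(next_token(0))    # "green", every single time
print(next_token(0.8))  # usually "green", sometimes "flat", rarely "banana"
```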
I know what an LLM is doing. You don’t know what your brain is doing.
You say you produce good oranges but my machine for testing apples gave your oranges a very low score.
No, more like: “Your marketing team, sales team, the news media at large, and random hype men all insist your orange machine works amazingly on any fruit if you know how to use it right. It didn’t work on my strawberries when I gave it all the help I could, and it was outperformed by my 40-year-old strawberry machine. Please stop selling the idea that it works on all fruit.”
This study is specifically a counter to the constant hype that these LLMs will revolutionize absolutely everything, and the constant word choices used in discussion of LLMs that imply they have reasoning capabilities.
this is because an LLM is not made for playing chess
A strange game. How about a nice game of Global Thermonuclear War?
I’ve heard the only way to win is to lock down your shelter and strike first.
Lmao! 🤣 that made me spit!!
Frak off, toaster
No thank you. The only winning move is not to play
JOSHUA
Tbf, the article should probably mention the fact that machine learning programs designed to play chess blow everything else out of the water.
Machine learning has existed for many years, now. The issue is with these funding-hungry new companies taking their LLMs, repackaging them as “AI” and attributing every ML win ever to “AI”.
ML programs designed and trained specifically to identify tumors in medical imaging have become good diagnostic tools. But if you read in news that “AI helps cure cancer”, it makes it sound like it was a lone researcher who spent a few minutes engineering the right prompt for Copilot.
Yes a specifically-designed and finely tuned ML program can now beat the best human chess player, but calling it “AI” and bundling it together with the latest Gemini or Claude iteration’s “reasoning capabilities” is intentionally misleading. That’s why articles like this one are needed. ML is a useful tool but far from the “super-human general intelligence” that is meant to replace half of human workers by the power of wishful prompting
I forgot which airline it was, but one of the onboard games on the back-of-headrest TV was called “Beginners Chess”, and it was notoriously difficult to beat. So it was tested against other chess engines, and it ranked in like the top five most powerful chess engines ever.
It does
It does not. Where?
Yeah, it’s like judging how great a fish is at climbing a tree. But it does show that it’s not real intelligence or reasoning.
Don’t call my fish stupid.
Well, can it climb trees?
LLMs useless, confirmed once again.
LLM are not built for logic.
And yet everybody is selling them to write code.
The last time I checked, coding required logic.
A lot of writing code is relatively standard patterns and variations on them. For most but the really interesting parts, you could probably write a sufficiently detailed description and get an LLM to produce functional code that does the thing.
Basically for a bunch of common structures and use cases, the logic already exists and is well known and replicated by enough people in enough places in enough languages that an LLM can replicate it well enough, like literally anyone else who has ever written anything in that language.
To be fair, a decent chunk of coding is stupid boilerplate/minutia that varies environment to environment, language to language, library to library.
So LLMs can do some code completion: filling out a bunch of boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages like “does this language want join as a method on a list with a string argument, or vice versa?”
Problem is, this can sometimes be more annoying than it’s worth, as miscompletions are distracting.
Fair point.
I liked the “upgraded autocompletion”, you know, a completion based on the context, before they pushed it too far with 20 lines of nonsense…
Now I am thinking of a way of doing the thing, and then I receive a 20-line suggestion.
So I am checking if that makes sense, losing my momentum, only to realize the suggestion is calling shit that doesn’t exist…
Screw that.
The amount of garbage it spits out in autocomplete is distracting. If it’s constantly making me 5-10% less productive the many times it’s wrong, it needs to save me a lot of time when it is right to make up for that, and generally, I haven’t found it able to do that.
Yesterday I tried to prompt it to change around 20 call sites for a function where I had changed the signature. Easy, boring and repetitive, something that a junior could easily do. And all the models were absolutely clueless about it (using copilot)
a decent chunk of coding is stupid boilerplate/minutia that varies
…according to a logic, which means LLMs are bad at it.
I’d say that those details that vary tend not to vary within a language and ecosystem, so a fairly dumb correlative relationship is enough to generally be fine. There’s no way to use logic to infer that in language X you need to do mylist.join(string) but in language Y you need to do string.join(mylist), but it’s super easy to recognize tokens that suggest those things and correlate them with the vocabulary that matches the context.
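Concretely, with the Python/JavaScript flavor of that exact example:

```python
# Python: join is a method on the separator string, taking the list.
print(", ".join(["a", "b", "c"]))  # "a, b, c"

# JavaScript flips the receiver: ["a", "b", "c"].join(", ")
# Correlation with the surrounding tokens is enough to pick the right
# form for each language; no reasoning about "why" is required.
```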
Rinse and repeat for things like: do I need to specify a type, and what is the vocabulary for the best type for this numeric value; this variable is missing a declaration; does this look like a genuinely new variable or just a typo of one that was already declared.
But again, I’m mostly describing what sort of can work; my personal experience is that it’s wrong so often as to be annoying and to get in the way of more traditional completion behaviors that play it safe, though with less help, particularly for languages like Python or JavaScript.
All these comments asking “why don’t they just have chatgpt go and look up the correct answer”.
That’s not how it works, you buffoons, it trains off of datasets long before it releases. It doesn’t think. It doesn’t learn after release, it won’t remember things you try to teach it.
Really lowering my faith in humanity when even the AI skeptics don’t understand that it generates statistical representations of an answer based on answers given in the past.
This made my day
Get your booty on the floor tonight.
I mean, that 2600 Chess was built from the ground up to play a good game of chess with variable difficulty levels. I bet there’s days or games when Fischer couldn’t have beaten it. Just because a thing is old and less capable than the modern world does not mean it’s bad.
Okay, but could ChatGPT be used to vibe code a chess program that beats the Atari 2600?
no.
the answer is always, no.
The answer might be no today, but always seems like a stretch.
Sometimes it seems like most of these AI articles are written by AIs with bad prompts.
Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing about this for over a year, so there’s no need to sensationalize it. Perhaps the human journalist could have spent a little time talking about why LLMs are bad at chess and how researchers are approaching the problem.
LLMs on the other hand, are very good at producing clickbait articles with low information content.
GothamChess has a video of making ChatGPT play chess against Stockfish. Spoiler: ChatGPT does not do well. It plays okay for a few moves, but the moment it gets in trouble it straight up cheats. Telling it to follow the rules of chess doesn’t help.
This sort of gets to the heart of LLM-based “AI”. That one example to me really shows that there’s no actual reasoning happening inside. It’s producing answers that statistically look like answers that might be given based on that input.
For some things it even works. But calling this intelligence is dubious at best.
Hallucinating 100% of the time 👌
ChatGPT versus DeepSeek is hilarious. They both cheat like crazy, and then one side Jedi mind tricks the winner into losing.
So they are both masters of troll chess then?
See: King of the Bridge
I think the biggest problem is its very low capacity for “test-time adaptability”. Even when combined with a reasoning model outputting into its context, the weights do not learn from the immediate context.
I think the solution might be to train a LoRA overlay on the fly against the weights, run inference with that AND the unmodified weights, and then have an overseer model self-evaluate and recompose the raw outputs.
Like humans are way better at answering stuff when it’s a collaboration of more than one person. I suspect the same is true of LLMs.
Like humans are way better at answering stuff when it’s a collaboration of more than one person. I suspect the same is true of LLMs.
It is.
It’s really common for non-language implementations of neural networks. If you have an NN that’s right some percentage of the time, you can often run the same input through a bunch of independently trained copies of the NN and take the average, and that average is correct a higher percentage of the time.
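A toy sketch of that averaging (the probabilities are made up; each row stands in for one independently trained classifier):

```python
import numpy as np

# Each row: one model's class probabilities for the same input.
# Two of the three lean toward class 0, so the average does too,
# even though the third model alone would have picked class 1.
predictions = np.array([
    [0.60, 0.40],
    [0.70, 0.30],
    [0.35, 0.65],
])

ensemble = predictions.mean(axis=0)
print(ensemble)           # [0.55 0.45]
print(ensemble.argmax())  # 0
```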
Aider is an open source AI coding assistant that lets you use one model to plan the coding and a second one to do the actual coding. It works better than doing it in a single pass, even if you assign the same model to planning and coding.
Because it doesn’t have any understanding of the rules of chess or even an internal model of the game state, it just has the text of chess games in its training data and can reproduce the notation, but nothing to prevent it from making illegal moves, trying to move or capture pieces that don’t exist, incorrectly declaring check/checkmate, or any number of nonsensical things.
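For contrast, here’s what an actual internal model of the game state looks like, as a minimal sketch using the python-chess library (the kind of thing a real engine wrapper builds on; the LLM has nothing equivalent):

```python
import chess  # pip install python-chess

board = chess.Board()    # a real internal model of the game state
board.push_san("e4")     # each move updates that state
board.push_san("e5")

move = chess.Move.from_uci("d1h5")  # Qh5
print(move in board.legal_moves)    # True: legality is checked against the state
print(board.is_checkmate())         # False: "checkmate" is computed, not guessed
```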
In this case it’s not even bad prompts, it’s a problem domain ChatGPT wasn’t designed to be good at. It’s like saying modern medicine is clearly bullshit because a doctor loses a basketball game.
I imagine the “author” did something like, “Search http://google.scholar.com/, find a publication where AI failed at something, and write a paragraph about it.”
It’s not even as bad as the article claims.
Atari isn’t great at chess. https://chess.stackexchange.com/questions/24952/how-strong-is-each-level-of-atari-2600s-video-chess
Random LLMs were nearly as good 2 years ago. https://lmsys.org/blog/2023-05-03-arena/
LLMs that are actually trained for chess have done much better. https://arxiv.org/abs/2501.17186

Wouldn’t surprise me if an LLM trained on records of chess moves made good chess moves. I just wouldn’t expect the deployed version of ChatGPT to generate coherent chess moves based on the general text it’s been trained on.
I wouldn’t either but that’s exactly what lmsys.org found.
That blog post had ratings between 858 and 1169. Those are slightly higher than the average rating of human users on popular chess sites. Their latest leaderboard shows them doing even better.
https://lmarena.ai/leaderboard has one of the Gemini models with a rating of 1470. That’s pretty good.
So, it fares as well as the average schmuck, proving it is human
/s