It’s given me an idea of how we get there. Clearly, modern LLMs aren’t anywhere near the level seen in movies, but we will get there. We will move on from LLMs to a more adaptive kind of model within a few years, as we further increase our understanding of AI and neural networks.
I see modern LLMs as task tools: they can interpret our requests and pass them on to a more intelligent type of model, which will save the newer AIs processing power.
People in this thread seem to have a lot of bias; they can’t see how the tech will evolve. You need to keep an open mind and look at where the tech is being developed. With AI, that will be new architectures.
Their bias is a direct response to the rhetoric from the ‘leaders’ of the AI industry, who have collected billions of dollars and turned it into BS expectations.
I now consider it stupid and destructive to treat AIs as having emotions just because they act human.
In other words, the bad guys in Blade Runner were right all along
Who’re the bad guys in Blade Runner? The giant corporation that creates human-like entities only to enslave them?
Blade Runner’s a bit different since the replicants are flesh and blood, just not naturally born.
No, but actually studying Artificial Intelligence a decade ago in college did.
We had language models back then, too; they just weren’t as good.
We calculated back in the 70s that the algorithm LLMs run on would only get us so far. We’ve nearly reached that point. Related article that basically covers it all: https://venturebeat.com/ai/llms-are-stuck-on-a-problem-from-the-70s-but-are-still-worth-using-heres-why
So basically, no change in my view. Still waiting for my cyborg buddy.
Hate is a strong word… I feel like humans and machines coexist a little too well in the movies, except when the lack of coexistence IS the plot.
I keep thinking our AI will lead us to something like the Eloi of the Time Machine, and the Morlocks will be the machines that run everything.
In sci-fi, AI devices (like self-driving cars or ships, or androids) seem like an integrated unit where any controls or sensors they have are like human limbs and senses. The AI “wills” the engine to start. I always imagined AI would be like a single organism where neurons are connected directly to the body.
Given the development of LLMs and how they are used, it now seems more likely that AI will be an additional “smart layer” on top of the dumb machinery, with actions performed by emitting tokens/commands (“raise arm 35 degrees”) that are sent to APIs. The interaction will be indirect, in the way that we control the TV with the remote.
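Roughly what that smart-layer setup could look like; a minimal sketch where the ArmController class, the llm_plan stand-in, and the JSON command format are all hypothetical, not any real robot API:

```python
import json

# Hypothetical "dumb machinery" API: it only understands low-level commands.
class ArmController:
    def raise_arm(self, degrees: float) -> None:
        print(f"[hardware] raising arm {degrees} degrees")

def llm_plan(request: str) -> str:
    """Stand-in for an LLM call; imagine the model emitting a JSON command."""
    # A real system would call a model here; this is a canned response.
    return json.dumps({"action": "raise_arm", "degrees": 35})

def dispatch(command_json: str, arm: ArmController) -> None:
    """The 'smart layer' never touches the motors directly;
    it just emits tokens that the API layer translates into actions."""
    cmd = json.loads(command_json)
    if cmd["action"] == "raise_arm":
        arm.raise_arm(cmd["degrees"])

dispatch(llm_plan("wave at the camera"), ArmController())
```

The point is that the model only ever emits text; the translation into motor movement happens in ordinary glue code, like a remote sending button codes to the TV.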
I think we might actually get Star Wars style droids in our lifetimes.
Real-life LLMs have shown me the potential for the world to be just as miserable and dystopian as in a lot of sci-fi, but also that, if this is where we are now, maybe most sci-fi doesn’t take it far enough. People will stop thinking for themselves, rely on AI for everything, and blindly believe what it tells them.
Not at all, just as Boston Dynamics’ Atlas didn’t change how I viewed RoboCop.
Text generators just have very little in common with intelligent, autonomous artificial entities.
My favorite character is a robot, and while she sometimes sounds like an LLM, she’s much more than that. She actually learns how humans are, and it’s beautiful, and I love her.
It hasn’t. I don’t know what an LLM is.
Then how can you be sure you haven’t been influenced?
It stands for Large Language Model, and that’s what ChatGPT, Gemini, Grok, etc. are. They are all LLMs. They are also called ‘AI’ (Artificial Intelligence), but they are not at all intelligent; they just match patterns and produce one token (roughly one word) at a time, like a very complex version of the autocomplete on a phone keyboard.
They very often get facts wrong, but they are designed to sound confident and knowledgeable even when completely incorrect, which is a problem because humans tend to assume honesty.
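To make the autocomplete comparison concrete, here’s roughly what the generation loop looks like; a minimal sketch using the open GPT-2 model from the transformers library, with greedy pick-the-most-likely-token decoding instead of the sampling real chatbots use:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The robot slowly", return_tensors="pt").input_ids
for _ in range(10):
    logits = model(ids).logits          # scores for every token in the vocabulary
    next_id = logits[0, -1].argmax()    # greedily pick the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and repeat

print(tok.decode(ids[0]))
```

Each pass through the loop appends exactly one token; there is no separate reasoning step hiding anywhere else.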
They’ve made fictional AI seem that much more far-fetched.
Obviously, we all learn by imitation and instruction, but LLMs have shown that’s only part of the puzzle.
I think LLMs could provide a human-friendly interface for robots. There’s a lot of interesting work happening with embodied AI now, and in my opinion embodiment is the key ingredient for making AI intelligent in a human sense. A robot has to interact with the environment, and it builds an internal model of the world for making decisions. This creates a feedback loop where the robot can learn the rules of the world and interact with it meaningfully, and that’s precisely what’s missing with LLMs.
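Roughly the loop I mean, as a toy sketch; every class and method here (WorldModel, robot.sense(), robot.act()) is invented for illustration, not a real robotics API:

```python
class WorldModel:
    """A toy internal model of the world, refined from observations."""
    def __init__(self):
        self.state = {}

    def update(self, observation: dict) -> None:
        # Fold the latest sensor reading into the internal picture of the world.
        self.state.update(observation)

def choose_action(model: WorldModel) -> str:
    # Pick an action based on what the internal model currently believes.
    return "move_forward" if model.state.get("obstacle_distance", 1.0) > 0.5 else "turn_left"

def control_loop(robot, model: WorldModel, steps: int = 100) -> None:
    for _ in range(steps):
        obs = robot.sense()             # perceive
        model.update(obs)               # refine the internal world model
        action = choose_action(model)   # decide using the model
        robot.act(action)               # act, which changes the next observation
```

The interesting part is that acting changes the next observation, which is exactly the feedback an LLM trained on static text never gets.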
So an LLM with real-time learning/updating?
This doesn’t really answer the question, but I was reading an Asimov short story the other day, “Belief”, and it felt like he’d hit the nail on the head such a long time ago.