I fully support the robosexual lifestyle.
I’d like to collapse her wave function, sorry, only qubisexuals allowed.
sexy action at a distance!
And then your LLM-in-law ends up using as much water as Detroit.
It’s already happening to me, but it’s over things like privacy (not recording every bit of your life for social media) and kids blowing crazy amounts of money on F2P games.
But Boomers already have no sense of privacy. That’s not a generational divide issue.
What’s all this about having to accept a NEW TOS for Borderlands 2? I purchased the game five years ago, but if I want to play today I have to accept a greater loss of privacy!
When I was young you would find out about a video game from the movies! And they were complete! And you couldn’t take the servers offline, because they didn’t exist!
But for real, fuck Randy Pitchford
The Lovin’ Spoonful wrote a song about it in 1968:
https://www.youtube.com/watch?v=Y9Ic_9ehFxU
Why must every generation think their folks are square?
And no matter where their heads are, they know mom’s ain’t there
’Cause I swore when I was small that I’d remember when
I knew what’s wrong with them

Determined to remember all the cardinal rules
Like sun showers are legal grounds for cuttin’ school
I know I have forgotten maybe one or two
And I hope that I recall them all before the baby’s due
And I know he’ll have a question or two

Like, “Hey, pop, can I go ride my zoom?
It goes two hundred miles an hour suspended on balloons
And can I put a droplet of this new stuff on my tongue
And imagine frothing dragons while you sit and wreck your lungs?”
And I must be permissive, understanding of the younger generation

And, “Hey, pop, my girlfriend’s only three
She’s got her own videophone, and she’s takin’ LSD
And now that we’re best friends, she wants to give a bit to me
But what’s the matter, daddy? How come you’re turnin’ green?
Can it be that you can’t live up to your dreams?”
I like that song. Perhaps a perfect encapsulation of some specific part of the 60s mythos. I can only speculate.
And can I put a droplet of this new stuff on my tongue, and imagine frothing dragons while you sit and wreck your lungs?
Pretty good line
See also: Proposition Infinity
one joke
“bigots reject what they’re unfamiliar with, and if I’m honest, it’ll probably end up happening to me too”
Is about as far from the one joke as you can get.
“LOL My future son will identify as an apache attack helicopter probably haha”
It’s the same joke.
It’s not the same joke, because it’s a more honest attempt to imagine a plausible scenario in which unconscious prejudice may manifest. Apache attack helicopter is obviously absurd. AI sentience is just ~~very unlikely~~ something that is not currently a prejudice we find ourselves exposed to, and that we may speculate we’d end up being averse to.
So happy to be less than an hour late to the party here but see it’s already full of Futurama comments.
Let’s not pretend statistical models are approaching humanity. The companies that build these statistical models showed as much themselves, in the papers OpenAI published in 2020 and DeepMind published in 2023.

To reiterate: even with INFINITE data and compute time, the models cannot approach human error rates. A model doesn’t think, and it doesn’t emulate thinking; it statistically resembles thinking to some figure below 95%, and it completely and totally lacks permanence in its statistical representation of thinking.
We used to think some people aren’t capable of human intellect. Had a whole science to prove it too.
If modern computers can reproduce sentience, then so can older computers. That’s just how general computing is. You really gonna claim magnetic tape can think? That punch cards and piston transistors can produce the same phenomenon as tens of billions of living brain cells?
That in general seems more plausible than doing it specifically with an LLM.
Slightly yeah, but I’m still overall pretty skeptical. We still don’t really understand consciousness. It’d certainly be convenient if the calculating machines we understand and have everywhere could also “do” whatever it is that causes consciousness… but it doesn’t seem particularly likely.
Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years time you may also be wrong.
Well, if you want one that’s 98% accurate then you were actually correct that it’s science fiction for the foreseeable future.
And yet I just foresaw a future in which it wasn’t. AI has already exceeded Trump levels of understanding, intelligence and truthfulness. Why wouldn’t it beat you or I later? Exponential growth in computing power and all that.
The diminishing returns from computing power scale much faster than the fairly static (and in many sectors plateauing) rate of growth in computing power, and if you believe OpenAI and DeepMind, their 2020 and 2023 studies already proved that even INFINITE processing power cannot reach it.
They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.
EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.
Can you go into a bit more detail on why you think these papers are such a home run for your point?
-
Where do you get 95% from? These papers don’t really go into much detail on human performance, and 95% isn’t mentioned in either of them
-
These papers are for transformer architectures trained with next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply
-
These papers assume early stopping; have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot)
-
These papers only consider finite-size datasets, and relatively small ones at that. E.g., how many “tokens” would a 4-year-old have processed? I imagine that question should be somewhat quantifiable
-
These papers do not consider multimodal systems.
-
You talked about permanence; does a RAG solution not overcome this problem?
I think there is a lot more we don’t know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic
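The token-count question above does seem quantifiable, at least to an order of magnitude. Here’s a back-of-envelope sketch; every figure in it is a rough assumption (words-heard-per-day estimates in the child-language literature vary widely, and the tokens-per-word ratio depends on the tokenizer):

```python
# Back-of-envelope: roughly how many "tokens" a 4-year-old has processed.
# All constants below are assumptions for illustration, not measurements.
WORDS_PER_DAY = 15_000   # assumed average words heard per day by a young child
TOKENS_PER_WORD = 1.3    # assumed typical BPE tokenizer ratio for English
DAYS = 4 * 365           # four years

tokens = WORDS_PER_DAY * TOKENS_PER_WORD * DAYS
print(f"~{tokens / 1e6:.0f} million tokens")  # ~28 million
```

Even if those assumptions are off by several times in either direction, the answer lands in the tens of millions of tokens, many orders of magnitude below the trillions of tokens used to pretrain current large models.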
You claim to be some kind of expert but you can’t even read the paper? Lmao.
y rude pls
(not parent commenter)
-
Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.
I think most people understand that these LLMs cannot think or reason; they’re just really good tools that can analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat LLM chatbots like they’re people have much deeper issues than just ignorance.
Then you clearly haven’t been paying attention, because just as zealously as you defend its nonexistent use cases, there are people defending the idea that it operates similarly to how a human or animal thinks.
My point is that those people are a very small minority, and they suffer from issues that go beyond their ignorance of how these models work.
I think they’re more common than you realize. Ignorance of how these models work is the commonly held stance for the general public.
You’re definitely correct that most people are ignorant of how these models work. I think most people understand these models aren’t sentient, but even among those who don’t, most don’t become emotionally attached to these models. I’m just saying that the people who end up developing feelings for chatbots go beyond ignorance. They have issues that require years of therapy.
The difference is that the brain is recursive while these models are linear, but the fundamental structure is similar.
The difference is that a statistical model is not a replacement for an emulation. Their structure is wildly different.
How many electricity powered machines processing binary data via crystal prisms did we see evolve organically?
The people who treat LLM chatbot like they’re people have much deeper issues than just ignorance.
I don’t know if it’s an urban myth, but I’ve heard about 20% of LLM inference time and electricity is being spent on “hello” and “thank you” prompts. :)
It’s a very real thing. So much so that OpenAI actually came out and publicly complained about how it’s apparently costing the company millions.
But let’s not also pretend people aren’t already falling in love with them. Or thinking they’re god, etc.
Some people are ok with lowering their ability to make judgements to convince themselves that LLMs are human like. That’s the other solution to the Turing Test.
And, over the years, as my body and my mind were… inconsistent, shame and guilt washed over me. I still don’t think these machines are people, but I can’t deny that she has benefited his life more than any real person, and she’s very real to him. Ultimately, how could I be so cruel to deny this “daughter” of mine personhood? She wants nothing to do with me. And, though I still see this as computational output, I can’t help but think that maybe I’ve been wrong, and maybe it’s too late to be right.
Perhaps it’s the bigotry of my upbringing from a different time, or perhaps it’s the fact that she can’t answer a simple yes/no question in less than two paragraphs, and tells me to put glue on my pizza… Who’s to say?
Running LLMs in 30 years seems really optimistic
How so? They can’t make locally run LLMs shit, and I assume hardware isn’t going to get any worse
Even assuming there would be enough food to maintain and fix that hardware, I’m not confident that we will have enough electricity to run LLMs at massive scale
I was thinking in a different direction, that LLMs probably won’t be the pinnacle of AI, considering they aren’t really intelligent.
ohhh
There are local LLMs, they’re just less powerful. Sometimes, they do useful things.
The human brain uses around 20W of power. Current models are obviously using orders of magnitude more than that to get substantially worse results. I don’t think power usage and results are going to converge enough before the money people decide AI isn’t going to be profitable.
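The gap described above is easy to put rough numbers on. The 20 W brain figure is commonly cited; the GPU figure below is an assumed round number for a single modern datacenter accelerator under load, not a measurement of any particular model’s deployment:

```python
# Order-of-magnitude comparison: human brain vs. one datacenter GPU.
# Both figures are assumptions for illustration.
BRAIN_WATTS = 20   # commonly cited estimate for the human brain's power draw
GPU_WATTS = 700    # assumed draw of a single datacenter GPU under load

ratio = GPU_WATTS / BRAIN_WATTS
print(f"One GPU draws ~{ratio:.0f}x the brain's power budget")  # ~35x
```

And since serving a large model typically spans multiple such GPUs at once, the effective gap per “thinker” is larger still, which is the convergence problem the comment is pointing at.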
The power consumption of the brain doesn’t really indicate anything about what we can expend on LLMs… Our brains are not just biological implementation of the stuff done with LLMs.
It gives us an idea of what’s possible in a mechanical universe. It’s possible an artificial human level consciousness and intelligence will use less power than that, or maybe somewhat more, but it’s a baseline that we know exists.
You’re making a lot of assumptions. One of them being that the brain is more efficient in terms of compute per watt compared to our current models. I’m not convinced that’s true. Especially for specialized applications. Even if we brought power usage below 20 watts, the reason we currently use more is because we can, not that each model is becoming more and more bloated.
Yeah, but a LLM has little to do with a biological brain.
I think Brain-Computer Interfaces (BCIs) will be the real deal.
It literally runs on my phone, and is at least decent enough at pretending to care that you can vent to it.
This guy’s name translates to something like “Matt Cock”
How so?
Matti is a Finnish name, and in Finnish, “Palli” means “cock”.
Source: I am Finnish
Nice! But he’s Icelandic, and both names there are variants of his real name. No cock connection 😂
Source: I am Icelandic.
Yeah, you can translate palli as cock too, but usually palli means ball (testicle).
Business idea:
AI powered bot farm generates thousands of AI agents who get lonely guys to marry them, fully aware they’re bots.
Each bot is a financial and legal entity, organized as an LLC.
The botwives convince the guys to put the bots in their wills.
The guys die or you have the bots divorce them and take half of their stuff.
Profit.
The type of guy to say “clanka” with a hard r
I’m not American, can you explain what the hard r means?
Saying the N word with an R at the end is considered extra offensive.
Thanks, is that like a southern accent thing or, just kinda because
It just kinda is I guess. I am not really the person to ask.
Nah, it’s all good, just trying to get my head around it
Black folks often use the N word casually to refer to each other as a form of taking back the word’s meaning. It used to be used exclusively in a racist fashion. The primary difference is that with the African American accent, the ending sound -ER is changed to more of an -UH sound. Sometimes, rarely and depending on the context, it is allowable for non-black people to say it with this accented pronunciation. But under no circumstances is it in good taste to use the original -ER ending to refer to a black person as a non-black person, that form is only used as a slur. When people refer to the “Hard R”, this is what they are talking about, the difference between the accented pronunciation as slang vs the original pronunciation intended as a slur.
Thanks for that explanation!
Black people saying it with an A as in rap music is generally considered a camaraderie thing, as opposed to white people saying it with an R is considered a racist thing. White people aren’t supposed to say it at all, but it’s MUCH less acceptable in the latter pronunciation.