Oh this makes me want to get back on Facebook and shitpost at all the conspiracy theorists.
Is this the trojan horse V2?!?!


I think the word you’re looking for is “delusional”


"Because I fucking hate my privacy, and Lemmy and other FOSS media platforms is like veggies on my dinner plate – I don’t want it.
I want people to know when I get my first boner, when I inevitably kick the bucket, and when i announce I got a new position (while users on the platform give context that I was hard the entire interview process). Because why celebrate with family and friends when I got the whole internets asshole comments to read and respond to."
This is my delusional interpretation of why users don’t join Lemmy.


I want to believe you, but the people at my school are abusing it a lot, to the point where they just feed an entire assignment through ChatGPT and it gives them a solution.
The only time I’ve seen it not fully work is my skip list implementation. I asked an LLM to implement a skip list with insert, delete, and get functionality. What it gave me traversed the list like a standard linked list: it had no concept of the time complexity a skip list is supposed to buy you, so every operation is just a plain O(n) walk. It works, but it never actually “skips” nodes. I wonder how many students are shitting their pants when they realize the runtime isn’t any better than a standard linked list.
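For anyone curious what the skipping is supposed to look like, here’s a bare-bones Python sketch I’m including for illustration (my own toy version, not the assignment code; delete is left out to keep it short). The whole trick is that search starts at the top level and only drops down when it would overshoot, which is what gets you O(log n) on average instead of a flat O(n) walk:

```python
import random

MAX_LEVEL = 16  # cap on tower height; plenty for a classroom-sized list
P = 0.5         # chance of promoting a node one more level up

class SkipNode:
    def __init__(self, key, value, level):
        self.key, self.value = key, value
        self.forward = [None] * level   # one forward pointer per level

class SkipList:
    def __init__(self):
        self.head = SkipNode(None, None, MAX_LEVEL)  # sentinel spanning all levels
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < P and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def get(self, key):
        node = self.head
        # Start at the highest level and drop down: this is the "skipping".
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node.value if node and node.key == key else None

    def insert(self, key, value):
        update = [self.head] * MAX_LEVEL   # last node visited on each level
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, value, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new
```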


No, my intention wasn’t to undermine the value of a degree. I’m saying most people’s priority in getting a degree, more specifically an engineering degree, is just to have a paycheck. On a more related note, there are a lot of “engineering majors” at my uni who use artificial intelligence to code and don’t actually enjoy the process of learning.
So yeah, at the current rate of adoption of generative AI at my school, a pool boy could do what most of the sophomore engineers do.


You’d be shocked, then, by how many students at my university (in engineering, btw) simply take the assignment at hand, put it into ChatGPT, and submit it.


Oh trust me, everything is predictable… at least in the US it is. Just look at OpenAI: they can somehow keep manipulating the stock market in their favor simply by saying something about “AI advancements.”


No, just no. x is the variable for depth, y is the variable for width, and z is height. I learned that from multivariable calculus; no other convention is better.


Fuck you for showing me this, I’m now going to gouge my eyes out.


What does a Jewish president have to do with fascism?


If you were wondering whether ChatGPT can do a sum of minterms, here’s one I derived on the fly. I’ve attached screenshots of the conversation we had…
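For reference, the derivation itself is completely mechanical; here’s a tiny Python sketch of it (the three-variable minterm list below is a made-up example, not the one from my screenshots). Each minterm index is just the row of the truth table whose output is 1:

```python
from itertools import product

def truth_table(num_vars, minterms):
    """Truth table for f = sum of the given minterm indices.

    Row i is the binary expansion of i over num_vars inputs (MSB first);
    the output column is 1 exactly when i is in the minterm list.
    """
    minset = set(minterms)
    return [(*bits, int(i in minset))
            for i, bits in enumerate(product((0, 1), repeat=num_vars))]

# Example: f(A, B, C) = Σm(1, 3, 5, 6)
for row in truth_table(3, [1, 3, 5, 6]):
    print(row)
```

Point being, there’s nothing to “reason” about here; it’s a straight table fill-in.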


I was just telling a friend of mine why I don’t use AI. My hatred towards AI stems from people making it seem sentient, from these companies’ business models, and of course from privacy.
First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it’s incapable of creating thoughts outside of the data it was trained on. Technically speaking, an LLM is a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. And when it “decompresses,” it doesn’t reproduce the entire petabytes of information; it reconstructs a response from whatever it was trained on.
There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn’t have everything: your skip list codebase is probably not going to match anything that’s out there. So say you have a logic error in your skip list implementation and you ask ChatGPT “what’s the issue with my codebase?” It will notice the code you provided isn’t what it was trained on and will actively try to “fix” it, digging you into a deeper rabbit hole than when you began the implementation.
On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will never be correct unless the circuit is heavily documented (e.g. the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they’ve been trained on. It will try to produce a solution, but it will always fail.
This leads me to the first reason I refuse to use LLMs: they unintentionally fabricate a lot of information and treat it as if it’s true. When I started using ChatGPT to fix my codebases or to do problems like this, it cast a lot of doubt on the knowledge and intelligence I’ve built up over these past years in college.
The second reason I don’t like LLMs is the business model of these companies. To reiterate, these tech billionaires build a bubble of delusion and fearmongering to keep their user base around. They can run headlines like “ChatGPT-5 is terrifying” or “OpenAI has fired 70,000 employees over AI improvements” because people see the title and pour more money into the company, and because so many employees have their heads up these tech giants’ asses they will, of course, keep working with OpenAI. It is a fucking money-making loop for these giants, because of how far up their employers’ asses so many employees are. If I ever get a job offer from OpenAI and accept it, I want my family to put me in a goddamn psych ward; that’s how much I frown on these unethical practices.
I often joke about this with people who don’t believe it, but it’s becoming more and more relevant to this fucked-up mess: if AI companies say they’ve fired X employees over “AI improvements,” why hasn’t that been adopted by defense companies/contractors or other professions in industry? It’s a rhetorical question, but it leads them to a better conclusion than “those X employees were fired because of AI improvements.”


I’ve been using Delta Chat for about a year now, and I’ll say I really do like it compared to Signal.
For one thing, it runs on email encryption (yes, the fucking bedbug of the internet), it’s decentralized, and just recently (on Android) they’ve added calling functionality… fucking phone calls over encrypted email.
I’ll say Signal isn’t any safer (in terms of privacy and security) than WhatsApp, and I had the revelation that no centralized messaging service is really better than WhatsApp, even the proclaimed privacy-focused ones. I have two reasons for this: 1) they have the option to flip a switch and monetize their entire platform, which includes selling data to data brokers and other parties; 2) because the service is centralized, it’s easier for hackers to breach and easier for governments to get user data.
I’m not saying Signal is monetizing their platform, but compared to their decentralized counterparts, they have the option to do so. With Delta Chat, monetizing would mean building a new messaging service from the ground up.
My only complaint is that, since it rides on email encryption, I can’t receive SMS messages, so everyone would have to transition to Delta Chat (at least if you plan to use a chatmail server) to get the same network as before. You can also create an account with your personal email and send messages as regular email.
Well, that’s the thing about infinity: as n → ∞ it becomes all but certain that someone pulls the lever, regardless of any individual’s morality. That’s the dilemma: do you pull the lever, killing one person and ending the experiment, or do you double it and give the next person the opportunity to pull the lever and kill 2^n people? In the end it will happen, assuming there’s an infinite number of people facing the same choice you do.
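Back-of-the-envelope version of that claim, under my own added assumption that each person in the chain pulls the lever independently with some fixed probability p > 0:

```latex
% Assumption (mine): each person pulls independently with fixed probability p > 0.
\Pr[\text{no one has pulled after } n \text{ turns}] = (1 - p)^{n} \longrightarrow 0
\quad \text{as } n \to \infty,
\qquad \text{while a pull on turn } n \text{ kills } 2^{n} \text{ people.}
```

So with infinitely many people, a pull happens almost surely; the only question is how big 2^n has gotten by the time it does.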


Dark matter isn’t matter. I know, shitty name to call something “matter” that isn’t matter; dark matter is a force. The most common place dark matter shows up is in astronomy, where galaxies don’t move the way we calculate they should, hence there’s some external force being applied that we don’t understand and haven’t found a way to take into account. I guess we call it “dark matter” instead of “dark force” because for a force to be applied there has to be some mass. Still, I think it’s illogical to assume dark matter is matter when we don’t know what force is actually being exerted. For all we know it could be the accumulated pull of other galaxies acting on the observed galaxy that we’re simply not taking into account.


Don’t use Firefox, as in, don’t use the official Mozilla release; even that has gone to shit. Pretty much everything has gone to shit in terms of search results and web browsers. I use LibreWolf (a fork of Firefox) on my laptop and IronFox on my phone; they both ship with the privacy hardening enabled by default. No AI generation built into the browser, no Firefox suggestions, no tracking, none of it. I’ve also stopped using standard search engines like Google or DDG and replaced them with Marginalia Search. This combination has let me eliminate AI-generated content and tracking from my browsing experience.


If the proprietary CAD software is only offered on Windows, you could always go with Wine and install the application through that. Wine works well with most Windows applications; only a few of them, like Proton Drive, are bastards to install.
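Rough sketch of what I mean, with placeholder paths (the installer name and prefix here are hypothetical, swap in whatever your CAD vendor ships); giving the app its own Wine prefix keeps it from trampling anything else you run through Wine:

```python
import os
import subprocess

# Placeholder paths: point these at the actual installer you downloaded.
installer = os.path.expanduser("~/Downloads/cad_setup.exe")
prefix = os.path.expanduser("~/.wine-cad")   # dedicated prefix for the CAD app

env = dict(os.environ, WINEPREFIX=prefix)

# Wine creates the prefix on first use, then runs the Windows installer inside it.
subprocess.run(["wine", installer], env=env, check=True)
```

After that, launching the app is the same wine call pointed at the installed .exe inside that prefix.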


To clarify: as in dedicating a billion-dollar defense budget to funneling people’s traffic through government-run entry and exit relays in Tor. But no government has had its head far enough up its ass to do something that batshit crazy.
225 * 2 = 450 ≠ 550. I initially thought it was 550, which makes sense since I was a math major. It’s been 3 years since I actually did any arithmetic; leave me alone.