The problem is not so much that programs and websites are badly written in terms of algorithms, but rather latency. The latency of loading things from storage, sometimes over the internet, is the real bottleneck and the reason things feel so slow.
Even with modern SSDs, things sometimes feel slower than they were on HDDs running period-accurate software. Windows 7 was snappy even on an HDD; Windows 11 is slow and sluggish everywhere.
Had to install (an old one mind you, 2019) Visual Studio on Windows…
…
…
First it’s like 30GB, what the hell?? It’s an advanced text editor with a compiler and some …
Crashed a little less than what I remember 🥴😁
First it’s like 30GB, what the hell??
Just be grateful it’s SSD and not RAM.
Visual Studio is the IDE. VS Code is the text editor.
OP was clearly using a rhetorical reduction to make a point that VS is bloated.
Visual Code is another project; Visual Studio is indeed an IDE, but it integrates it all. VS Code is also an integrated development environment. I don’t really know what more to say.
VS Code is considered a highly extensible text editor that can be used as an IDE, especially for web-based tools, but it isn’t an IDE. It’s more comparable to Neovim or Emacs than to IntelliJ in terms of the role it’s supposed to fill. Technically, anyway. VS Code definitely is used more as an IDE by most people, and those people are weak imo. I’m not one to shill for companies (I promise this isn’t astroturf), but if you need to write code, JetBrains probably has the best IDE for that language. Not always true, but more often than not it is imo.
Ooh, a flame war 🔥🔥🔥 ! It has been so long since I was involved in one, thank you 🙋🏻♀️! 😊
Who uses Visual Code for anything other than writing and launching code? I only use it for C#/Godot on Linux, but it has all the bells and whistles to make it an IDE IMO (BTW anyone who doesn’t code in C/C++ is weak ofc ☺️! 🔥).
Let me just add that JetBrains (at least PyCharm) has started their enshittification death cycle, and I’m looking for a lightweight Python IDE that doesn’t hallucinate (but lets you use venv and debug). If you have any ideas I’m listening!
Cheers
I wanna clarify that when I say VS Code I’m talking about Visual Studio Code. I was only commenting on the difference between Visual Studio and Visual Studio Code because you said you downloaded Visual Studio and were confused why a text editor was 30GB, and it’s possible you downloaded the IDE rather than the text editor. I apologize if you thought I was talking about Visual Code; I wasn’t.
And I agree that JetBrains has started to enshittify, but I also think their enshittification has been pretty slow because they sell professional tools that still have to perform the basic functionality of an IDE. And for the most part I’ve been able to disable all AI features save the ones I’m required to use at work (yay AI usage metrics ;-;)
For data science, Spyder is good. Otherwise I also use pyzo as a lightweight IDE.
Will check out, thanks!
I really dislike the framing of this.
Yes, the average software runs much less efficiently. But is efficiency what the user wants? No. It is not.
How many people will tell you that they stick to windows instead of switching to linux because linux is all terminal? And terminal is quicker, more efficient for most things. But the user wants a gui.
And if we compare modern GUIs to old GUIs… I don’t think modern is 15x worse.
There isn’t anything fundamentally slower about using a GUI vs just text in a console. There’s more to draw but it scales linearly. The drawing things on the screen part isn’t the slow bit for slow programs. Well, it can be if it’s coded inefficiently, but there are plenty of programs with GUIs that are snappy… Like games, which generally draw even more complex things than your average GUI app.
Slow apps are more likely because of an inefficient framework (like running in a web browser with heavy reliance on scripts rather than native code), inefficient algorithms that scale poorly, poor resource use, bad organization that results in doing the same operation more times than necessary, etc.
The terminal is quicker. Not because the image is drawn more quickly, but because it is more efficient to do anything.
Can you elaborate on that? I disagree but would like to understand why you think that. Maybe you’re referring to something I wouldn’t disagree with.
E.g. from the terminal, I open a known file far more quickly than through a GUI. Even if I want to use a GUI for the file, issuing the opening command is quicker in the terminal.
GUIs often require the user to scan the interface to find the relevant information, since the developer didn’t know what you are actually searching for.
With a terminal, the user can be much more precise about what they’re seeking; consequently, less information is provided and less information needs to be scanned by the user.
The average user doesn’t want to remember and type a specific phrase to do something though. Even if it is “faster” and more “efficient”, the user wants to be guided towards the information. The user wants a good user experience, not a fast/efficient one.
Pretty and guided, that is what the average user wants. Modern software is pretty and guided, not efficient and fast. Yes, developers became lazy about optimisation and like to use some big framework to save dev time. But the user also wanted it that way, by wanting pretty GUIs, because that is easier with the big frameworks.
Ah, that’s efficiency of use and depends more on how familiar you are with the software as well as the design and task. Like editing an image or video is going to be a lot easier with a gui than a command line interface (other than generating slop I guess).
When people talk about how efficient software is, it’s usually referring more to the amount of resources it uses (including time) to run its processes.
Eg an electron app is running a browser that is manipulating and rendering html elements running JavaScript (or other scripts/semi-compiled code). There is an interpreter that needs to process whatever code it is to do the manipulation and then an html renderer to turn that into an image to display on the screen. The interpreter and renderer run as machine code on the CPU, interacting with the window manager and the kernel.
A native app doesn’t bother with the interpreter and html renderer and itself runs as machine code on the CPU and interacts with the window manager and kernel. This saves a bunch of memory, since there isn’t an intermediate html state that needs to be stored, and time by cutting out the interpreter and html render steps.
I know. That is why I started my statement by saying that I don’t like the framing. It treats “efficiency” as the point of software, as the thing we should care about when judging software.
But it isn’t. It is user experience. And yes, efficiency is part of that: both efficiency in execution and efficiency of use.
And the user experience has improved a lot (ignoring intentional anti-patterns to exploit the user, which are fairly common, but I think we can agree to ignore that for the sake of the conversation).
Technically true, but there’s a threshold on responsiveness. If both user interfaces respond in milliseconds, it doesn’t matter if one is more efficient
It does, because it highlights that instead of being excited to “have to use the terminal” because it is more “efficient”, they prefer the “slower”, prettier GUI. The user wants the stupid animations and the flashy nonsense. The user doesn’t want quick software. They want pretty software.
But the user wants a gui.
Firstly, plenty of Linux instances have GUI. I installed Mint precisely because I wanted to keep the Windows/Mac desktop experience I was familiar with. GUIs add latency, sure. But we’ve had smooth GUI experiences since Apple’s 1980s OS. This isn’t the primary load on the system.
Secondly, as the Windows OS tries to do more and more online interfacing, the bottleneck that used to be CPU or open Memory or even Graphics is increasingly internet latency. Even just going to the start menu means making calls out online. Querying your local file system has built-in calls to OneDrive. Your system usage is being constantly polled and tracked and monitored as part of the Microsoft initiative to feed their AI platforms. And because all of these off-platform calls create external vulnerabilities, the (abhorrently designed) antivirus and firewall systems are constantly getting invoked to protect you from the online traffic you didn’t ask for.
It’s a black hole of bloatware.
TVs became SmartTVs and now need the internet to turn on. TVs now need an OS and an internet connection just to do TV.
Antennae broadcast TV seems like an ancient magic.
We’ve deprecated a lot of the old TV/radio signal bandwidth in order to convert it to cellphone signal service.
But, on the flip side, digital antennae can hold a lot more information than the old analog signals. So now I’ve got a TV with a mini-antennae that gets 500 channels (virtually none of which I watch). My toddler son has figured out how to flip the channel to the continuous broadcast of Baby Einstein videos. And he periodically hijacks the TV for that purpose, when we leave the remote where he can reach.
So there’s at least one person I can name who likes the current state of affairs.
I always have to remind myself being able to stream audio from a cellphone while driving across a city is also a pretty crazy development.
I am not saying linux is terminal. I am saying that people tell you that linux is all terminal and that they want a gui.
Linux gui is much prettier than Windows anyway.
I still remember playing StarCraft 2 shortly after release on a $300 laptop, and it ran perfectly well on medium settings.
Looked amazing. Felt incredibly responsive. Polished. Optimized.
Nowadays it’s RTX this, framegen that, need SSD or loading times are abysmal, oh and don’t forget that you need 40gb of storage and 32gb of ram for a 3 hour long walking simulator, how about you optimize your goddamn game instead? Don’t even get me started on price tags for these things.
Software and game development is definitely a spectrum though, but holy shit is the ratio of sloppy releases so disproportionate that it’s hard to see it at times.
StarCraft 2 was released in 2010, and a quick search indicates the most common screen resolution was still 1024x768 that year. That feels about right, anyway. A bit under a million pixels to render.
A modern 4K monitor has a bit over eight million pixels, slightly more than ten times as much. So you’d expect the textures and models to be about ten times the size. But modern games don’t just have ‘colour textures’, they’re likely to have specular, normal and parallax ones too, so that’s another three times. The voice acting isn’t likely to be in a single language any more either, so there’ll be several copies of all the sound files.
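A quick sanity check of the pixel arithmetic above (a sketch, assuming 1024x768 then and 3840x2160 now):

```python
# Quick check of the scaling argument: pixels then vs. now.
old = 1024 * 768    # ~0.79 million pixels
uhd = 3840 * 2160   # ~8.3 million pixels on a 4K monitor

ratio = uhd / old
print(f"pixel ratio: {ratio:.1f}x")  # ~10.5x, "slightly more than ten times"

# Four texture maps (colour + specular + normal + parallax) multiply
# that again, giving a naive upper bound on asset growth:
print(f"naive asset growth: {ratio * 4:.0f}x")
```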
A clean Starcraft 2 install is a bit over 20 GB. ‘Biggest’ game I have is Baldur’s Gate 3, which is about 140 GB, so really just about seven times as big. That’s quite good, considering how much game that is!
I do agree with you. I can’t think of a single useful feature that’s been added to eg. MS Office since Office 97, say, and that version is so tiny and fast compared to the modern abomination. (In fact, in a lot of ways it’s worse - has had some functionality removed and not replaced.) And modern AAA games do focus too much on shiny and not enough on gameplay, but the fact that they take a lot more resources is more to do with our computers being expected to do a lot more.
Excel is sooo much better than it used to be in Office 97. And it’s way better than any other spreadsheet software I’ve tried.
Speaking of, anyone know of any alternative that handles named tables the same as Excel? Built-in filtering/sorting and formulas that can address the table itself instead of a cell range?? Please?
SQL?
Seriously. If you are talking about querying tables, Excel is the wrong tool to use. You need to be looking at SQL.
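To make the suggestion concrete, here’s a minimal sketch of the named-table workflow in SQL, via Python’s built-in sqlite3 (the table and columns here are made up for illustration):

```python
import sqlite3

# Hypothetical example of Excel-style named-table filtering/sorting,
# expressed as SQL: formulas address the table, not a cell range.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE expenses (item TEXT, category TEXT, cost REAL)")
con.executemany(
    "INSERT INTO expenses VALUES (?, ?, ?)",
    [("keyboard", "hardware", 80.0),
     ("IDE licence", "software", 120.0),
     ("mouse", "hardware", 25.0)],
)

# Built-in filtering and sorting, by name rather than by range:
rows = con.execute(
    "SELECT item, cost FROM expenses "
    "WHERE category = 'hardware' ORDER BY cost DESC"
).fetchall()
print(rows)  # [('keyboard', 80.0), ('mouse', 25.0)]
```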
I’ve been hosting grist for a while and it is quite nice. Wasn’t able to move all the stuff from classic spreadsheets though
I’ll check that out, thanks!
Why are you comparing the most common screen resolution in 2007 to a 4k monitor today? 4k isn’t the most common today. This isn’t a fair comparison.
1080p is still the most common, though 1440p is catching up very fast.
BTW the demand for bigger screens and bigger resolutions is something I don’t easily understand. I notice some difference between 1366x768 and 1920x1080 on a desktop, but the difference from further increase is of so little use for me I’d classify it as a form of bloat. If anything, I now habitually switch to downloading 480p and 720p instead of higher definition by default because it saves me traffic and battery power, and fits much more on a single disk easy to back up.
the main thing I noticed with a 768p monitor was gnome being unusable thanks to their poor ui density
Pixel density is more important than resolution. Higher resolution is only useful outside of design work if the screen size matches
IMO the ideal resolutions for computer monitors are 24" @ 1080p, 27" @ 2k, and 32"+ @ 4k+. For TVs it’s heavily dependent on viewer distance. I can’t tell the difference between 2k and 4k on my 55" TV from the couch.
‘Biggest’ game I have is Baldur’s Gate 3, which is about 140 GB, so really just about seven times as big. That’s quite good, considering how much game that is!
Not at all. For example, Rimworld saves all the map and world data in one big XML (which is bad btw, don’t do that): about 2 million lines @75 MB, for a 30-pawns mid-game colony.
So you see, data is not what uses space. What uses space is not properly re-using objects/textures (so-called “assets”), or even copying and repacking the same assets per level/map, because that saves dev time.
Ark Survival Evolved, with “only” about a 100 GB requirement, was known as an unoptimized mess back then.
Witcher 3 mod “HD Reworked Next-Gen” has barely 20 GB with 4k textures and high-res meshes. And you can’t say that Witcher 3 is not a vibrant and big open world game.
Absolutely. Every time I play a game from before 2016 or so it runs butter smooth and looks even better than modern games in many cases. I don’t know what we’re doing nowadays.
Then the Factorio dev blog comes in and spends months optimizing the tick of one broken gear on the conveyor belt to slightly improve efficiency.
Tbf, there are saves where that efficiency increase means a lot.
Comparing a 20 year old game with FMV sequences at 1080p is certainly a take 🤣.
deleted by creator
PCs aren’t faster, they have more cores, so they can do more at a time, but it takes effort to optimize for parallel work. Also the form factor keeps getting smaller, more people use laptops now and you can’t cheat thermal efficiency.
It’s all about memory latency and bandwidth now, which have improved greatly; PCs are still getting faster. There’s a new RAM standard being pushed right now, CAMM2, which is really exciting; it pushes back the need for soldered memory.
They often are worse, because everything needed to be an electron app, so they could hire the cheaper web developers for it, and also can boast about “instant cross platform support” even if they don’t release Linux versions.
Qt and GTK could do cross platform support, but not data collection, for big data purposes.
There’s no difference whatsoever between qt or gtk and electron for data collection. You can add networking to your application in any of those frameworks.
I don’t know why Electron has to use up so much memory though. It seems to use however much RAM is currently available when it boots; the more RAM a system has, the more Electron seems to think it needs.
Chromium is basically Tyrone Biggums asking if y’all got any more of that RAM, so bundling that into Electron is gonna lead to the same behavior.
Ib4 “uNusEd RAm iS wAStEd RaM!”
No, unused RAM keeps my PC running fast. I remember the days where accidentally hitting the windows key while in a game meant waiting a minute for it to swap the desktop pages in, only to have to swap the game pages back when you immediately click back into it, expecting it to either crash your computer or probably disconnect from whatever server you were connected to. Fuck that shit.
I mean unused RAM is still wasted: You’d want all the things cached in RAM already so they’re ready to go.
I mean, I have access to a computer with a terabyte of RAM. I’m gonna go ahead and say that most applications aren’t going to need that much, and if they use that much I’m gonna be cross.
Wellll
If you have a terabyte of RAM sitting around doing literally nothing, it’s kinda being wasted. If you’re actually using it for whatever application can make good use of it, which I’m assuming is some heavy-duty scientific computation or running full size AI models or something, then it’s no longer being wasted.
And yes if your calculator uses the entire terabyte, that’s also memory being wasted obviously.
That’s a different definition of wasted though. The RAM isn’t lost just because it isn’t currently being utilised. It’s sitting there waiting for me to open an intensive task.
What I am objecting to is programs using more RAM than they need simply because it’s currently available. Aka chromium.
I don’t want my PC wasting resources trying to guess every possible next action I might take. Even I don’t know for sure what games I’ll play tonight.
Well you’d want your OS to cache the start menu in the scenario you highlighted above. The game could also run better if it can cache assets not currently in use instead of waiting for the last moment to load them. Etc.
Yeah, for things that will likely be used, caching is good. I just have a problem with the “memory is free, so find more stuff to cache to fill it” or “we have gigabytes of RAM so it doesn’t matter how memory-efficient any program I write is”.
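For what it’s worth, “caching things that will likely be used” usually means a bounded cache: hot results stay ready without the cache growing to fill whatever RAM happens to be free. A minimal Python sketch (the asset loader is a hypothetical stand-in):

```python
from functools import lru_cache

# Bounded caching: keep hot results ready without growing to fill
# all available memory. The maxsize here is an arbitrary choice.
@lru_cache(maxsize=256)
def load_asset(name: str) -> bytes:
    # Stand-in for an expensive load (disk read, decode, etc.)
    return name.encode() * 1000

load_asset("grass_texture")  # slow path: computed and cached
load_asset("grass_texture")  # fast path: served from cache
print(load_asset.cache_info())  # 1 hit, 1 miss
```

Once 256 distinct assets are cached, the least recently used one is evicted, which is the opposite of the “memory is free, so fill it” approach.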
“memory is free, so find more stuff to cache to fill it”
As long as it’s being used responsibly and freed when necessary, I don’t have a problem with this
“we have gigabytes of RAM so it doesn’t matter how memory-efficient any program I write is”
On anything running on the end user’s hardware, this I DO have a problem with.
I have no problem with a simple backend REST API being built on Spring Boot and requiring a damn gigabyte just to provide a /status endpoint or whatever. Because it runs on one or a few machines, controlled by the company developing it usually.
When a simple desktop application uses over a gigabyte because of shitty UI frameworks being used, I start having a problem with it, because that’s a gigabyte used per every single end user, and end users are more numerous than servers AND they expect their devices to do multiple things, rather than running just one application.
Dunno this paradox theory, but the impression I get is that when you’re part of the process, it’s harder to notice changes. But if putting the two devices side by side, trying to run the same systems, programs, etc., the difference is glaring. And from tests I did, if the software doesn’t work on either of the devices, slapping a VM on the newer one to test older programs still tells quite a lot.
You do really feel this when you’re using old hardware.
I have an iPad that’s maybe a decade old at this point. I’m using it for the exact same things I was a decade ago, except that I can barely use the web browser. I don’t know if it’s the browser or the pages or both, but most web sites are unbearably slow, and some simply don’t work, javascript hangs and some elements simply never load. The device is too old to get OS updates, which means I can’t update some of the apps. But, that’s a good thing because those old apps are still very responsive. The apps I can update are getting slower and slower all the time.
It’s the pages. It’s all the JavaScript. And especially the HTML5 stuff. The amount of code that is executed in a webpage these days is staggering. And JS isn’t exactly a computationally modest language.
Of the 200 kB loaded on a typical Wikipedia page, about 85 kB of it is JS and CSS.
Another 45kB for a single SVG, which in complex cases is a computationally nontrivial image format.
I don’t agree. It’s both. I’ve opened basic no JS sites on old tablets to test them out and even those pages BARELY load
What caused the latency in that case?
Probably just the browser itself, considering how bloated they’re getting. It’s not super surprising, considering the apps run about as fast (on a good day) as it did 5-10 years ago on a new phone, it’s gonna run like dogshit on a phone from that era.
I can’t update YouTube on my iPad 2, which I got running again for the first time in years. It said it had been ~70,000 hours since the last full charge. I wanted to use it to watch videos when I’m going to bed. But I can’t actually log in to YouTube because the app is so old, and I seemingly can’t update it.
I was using the web browser and yeah I don’t remember it being so damn slow. It’s crazy how that is.
Is your iPad on iOS 9.3.5? It is infamously slow.
It is possible to downgrade it to 8.4.1 (faster, partially more broken) or even 6.1.3 (fast and old school, many apps don’t work, but there are apps in Cydia to fix stuff).
Biggest issue I encountered is sites requiring TLSv1.3 for HTTPS encryption, and those old browsers simply don’t support it.
I have an old YouTube app on my iPad, and it still works fine. One of the more responsive apps on the device. I get nagged nearly every time I use it to update to the newest YouTube release, but that’s impossible. I’d first have to upgrade my OS, and Apple no longer releases new OSes for this generation of iPads. So, I’m stuck with an old YouTube, which mostly works fine, and an occasional nag message.
I’m sure within a year or two mine will be like yours and YouTube will simply no longer work. But, for now it’s in a relatively good spot where I can use a version of YouTube designed for this particular hardware that doesn’t feel sluggish.
Websites are probably a better example, as the complexity and bloat have increased faster than the tech.
oh, yes, somebody made this a long time ago, in response to the performance of new webpages https://motherfuckingwebsite.com/
I love it
Well yeah, why would I learn html when I can learn React?!?
(/s, but I actually did learn React before I had a grasp of semantic HTML, because my company needed React devs and only paid for React-specific education)
I feel like this is Windows specific. Linux is rapid on PCs and my MacBook is absurdly quick.
Mint Xfce on my 2015 laptop compared to its previous system was the difference between usable and waiting 10 minutes for it to even boot, and things like gaming, VMs, comically large spreadsheets (surprisingly the memory hog), etc., were an eternal challenge on it. On my current laptop, I have the luxury of picking the systems by aesthetics and non-optimization functions instead. And to compare, I’ve run even the same updates on the two laptops, as the older one still works.
App launch time can be annoyingly slow on mac if you’re not offline or blocking the server it phones home to
it can be the difference between one bounce or seven bounces of the icon on my end
What apps out of interest? I’m a new Mac owner, so limited experience, but everything seems insanely quick so far. Even something like Xcode is a one-bounce on this M4 Air.
All of them. The device has to phone home to apple to ask permission to run them.
To test: close the app (really quit it, make sure the dot under the icon isn’t glowing), then open it and measure the time.
Then close the app, disconnect from the internet, and launch it again.
The speed difference depends on how overloaded Apple’s servers are.
Anyone opening the app menu (from the dock or Home Screen) on an iPad will tell you that it’s not exclusive to windows pcs.
And Android
PC games are software.
Unfortunately many PC games are also like this: astoundingly poorly optimized, and they just assume everyone has a $750 GPU.
Proton can only do so much.
… and Metal basically can’t do that much.
Look at Metal Gear Solid 5 or TitanFall 2, and tell me realtime video game graphics have dramatically increased in visual fidelity in the last decade.
They haven’t really.
They shifted to a poorly optimized, more expensive paradigm for literally everyone involved; publisher, developer, player.
Everything relating to realtime raytracing and temporal antialiasing is essentially a scam, in the vast majority of actual implementations of it.
I guess the counter argument for games is load times have dramatically improved, though that’s less about software development than hardware improvements.
If we put consoles in the same bracket as computers, the literally instant quick-resume feature on an Xbox (for example) feels like sci-fi.
Yeah, you kinda defeated your own argument there, but you do seem to recognize that.
You can instant resume on a Steam Deck, basically.
You can alt tab on a PC, at least with a stable game that is well made and not memory leaking.
Yeah, better RAM / SSDs does mean lower loading times, higher streaming speeds/bus bandwidths, but literally, at what cost?
You could just actually take the time to optimize things and find clever, less computationally expensive ways to do them, instead of just throwing more/faster RAM at it.
RAM and SSD costs per gig are going up now.
Moore’s Law is not only dead, it has inverted.
Constantly cheaper memory going forward turned out not to be the best assumption to make.
With respect to OP’s post, they say “you can’t even tell the computers we are on are 15x faster…”, and I reckon that quick resume etc, is an example of “you absolutely can tell that we now have extremely fast hardware” when compared to what came before, irrespective of the quality of the software.
I’m not disagreeing with you, I’m just picking apart the blanket “computers feel the same as they did a decade ago”. Some computers might feel the same, and a lot of software might be unoptimised, but there’s a good selection of examples where that’s not the case.
For my home PC, sure. Running some windows apps on my Linux machine in wine is a little weird and sluggish. Discord is very oddly sluggish for known reasons. Proton is fine tho.
But for my work? Nah. My M3 MacBook Pro is a beast compared to even the last Intel MacBook. Battery is way better unless you’re like me and constantly running a front end UI for a single local service. But without that, it can last hours. My old one could only last 2 meetings before it started dying.
Apple put inadequate coolers in the later Intel Macbooks to make Apple Silicon feel faster by contrast. When I wake mine, loading the clock takes 1.5 seconds, and it flips back and forth between recognizing and not recognizing key presses in the password field for 12 seconds. Meanwhile, the Thinkpad T400 (running Arch, btw) that I had back in 2010 could boot in 8.5 seconds, and not have a blinking cursor that would ignore key presses.
Apple has done pretty well, but they aren’t immune from the performance massacre happening across the industry.
The battery life is really good, though. I get 10-14 hours without trying to save battery life, which is easily enough to not worry about whether I have a way to charge for a day.
The program expands so as to fill the resources available for its execution
– C.N. Parkinson (if he were alive today)
This entire thread is a perfect example of the paradox folks keep mentioning:
Nobody in either 🧵 pointed out that Ocean used Mastodon to post the banter with.
Plenty more optimized federated slop software on the market. I am also on Jabber, if it means anything to Zoomies.
I’m pretty sure the “unused RAM is wasted RAM” thing has caused its share of damage from shit developers who took it to mean use memory with reckless abandon.
Would be nice if I could force programs to use more RAM though. I actually have 100GB of DDR4 in my desktop; I bought it over a year ago when DDR4 was unloved and cheap. And I have tried to force programs not to offload as much. Like Firefox: I hate that I have the RAM but it’s still unloading webpages in the background and won’t ever use more than 6GB.
I actually have 100GB of DDR4
They’ve got RAM! Get’em!
Programs that care about memory optimization will typically adapt to your setup, up to a point. More RAM isn’t going to make a program run any better if it has no use for it.
Set swappiness to 5 or something similar, or disable swap altogether unless you’re regularly getting close to max usage
RAM disk is your friend.
Will disabling the swap file fix that?
If not, just mount your swap file in RAM lmao
Don’t fully disable swap on Windows, it can break things :-/
I didn’t know that, that used to not be the case.
Maybe it has changed again, but in the past I gave it a try. When 16 GB was a lot. Then when 32 GB was a lot. I always thought “Not filling up the RAM anyway, might as well disable it!”
Yeah, no, Windows is not a fan. Like you get random “running out of memory” errors, even though with 16 GB I still had 3-4 GB free RAM available.
Some apps require the page file, same as crash dumps. So I just set it to a fixed value (like 32 GB min + max) on my 64 GB machine.
In most cases, you either optimize the memory, or you optimize the speed of execution.
Having more memory means we can optimize the speed of execution.
Now, the side effect is that we can also afford to be slower to gain other benefits: ease of development (enter JavaScript everywhere, or Python) at the cost of speed, maintainability at the cost of speed, etc…
So, even though you don’t always see performance gains as the years go by, that doesn’t mean shit devs; it means the priority is somewhere else. We have more complex software today than 20 years ago because we can afford not to focus on RAM and speed optimization, and instead focus on maintainable, unoptimized code that does complex stuff.
Optimization is not everything.
unoptimized code that does complex stuff.
You can still have complex code that is optimized for performance. You can spend more resources to do more complex computations and still be optimized so long as you’re not wasting processing power on pointless stuff.
For example, in some of my code I have to get a physics model within 0.001°. I don’t use that step size every loop, because that’d be stupid and wasteful. I start iterating with 1° until it overshoots the target, back off, reduce the step to 1/10, and loop through that logic until I get my result with the desired accuracy.
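That coarse-to-fine stepping can be sketched like this (a toy Python version; the actual physics model is replaced with a stand-in function):

```python
def find_crossing(f, target, start=0.0, step=1.0, tol=0.001):
    """Coarse-to-fine search, as described above: step until f
    overshoots the target, back off one step, then repeat with a
    10x smaller step until the desired accuracy is reached."""
    x = start
    while step >= tol:
        while f(x) <= target:  # advance until we overshoot
            x += step
        x -= step              # back off to the last good point
        step /= 10             # refine the step size
    return x

# Toy stand-in for the physics model (the real one is not shown):
# find where a*a crosses 1000, i.e. sqrt(1000) ~= 31.623.
angle = find_crossing(lambda a: a * a, 1000.0)
print(angle)
```

This evaluates f a few dozen times instead of the tens of thousands a flat 0.001° sweep would need.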
Of course! But sometimes, most often even, the optimization is not worth the development to get it. We’re particularly talking about memory optimization here, and it is so cheap (or at least it was… ha) that it is not worth optimizing like we used to 25 years ago. Instead you use higher level languages with garbage collection or equivalents that are easier to maintain with and faster to implement new stuff with. You use algorithms that consume a fuck ton of memory for speed improvements. And as long as it is fast enough, you shouldn’t over optimize.
Proper optimization these days is more of a hobby.
Now obviously some fields require a lot more optimization - embedded systems, for instance. Or simulations, which get a lot of value from being optimized as much as possible.
With 32 and 64 GB systems I’ve never run out of RAM, so the RAM isn’t the issue at all.
Optimization just sucks.
Have you ever tried running a decent sized LLM locally?
Decent sized for what?
Creative writing and roleplay? Plenty, but I try to fit it into my 16 GB VRAM as otherwise it’s too slow for my liking.
Coding/complex tasks? No, that would need 128GB and upwards, and it would still be awfully slow. Unless you use a Mac with unified memory.
For image and video generation you’d want to fit it into GPU VRAM again, system RAM would be way too slow.
I use a Mac with unified memory, so that distinction slipped my mind.
The same? Try worse. Input latency has gone up on most devices, and most applications also take longer to respond once the input arrives.
Switching from an old system with old UI to a new system sometimes feels like molasses.
I work in support for a SaaS product and every single click on the platform takes a noticeable amount of time. I don’t understand why anyone is paying any amount of money for this product. I have the FOSS equivalent of our software in a test VM and it’s far more responsive.
I want to avoid building react native apps.
Except for KDE. At least compared to cinnamon, I find KDE much more responsive.
AI-generated code will make things worse. LLMs are good at providing solutions that generally give the correct output, but the code they generate tends to be shit when judged as production code.
Though perhaps performance will improve since at least the AI isn’t limited by only knowing JavaScript.
I still have no idea what it is, but over time my computer, which has KDE on it, gets super slow and I HAVE to restart. Even if I close all applications it’s still slow.
It’s one reason I’ve been considering upgrading from 6 cores and 32 GB to 16 and 64.
Have you tried disabling the file indexing service? I think it’s called Baloo?
Usually it doesn’t have too much overhead, but in combination with certain workflows it could be a bottleneck.
An upgrade isn’t likely to help. If KDE is struggling on 6 cores/32 GB, you have something else going on, and 16/64 is only going to make it last twice as long before choking.
Wait till it’s slow.
Check your RAM/CPU in top and the disk in iotop; hammering the disk/CPU (or a bad disk/SSD) can make KDE feel slow.
plasmashell --replace # this just dumps plasmashell’s widgets/panels
See if you got a lot of RAM/CPU back or it’s running well; if so, it might be a bad widget or panel.
if it’s still slow,
kwin_x11 --replace
or
kwin_wayland --replace &
This dumps everything and refreshes the graphics driver/compositor/window manager
If that makes it better, you’re likely looking at a graphics driver issue
I’ve seen some stuff where going to sleep and coming out degrades perf
Hmm, I haven’t noticed high CPU usage, but usually it only leaves me around 500MB actually free RAM, basically the entire rest of it is either in use or cache (often about 15 gigs for cache). Turning on the 64 gig swapfile usually still leaves me with close to no free RAM.
I’ll see if it’s slow already when I get home, I restarted yesterday. Then I’ll try the tricks you suggested. For all I know maybe it’s not even KDE itself.
Root and home are on separate NVMe drives and there’s a SATA SSD for misc non-system stuff.
GPU is nvidia 3060ti with latest proprietary drivers.
The PC does not sleep at all.
To be fair I also want to upgrade to speed up Rust compilation when working on side projects and because I often have to store 40-50 gigs in tmpfs and would prefer it to be entirely in RAM so it’s faster to both write and read.
Don’t let me stop you from upgrading, that’s got loads of upsides. Just suspecting you still have something else to fix before you’ll really get to use it :)
It CAN be OK to have very low free RAM if it’s used up by buffers/cache (which is freeable). If buff/cache gets below about 3GB on most systems, you’ll start to struggle.
If you have 16GB, it’s running low, and you can’t account for it in top, you have something leaking somewhere.
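One way to account for where the RAM actually is: on Linux, MemAvailable in /proc/meminfo already discounts reclaimable cache, unlike MemFree, so it’s the number to watch. A small Python sketch (the values in `sample` are made-up for illustration, not real readings):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of values in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])  # first field is kB
    return info

# Illustrative excerpt; on a real box, read open("/proc/meminfo").read()
sample = """\
MemTotal:       32768000 kB
MemFree:          512000 kB
MemAvailable:   16384000 kB
Buffers:         1024000 kB
Cached:         14848000 kB
"""

mem = parse_meminfo(sample)
# Low MemFree with high MemAvailable just means the cache is doing its job
print(mem["MemAvailable"] // 1024, "MiB available")
```

If MemAvailable is low and top can’t account for the usage, that’s when you start suspecting a leak.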
Lol I sorted top by memory usage and realized I’m using 12 gigs on an LLM I was playing around with to get local code completion in my JetBrains IDE. It didn’t work all that well anyway and I forgot to disable it.
I did have similar issues before this too, but I imagine blowing 12 gigs on an LLM must’ve exacerbated things. I’m wondering how long I can go now before I’m starting to run out of memory again. Though I was still sitting at 7 gigs buffer/cache and it hadn’t slowed down yet.
12 out of 16? That’ll do it. Hopefully that’s all. Good luck out there and happy KDE’ing!
I’ve seen some stuff where going to sleep and coming out degrades perf
I’ll have to try some of these suggestions myself, as I’ve been dealing with my UI locking up if the monitors turn off and I wake it up too soon. Sometimes I still have ssh access to it, so thanks for the shell commands!
I was doing horrible things the other day and ended up with my KDE login page not working when I came out of sleep.
CTRL+ALT+F2 > text login > loginctl unlock-sessions
I’m aware of the TUI logins (I think f7 is your graphical, but I might be wrong) and sometimes those work too. I’ve started just sshing in because the terminal switching was hit and miss.
But thanks for that loginctl command, I’ll have to give that one a try as well!
F7 is generally right, some distros change it up (nixos is 3)
Have you gone through settings and disabled unnecessary effects, indexing and such? With default settings it can get quite slow but with some small changes it becomes very snappy.
I have not, but also it’s not slow immediately, it takes time under use to get slow. Fresh boot is quite fast. And then once it’s slow, even if I close my IDE, browsers and everything, it remains slow, even if CPU usage is really low and there’s theoretically plenty of memory that could be freed easily.
Have you tried disabling all local Trojans and seeing if that helps?
I switched to Durex, seems to be faster now, thanks!
I have a 2 core, 2 thread, 4gb RAM 3855u Chromebook that I installed Plasma on, and it’s usually pretty responsive.