One big difference that I’ve noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.
For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that for a time while my Rust code is compiling, I will be maxing out all my CPU cores at 100% usage.
When this happens on Windows, I’ve never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I’ve never really thought about it.
However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle to get a little bit of CPU that’s left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.
I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).
Is this an inherent problem of Linux at the moment or can I do something to improve this? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap and it doesn’t seem to make a difference.
EDIT: Tried nice -n +19, still lags my other programs.
EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it’s placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it’s placebo. But anyways, I tried compiling again and it still lags my other stuff.
The System76 scheduler helps to tune for better desktop responsiveness under high load: https://github.com/pop-os/system76-scheduler I think if you use Pop!OS this may be set up out-of-the-box.
I distro hop occasionally but always find myself coming back to popos. There are so many quality of life improvements that seem small but make all the difference.
Just use -j, --jobs: https://doc.rust-lang.org/cargo/commands/cargo-build.html#miscellaneous-options

The CPU is already 100% busy, so changing the number of compilation jobs won't help; the CPU can't go faster than 100%.
I actually tried that but I had to reduce it all the way to 4 jobs, which slows compilation down a lot.
Ha, that’s funny. When I run some Visual Studio builds on Windows it completely freezes at times.
Never have that issue on EOS with KDE.
Found this for your problem of limiting one specific program such as the Rust compiler: https://askubuntu.com/questions/1367612/how-can-i-limit-the-cpu-and-ram-usage-for-a-process
OP most likely wants the opposite for the compiler…
I don’t really want to limit the Rust compiler. If I leave my computer running while I take a break, I don’t want it to artificially throttle the compiler. I just want user input and responsiveness of open windows to take priority over the compiler.
If it’s really maxing out, then nothing is going to make it feel faster. If I had to guess, Windows caps things somehow so they don’t fully max out.
If I had to guess, Windows caps things somehow so they don’t fully max out.
Well, no, Windows does the same thing where it goes to 100% usage on all the cores. It’s just that on Windows, if you start interacting with for instance your Firefox window, Windows will take some of the time that would have been allocated to the compiler and give it to Firefox instead, in a way that ensures that no lag or stuttering is experienced. It’s still at 100% CPU usage, it’s just about how that 100% is distributed.
You could try using nice to give the rust compiler less priority (higher number) for scheduling.
This seems too complicated if I need to do that for other programs as well.
You can just alias to do this in the programs you do use
Sure, the first time you won’t have this enabled, but after that it just works.
The Linux kernel uses CFS as its default CPU scheduler, which tries to be fair to all processes at the same time - both foreground and background - for high throughput. Abstractly, think "it never knows what you intend to do", so it's sort of middle of the road as a default - every process gets a fair tick of CPU time unless it's been intentionally nice'd or whatnot. People who need realtime work (the classic use is audio engineers who need near-zero latency on hardware inputs like a MIDI sequencer, but embedded hardware also uses realtime a lot) reconfigure their systems to that need; for desktop-priority users there are ways to alter the CFS scheduler to help maintain desktop responsiveness.

Have a look at GitHub projects such as this one to learn how and what to tweak - not that you necessarily need to use this, but it's a good starting point for understanding how the mojo works and what you can do on your own with a few sysctl tweaks to get a better desktop experience while your Rust code compiles in the background: https://github.com/igo95862/cfs-zen-tweaks (in this project you're looking at the set-cfs-zen-tweaks.sh file and what it's tweaking in /proc, so you can get hints on where your research should lead - most of these can be set with sysctl).

There's a lot to learn about this, so I hope this gets you started down the right path on searches for more information to find the exact solution/recipe which works for you.
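One concrete, low-risk knob worth knowing about in this area (assuming your kernel was built with CONFIG_SCHED_AUTOGROUP, which Ubuntu's generic kernels are) is scheduler autogrouping: it groups each terminal session's processes together, so a 16-job build in one shell competes as a single unit against your browser instead of as 16 tasks:

```shell
# Check whether autogrouping is enabled (1 = on, 0 = off):
cat /proc/sys/kernel/sched_autogroup_enabled

# Turn it on at runtime (needs root); to make it permanent, put
# "kernel.sched_autogroup_enabled = 1" in a file under /etc/sysctl.d/.
sudo sysctl -w kernel.sched_autogroup_enabled=1
```

It's often already enabled on desktop distros, so check the current value before assuming it's the fix.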
The Linux kernel uses the CPU default scheduler, CFS,
Linux 6.6 (which recently landed on Debian) changed the scheduler to EEVDF, which is pretty widely criticized for poor tuning. Also, the CPU is 100% busy, which means the scheduler is doing a good job. If the CPU were idle and compilation was slow, then we would look into task scheduling and scheduling of blocking operations.
I'd say nice alone is a good place to start, without delving into the scheduler rabbit hole…

I would agree, and would bring awareness of ionice into the conversation for the readers - it can help control I/O priority to your block devices in the case of write-heavy workloads, possibly compiler artifacts etc.
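For instance, the two combine in one command (a sketch; assumes util-linux's ionice, and note that the idle I/O class is honored by schedulers like BFQ but may be a no-op with mq-deadline or none):

```shell
# Run the build at minimum CPU priority and in the "idle" I/O class,
# so it only gets disk time when no other process is asking for it.
ionice -c 3 nice -n 19 cargo build
```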
“they never know what you intend to do”
I feel like if Linux wants to be a serious desktop OS contender, this needs to “just work” without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.
One of my biggest frustrations with Linux. You are right. If I have something that works out of the box on windows but requires hours of research on Linux to get working correctly, it’s not an incentive to learn the complexities of Linux, it’s an incentive to ditch it. I’m a hobbyist when it comes to Linux but I also have work to do. I can’t be constantly ducking around with the OS when I have things to build.
100% agree. Desktop should always be a strong priority for the cpu.
Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit’s already going wrong.
Totally agree. I've been in the situation where a remote host is 100%-ing, and when I want to ssh into it to figure out why and possibly fix it, I can't because ssh is unresponsive! That leaves only one way out: a hard reboot, and hope I didn't lose data.
This is a fundamental issue in Linux, it needs a scheduler from this century.
You should look into IPMI console access, that’s usually the real ‘only way out of this’
SSH has a lot of complexity but it’s still the happy path with a lot of dependencies that can get in your way- is it waiting to do a reverse dns lookup on your IP? Trying to read files like your auth key from a saturated or failing disk? syncing logs?
With that said, I am surprised people are having responsiveness issues under full load. Are you sure you weren't running out of memory and relying heavily on swap?
What do you even mean as serious contender? I’ve been using Linux for almost 15 years without an issue on CPU, and I’ve used it almost only on very modest machines. I feel we’re not getting your whole story here.
On the other hand whenever I had to do something IO intensive on windows it would always crawl in these machines
You are getting the whole story - not sure what it is you think is missing. But I mean a serious desktop contender has to take UX seriously and have things “just work” without any custom configuration or tweaking or hacking around. Currently when I compile on Windows my browser and other programs “just works” while on Linux, the other stuff is choppy and laggy.
I see what you mean but I feel like it’s more on the distro mainters to set niceness and prioritize the UI while under load.
Wasn't CFS replaced in 6.6 with EEVDF?
I have 6.6 on my desktop, and I guess compilations don't freeze my media anymore, though I have little experience with it as of now; needs more testing.
It sounds like the issue is that the Rust compiler uses 100% of your CPU capacity. Is there any command line option for it that throttles the amount of cpu it will use? This doesn’t sound like an issue that you should be tackling at the OS level. Maybe you could wrap the compiler in a docker container and use resource constraints?
Why is that a problem? You’d want a compiler to be as fast as possible.
nice would be way easier to use than a container…

It sounds like the issue is that the Rust compiler uses 100% of your CPU capacity.
No, I definitely want it to use as many resources it can get. I just want the desktop and the windows I interact with to have priority over the compiler, so that the compiler doesn’t steal CPU time from those programs.
No, I definitely want it to use as many resources it can get.
taskset -c 0 nice -n+5 bash -c 'while :; do :; done' &
taskset -c 0 nice -n+0 bash -c 'while :; do :; done'

Observe the CPU usage of the nice +5 job: it's ~1/10 of the nice +0 job. End one of the tasks and the remaining one jumps back to 100%.

Nice'ing doesn't limit the max allowed CPU bandwidth of a task; it only matters when there is contention for that bandwidth, like running two tasks on the same CPU thread. To me, this sounds like exactly what you want: run at full tilt when there is no contention.
Sure but that’s not what the person I replied to suggested.
I face a similar issue when updating Steam games, although I think that's related to disk read/write.

But either way, issues like these are gonna need to be addressed before we finally hit the year of the Linux desktop lol
Lots of bad answers here. Obviously the kernel should schedule the UI to be responsive even under high load. That’s doable; just prioritise running those over batch jobs. That’s a perfectly valid demand to have on your system.
This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other, who’s to say it should have more priority than Rust?
I’ve also run into this problem. I never found a solution for this, but I think one of those fancy new schedulers might work, or at least is worth a shot. I’d appreciate hearing about it if it does work for you!
Hopefully in a while there are separate desktop-oriented schedulers for the desktop distros (and ideally also better OOM handlers), but that seems to be a few years away maybe.
In the short term you may have some success in adjusting the priority of Rust with nice, an incomprehensibly named tool to adjust the priority of your processes. High numbers = low priority (the task is “nicer” to the system). You run it like this:
nice -n5 cargo build

Obviously the kernel should schedule the UI to be responsive even under high load.
Obviously… to you.
This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,
Exactly.
I meant, obviously in the sense that Windows and macOS both apparently already do this and that it’s a desirable property to have, not that it’s technically easy.
Obviously… to you.
No. I’m sorry but if you are logged in with a desktop environment, obviously the UI of that desktop needs to stay responsive at all times, also under heavy load. If you don’t care about such a basic requirement, you could run the system without a desktop or you could tweak it yourself. But the default should be that a desktop is prioritized and input from users is responded to as quickly as possible.
This whole “Linux shouldn’t assume anything”-attitude is not helpful. It harms Linux’s potential as a replacement for Windows and macOS and also just harms its UX. Linux cannot ever truly replace Windows and macOS if it doesn’t start thinking about these basic UX guarantees, like a responsive desktop.
This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,
Exactly.
You say that like it’s a good thing; it is not. The desktop is not a program like any other, it is much more important that the desktop keeps being responsive than most other programs in the general case. Of course, you should have the ability to customize that but for the default and the general case, desktop responsiveness needs to be prioritized.
Even for a server, the UI should always be priority. If you’re not using the desktop/UI, what’s the harm?
When you do need to remote into a box, it’s often when shit’s already sideways, and having an unresponsive UI (or even a sluggish shell) gets old.
A person interacting with a system needs priority.
Yep, the CPU scheduler is the correct answer. I'd recommend reading the Arch wiki page on it: https://wiki.archlinux.org/title/improving_performance
My work windoze machine clogged up quite a lot recompiling large projects (GBs of C/C++ code), so I set it to use 19/20 "cores". Worked okayish but was not a snappy experience IMO (64GB RAM & SSD).
What desktop?
Wooden IKEA one.
nice +5 cargo build
nice is a program that sets priorities for the CPU scheduler. Default is 0. Goes from -20, which is max prio, to +19 which is min prio.
This way other programs will get CPU time before cargo/rustc.
So the better approach would be to spawn all desktop and base GUI things with nice -18 or something?

No. This will wreak havoc. At most -1, but I'd advise against that. Just spawn the lesser-prioritised programs with a positive value.
Could you elaborate?
Critical operating system tasks run at -19. If they don’t get priority it will create all kinds of problems. Audio often runs below 0 as well, at perhaps -2, so music doesn’t stutter under load. Stuff like that.
Ok, nice. Do you know what priority other processes are spawned with by default?
Default is 0. Also, processes inherit the priority of their parent.
This is another reason why starting the desktop environment as a whole with a different prio won’t work: the compiler is started as a child of the editor or shell which is a child of the DE so it will also have the changed prio.
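Because of that inheritance, you can also go the other way: instead of restarting things under nice, lower the priority of a build that's already running. A sketch (the process name rustc is an assumption based on a cargo build; adjust for whatever compiler you're running):

```shell
# Find the running compiler processes and push them to minimum priority.
# Lowering the priority of your own processes never needs root;
# only raising it back up does.
renice -n 19 -p $(pgrep -d ' ' -x rustc)
```

New rustc processes spawned by cargo afterwards won't be covered, so for a long build it's still cleaner to nice the cargo invocation itself.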
Damn… thanks, that's complicated.
It’s more of a workaround than a solution. I don’t want to have to do this for every intensive program I run. The desktop should just be responsive without any configuration.
Yes, this is a bad solution. No program should have that privilege, it needs to be an allowlist and not a blocklist.
You could give your compiler a lower priority instead of upping everything else.
I’d still need to lower the priority of my C++ compiler or whatever else intensive stuff I’d be running. I would like a general solution, not a patch just for running my Rust compiler.
How do you expect the system to know what program is important to you and which isn’t?
The Windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution. This results in a small but significant performance degradation. If you want your system to perform worse overall, you can achieve this by setting the default process time slice value very low - don't come back complaining if your builds suddenly take 10-20% longer though.
The correct solution is for you to tell the system what’s important and what is not so it can do what you want properly.
You might like to configure and use the auto nice deamon: https://and.sourceforge.net/
How do you expect the system to know what program is important to you and which isn’t?
Hmm
The windows solution is to switch tasks very often and to do a lot of accounting to ensure fair distribution.
Sounds like you have a good idea already!
This hasn't been my experience when no swapping is involved (not a concern for me anymore with 32GiB physical RAM and 28GiB zram).
And I’ve been Rusting since v1.0, and Linuxing for even longer.
And my setup is boring (and stable), using Arch's LTS kernel which is built with CONFIG_HZ=300. Long gone are the days of running linux-ck.

Although I do use the cranelift backend day to day now, so compiles don't take too long anyway.
P.S. Since it wasn't mentioned already, look up cgroups.

Back when I had a humble laptop (pre-Rust), using nice and co. didn't help much. Custom schedulers come with their own stability and worst-case-scenario baggage. cgroups should give you supported and well-tested tunable kernel-level resource usage control.
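On a systemd-based distro like Kubuntu, the easiest way to try cgroups without touching /sys/fs/cgroup by hand is a transient scope (a sketch; CPUWeight=20 is an arbitrary example value - the default weight is 100, and weight only matters under contention, so the build still gets the whole machine when it's otherwise idle):

```shell
# Run the build in its own cgroup with a low CPU weight (cgroup v2).
systemd-run --user --scope -p CPUWeight=20 -- cargo build
```

This is closer to what the OP asked for than nice: the compiler runs flat out when nothing else wants CPU, but yields its share as soon as the desktop does.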