But afaik it's partly about isolating memory contents from access by other software, e.g. one website shouldn't be able to read your protected account data from another one.
Not really, well kinda... those vulnerabilities are dangerous in multi-user scenarios. Something like: let's say there's a machine with a root user and some unprivileged users (no sudo/root access). By using the vulnerabilities, the unprivileged users can access memory content that does not belong to them, so technically they can steal sensitive content from root, and in most cases they can even get root access. Now just think about providers like AWS EC2 or Google GCP. They run a set of virtual machines (VMs) and containers (maybe a better term here is paravirtualization) on a single physical machine (the hypervisor), so an attacker can use these vulnerabilities to steal data from neighbors or even get access to a neighbor's environment.

Now back to the LibreELEC topic: we have just a single user, aka root. So if somebody steals your credentials and can ssh into LibreELEC as root, that person is already a fully privileged user and doesn't need these vulnerabilities, since full access is already granted.
Anyway, those patches are included in BIOS updates, in the kernel firmware (CPU microcode) and finally in the kernel itself. If you think it's worth it then feel free to disable them; I'll stick to the LE upstream kernel as far as possible.
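If you're curious which of these mitigations your running kernel actually applies, recent kernels expose the status under /sys/devices/system/cpu/vulnerabilities. A minimal sketch for reading it (the sysfs path is standard, the script itself is just my illustration and assumes your kernel is new enough to provide it):

```python
#!/usr/bin/env python3
# Print the kernel's view of known CPU vulnerabilities and their mitigations.
# Assumes a kernel new enough to expose /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

if not VULN_DIR.is_dir():
    print("This kernel does not expose vulnerability status via sysfs.")
else:
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file contains a one-line status such as
        # "Mitigation: PTI", "Not affected" or "Vulnerable".
        status = entry.read_text().strip()
        print(f"{entry.name:25s} {status}")
```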
So let's make it clear: I'm talking about kernel space.
If you pay close enough attention you'll find a switch which disables these patches. Why would you need it? Yeap, because of the performance hit. At this point I don't understand the LE upstream committers: there's no point in using these patches on LibreELEC, they just slow down the system. I have a feeling they are simply too busy with other things or not fully aware of this situation, and the patches are enabled by default for security reasons... which, as mentioned above, is not the case for LibreELEC.
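For reference, the "switch" is a kernel boot parameter: mitigations=off on newer kernels, or the older per-issue flags like nopti / nospectre_v2. A quick sketch (just reading /proc/cmdline; the flag list below is the usual set, adjust for your kernel version) to check what your box was booted with:

```python
#!/usr/bin/env python3
# Check whether speculative-execution mitigations were disabled on the kernel
# command line. The flag names below are the common ones (mitigations=off on
# newer kernels, nopti/nospectre_v2 etc. on older ones); adjust as needed.
DISABLE_FLAGS = {"mitigations=off", "nopti", "pti=off", "nospectre_v2", "spectre_v2=off"}

with open("/proc/cmdline") as f:
    booted_with = set(f.read().split())

found = sorted(DISABLE_FLAGS & booted_with)
if found:
    print("Mitigations (at least partly) disabled via:", ", ".join(found))
else:
    print("No mitigation-disabling flags on the kernel command line.")
```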
And now comes the question! What's the level of regression... for, let's say, Kodi and related libs? This is highly application/code dependent. For MySQL MyISAM the regression is around 40% (just an example), while there are apps where it's only about 5%.
Some more benchmarking can be found here.
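If you want a rough feel for the syscall-heavy end of that range on your own box, a crude microbenchmark like the sketch below (just my illustration, nothing official: it times a tight loop of stat() calls, since the mitigations mostly tax kernel entry/exit) run once with and once without the mitigations should make the difference visible:

```python
#!/usr/bin/env python3
# Crude syscall-heavy microbenchmark: time a tight loop of stat() calls.
# PTI/Spectre mitigations mostly tax kernel entry/exit, so syscall-bound
# loops like this show the regression far more than pure userspace code.
import os
import time

ITERATIONS = 500_000

start = time.perf_counter()
for _ in range(ITERATIONS):
    os.stat("/")  # one cheap syscall per iteration
elapsed = time.perf_counter() - start

print(f"{ITERATIONS} stat() calls in {elapsed:.3f}s "
      f"({ITERATIONS / elapsed:,.0f} calls/s)")
```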
This is just FYI, to bring some clarity into this; I work as a systems engineer and have some background in this area.
For sure I will disable this for myself, tho some people might start asking questions like: why does my 4K blu-ray/mkv (whatever) jellyfish movie have some frame loss, it wasn't the case a few kernels ago... or maybe not.
Btw, a lot of lr cores like mame2010, scummvm, yabause etc. already use -O3 as the GCC optimization level, so I highly doubt you'll see any crucial performance improvements.
I feel like everything loads much faster now, but I'm pretty sure it's a placebo effect.
If you have to tweak GCC optimization options to make things run properly, you'd better think about upgrading your hardware.
I have a Kaby Lake (Skylake-family) i5-7260U. It's more for fun, to squeeze the full force out of the CPU.
I remember we did some tests at work and found that some applications perform much better with non-standard compile flags, though the environment there is quite different.