Customising output bit depth, color format, dither, deband, hw resizer

  • I'm running LibreELEC 13 nightly on a Pi 4B with a Samsung S95B TV. I would like to customise the HDMI output to my TV, instead of getting a fixed 12-bit RGB with 10-bit content and 8-bit RGB with regular 8-bit content and the GUI. It looks like LibreELEC takes the highest bit depth reported by the EDID, which is 12-bit for my TV. However, since the files I play are 10-bit at most, I would like to output as close to the native depth as possible. I don't trust what my TV does with 12-bit input: since its panel is 10-bit, it might just truncate the last two bits, so I'd rather output 10 bits instead.

    I would also like to control the dithering, if any is applied. When playing 10-bit HDR white ramps and color gradients I see some flickering 'snow' or 'noise' on my display, which suggests temporal dithering is taking place. I would like to control whether temporal/spatial dithering is applied, and to favour rounding. Similarly, I would like to control the hardware resizing algorithm and debanding, if possible. Other settings would be custom YCC chroma subsampling and output range (limited/full); Kodi's option for the latter doesn't seem to affect the video output.

    I've tried using hdmi_ options in config.txt, such as hdmi_pixel_encoding, hdmi_force_mode, hdmi_deep_color and hdmi_drive, to force KMS, but they don't seem to affect the output of cat /sys/kernel/debug/dri/0/state. I've also tried adding parameters to cmdline.txt, such as video=HDMI-A-1:1920x1080-10@60eD,color_format=RGB, but that doesn't affect the output either; it seems hardwired to 12-bit RGB with my TV.

    Is there a way to pass kernel options or modify filesystem node parameters for the KMS to customise these settings?
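    To illustrate what I mean by preferring rounding over dithering, here's a toy sketch of reducing 10-bit values to 8-bit (plain Python; this is just the arithmetic, nothing to do with how the VC4 pipeline actually implements it):

```python
import random

def truncate(v10):
    # drop the two low bits: 10-bit -> 8-bit, always rounds down
    return v10 >> 2

def round_nearest(v10):
    # round to the nearest 8-bit code instead of truncating
    return min((v10 + 2) >> 2, 255)

def dither(v10):
    # crude random (temporal-style) dither: averaged over many frames it
    # approximates the missing precision, at the cost of visible noise
    return min((v10 + random.randint(0, 3)) >> 2, 255)

v = 514  # a 10-bit grey with no exact 8-bit representation
print(truncate(v), round_nearest(v))  # 128 129
avg = sum(dither(v) for _ in range(10000)) / 10000
print(avg * 4)  # close to 514 on average, but each frame flickers
```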
    chewitt  HiassofT

    Edited once, last by wyup (April 8, 2026 at 5:40 PM).

  • Is there a way to pass kernel options or modify filesystem node parameters for the KMS to customise these settings?

    I have a fuzzy recollection that the RPi5 uses 10-bit internally, padded to 12-bit for output in some circumstances, as the SoC doesn't natively support the required 10-bit output. There is some upstream work being done to improve output from Kodi which might indirectly influence things, but the direct and short answer to the question above is "No"

  • I have a fuzzy recollection that the RPi5 uses 10-bit internally, padded to 12-bit for output in some circumstances, as the SoC doesn't natively support the required 10-bit output. There is some upstream work being done to improve output from Kodi which might indirectly influence things, but the direct and short answer to the question above is "No"

    I think you are thinking of the YUV422 4kp60 HDMI output.

    It's the HDMI spec that doesn't support 10-bit explicitly - it just uses the 12-bit timings with two padding bits.

    Even if it did, it would look identical (there's no logical reason why sending 10 bits would look different from sending 10 valid bits in a 12-bit word).
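    A quick sketch of why: with 10 valid bits in a 12-bit word, the low two bits are zero padding, so the padding round-trips exactly and no information is gained or lost (illustrative Python, not anything from the driver):

```python
def pad_10_to_12(v10):
    # HDMI deep color carries the 10-bit sample in the top bits
    # of a 12-bit word; the two low bits are zero padding
    return v10 << 2

def unpad_12_to_10(v12):
    # dropping the padding bits recovers the original sample
    return v12 >> 2

# round-trips exactly for every possible 10-bit code
assert all(unpad_12_to_10(pad_10_to_12(v)) == v for v in range(1024))
```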

  • What is the output color format, range and bit depth that LibreELEC outputs for a 10-bit HDR 24p video, with 4kp60 HDMI enabled and 2160p out to an Enhanced HDMI port UHD TV, on an RPi4?

    I've tried a 23.976 4:4:4 2160p 10-bit HDR chroma subsampling test clip and my TV does not pass the 1:1 pixel accuracy check. In theory the Pi 4 supports 4:4:4 23.976p 2160p HDMI out if my TV supports it - does it? I have the input configured as HDMI-PC, to allow RGB.

    It seems the Pi 4 always outputs full range, no matter what I select in the 'Force 16-236 output' GUI option. It only affects the GUI, not video playback.

    Are there any parameters to choose from the DRM PRIME hardware decoder & video processor, such as scaling, dithering or debanding?
    Are dithering and debanding being applied when going from 8-bit to 10-bit?
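    For reference, the standard 8-bit full-to-limited range mapping I'm trying to verify is the following (sketch only; these are the usual video-level equations, not anything taken from the LibreELEC code):

```python
def full_to_limited(v):
    # map full-range 0-255 into limited-range 16-235 (8-bit luma):
    # black 0 -> 16, white 255 -> 235, 219 codes of headroom in between
    return 16 + round(v * 219 / 255)

print(full_to_limited(0), full_to_limited(255))  # 16 235
```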

  • Is LibreELEC Kodi a black box regarding YUV/RGB, bit depth and the DRM hardware video processing taking place? It would be nice to control scaling, dithering or debanding.

  • Is LibreELEC Kodi a black box regarding YUV/RGB, bit depth and the DRM hardware video processing taking place? It would be nice to control scaling, dithering or debanding.

    It's all open source, so feel free to control it how you like.

  • I'd like to discover the API in the LibreELEC code that calls the kernel DRM/KMS/V4L2 for hardware-accelerated functions like scaling and decoding, and also where in the Linux kernel code for the Pi that DRM/KMS/V4L2 support is found and whether it is actively maintained.

  • The kernel UAPIs for DRM/KMS/V4L2 are maintained and regularly evolve; if they were not, they would be deprecated as dead code and removed from the kernel. There is documentation in the kernel source, and since this is a fundamental and core part of the Linux kernel, there is extensive prior art in kernel code.

    Kodi uses Mesa for 2D capabilities (GLES/GL) and for opening the EGL context that video/GUI are rendered into. Kodi is largely a big fancy wrapper around libavcodec (FFmpeg), which on an RPi4/5 uses the v4l2_request (stateless) UAPI for HEVC and, on RPi4 (but not RPi5), the v4l2_m2m (stateful) UAPI for H264 support. FFmpeg also supports fallback to software decoding, e.g. how H264/VP9/AV1 etc. are handled on RPi5. If implementing changes you'd also need to consider VAAPI, and in the near future NVDEC.

    I'm not sure what thoughts you're having, but IMHO anyone who needs to ask where documentation is and how it's done will not be successful in coming up with meaningful changes themselves. Even working with Claude Opus and similar AI tools requires a baseline level of domain knowledge to generate prompts that achieve anything.
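    The decode-path split described above can be summarised as a lookup (this is just a restatement of this post as data, not an API or anything present in the code):

```python
# Hypothetical summary table of the decode paths described above;
# exact codec coverage varies by kernel and Kodi version.
DECODE_PATH = {
    ("RPi4", "HEVC"): "v4l2_request (stateless)",
    ("RPi4", "H264"): "v4l2_m2m (stateful)",
    ("RPi5", "HEVC"): "v4l2_request (stateless)",
    ("RPi5", "H264"): "software (libavcodec)",
    ("RPi5", "VP9"):  "software (libavcodec)",
    ("RPi5", "AV1"):  "software (libavcodec)",
}

print(DECODE_PATH[("RPi5", "H264")])  # software (libavcodec)
```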

  • Thanks. It's mostly for reference and for following the source code; I don't intend to make any changes. I'd just like to know where in the LibreELEC repository these UAPI wrappers are for the Pi 4 branch when I look through commits. I guess you port Kodi updates onto your LibreELEC sources for the available architectures. It's work that I appreciate.

    I am curious as to what improvements are coming to the Linux kernel regarding the V4L2 UAPIs for video decoders, i.e. what support they have for HEVC/H264 levels and profiles, and whether V4L2 is architecture-independent. Being hardware acceleration, how does V4L2 leverage different SoCs, such as Pi 4, Pi 5 and Amlogic, that have different GPUs?

  • It's all documented in the kernel source, both as technical documentation on the UAPIs and in the resulting codec driver code for each SoC/silicon-specific implementation. I'm going to stop at this point because vague questions require long and time-consuming answers. If you're truly curious, it's all open source - go read, or ask a decent AI tool to summarise it for you.