Posts by noggin

    On a Pi 5 with a 12.0 nightly I've had no problems so far playing 1080p 4:2:2 h.264/h.265 video files or streams, but UHD HEVC feeds don't work - they're supposed to play via HW decode, but playback switches to SW and the CPU hits its limit...


    UHD example...

    https://www.mediafire.com/file/i0rv6rukd…+record.ts/file

    Mine are all 1080i25 - not 1080p - so they also need to be deinterlaced.
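
    For reference, an easy way to check what a capture actually contains (field order, chroma subsampling, resolution, frame rate) is to ask ffprobe. A minimal sketch in Python - assuming ffprobe is installed, and "feed.ts" is just a placeholder file name:

        import json
        import subprocess

        # Query the first video stream for interlacing and pixel format details.
        probe = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries",
             "stream=codec_name,width,height,pix_fmt,field_order,avg_frame_rate",
             "-of", "json", "feed.ts"],          # "feed.ts" is a placeholder
            capture_output=True, text=True, check=True,
        )
        stream = json.loads(probe.stdout)["streams"][0]

        # field_order "tt"/"bb"/"tb"/"bt" means interlaced, "progressive" means progressive.
        # pix_fmt such as yuv422p or yuv422p10le indicates 4:2:2 chroma subsampling.
        print(stream["codec_name"], f'{stream["width"]}x{stream["height"]}',
              stream.get("pix_fmt"), stream.get("field_order"),
              stream.get("avg_frame_rate"))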


    I've just tested and some similar files are playing for me (nightly LE12 on Pi 5), with 10-bit, 12-bit, and 4:4:4 variants also playing. So I guess there's something different in your files.


    Interesting - those are all progressive - mine are interlaced - as is a lot of the 4:2:2 media carried on broadcast uplinks (the bulk of broadcast outside of North America - where 720p59.94 and 1080p59.94 are more of a thing - is 1080i25 or 1080i29.97, even if it ends up as 720p50 for transmission - as in Scandinavia and Germany for instance).

    Had a quick look before I left - will DM you a clip when I get a chance.

    I'm running LE 12 - the 20240106 nightly on a Pi 5 (6ca79da is the end suffix)

    4:2:2 MPEG2 content triggers a restart from the LE splash screen

    4:2:2 h.264 content also triggers a restart from the LE splash screen

    Is this LE11 or LE12? I did spend a while making sure the software paths for all the weird formats I could find (which did include 422/444 as well as 10-bit/12-bit) did the right thing. That may be LE12 only.

    If LE12 fails, can you point me at a sample file that crashes and I'll look into it.

    I'll try and check this evening and get back to you. It's 4:2:2 h.264 8-bit or 10-bit that crashed for me (both are in use for broadcast links now)

    Ah interesting, appreciate the info!

    Yep - that's 4:2:2 h.264 (used for broadcast backhaul/uplinks) not for feeds to the consumer (4:2:0 is used for that).

    4:2:2 video like that currently causes my LE Pi 5 to reboot - though AIUI the CPU should be fine to decode it in software (as it also should 4:2:2 MPEG2) so this is less likely to be a hardware limitation on the Pi 5, more a software issue that needs some work.

    All these cases that are not flush with the HDMI connector will cause problems. In fact the only really good ones seem to be the official plastic cases.

    Repeat after me: the Pi is conceptually NOT living room material. It's an educational device for tinkerers and nerds.

    Depends on your living room. I have Pi 4Bs in Flirc cases which have been great living room LibreElec boxes. They just sit there working. Early days so far with the Pi 5 - but it seems to be the same (but with a snappier UI). They sit in the AV unit, out of sight, often powered via PoE, and with RF remote controls (usually a PS3 Bluetooth Media Remote or a Vero RF remote) and just work for me. (I've switched to booting some from a USB 3 external SSD)

    I'd use the 3840 "4K" modes: have a read here: https://wiki.libreelec.tv/configuration/4k-hdr

    Yep - 100%. All consumer '4K' content and displays are 3840x2160 (also known as UHD), not 4096x2160 (technically '4K' but not what people call 4K in the consumer space)

    My issues with Pi 4B/5 and Denon AVR-X2400H were in 3840x2160 modes (I never use the default and annoying 4096x2160 resolution).

    However, it's early days - I recently also upgraded to a Denon AVR-X2800H (as I needed HDMI 2.1 support for a 2160p120 PS5 and our new LG TV) and things MAY have improved.

    The best case for the Pi 4 is the Argon One V1/V2, with on/off power management, and for the remote I've used a Rii mini keyboard for years - far better than a simple remote control.

    Beware the Argon cases with integrated HDMI extension - they can cause issues if you are running in a high bandwidth 4K HDMI 2.0 mode like 2160p60 4:2:2 12-bit. I had to stop using them for Pi 4B Kodi LibreElec installs for this reason, and switched to Flirc cases.

    Is there an easy way (e.g. using mediainfo, what is the relevant string to look for) to see which case a video is in?

    Do you believe the first case here can be decoded and displayed correctly, purely by outputting the untouched YCbCr data from decoder and the appropriate metadata? I believe the second case requires per-pixel processing which is likely infeasible at 4k without dedicated hardware or a very high performance gpu.

    It looks as if anything with dvhe.05 in its profile will be IPTPQc2 colour space rather than YCbCr. The dvhe.07 and dvhe.08 are based on Dolby metadata with YCbCr Rec 2020 PQ HDR10 video (plus an optional enhancement layer for added Dolby-ness). However I think this is what happens already in LE? The dvhe.05 stuff plays as purple and green. This may be because it's been ripped or mastered incorrectly.

    The UHD BD ripped and remuxed stuff usually contains something like dvhe.08.06, BL+RPU or dvhe.07.06, BL+EL+RPU and HDR10 compatible in the HDR format field for the HDR10 YCbCr HEVC Rec 2020 PQ base layer + DV RPU metadata. (Profile 7 is designed for a base encode (backwards compatible with other formats) plus an accompanying Enhancement Layer (which can deliver greater bit depth AIUI) and RPU metadata; Profile 8 is just a backwards compatible encode plus RPU metadata (with no DV enhancement data for increased bit depth).)

    The stuff that is ICtCp-ish (IPTPQc2 technically) usually contains something like dvhe.05.03, BL+RPU and no reference to HDR10 and no HDR10 metadata as it's not HDR10 backwards compatible, and has an ICtCp PQ encoding (in HEVC usually, though I think AV1 is also an option?) with DV RPUs. (MediaInfo still reports YUV as the color space - though I don't know if this is an assumption or a flagging thing.)
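
    To answer the earlier question about an easy way to check: the relevant string is MediaInfo's "HDR format" line. A rough sketch in Python - assuming the mediainfo CLI is installed, with "movie.mkv" as a placeholder name:

        import subprocess

        # Run the MediaInfo CLI and print the HDR format line(s). "movie.mkv" is a placeholder.
        out = subprocess.run(["mediainfo", "movie.mkv"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "HDR format" in line:
                print(line.strip())

        # Roughly: "dvhe.05.xx, BL+RPU" with no HDR10 mention suggests the IPTPQc2 case,
        # while "dvhe.07.xx, BL+EL+RPU" or "dvhe.08.xx, BL+RPU, HDR10 compatible" suggests
        # a YCbCr HDR10 base layer with DV metadata.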

    AIUI libplacebo will decode IPTPQc2 encoded video but I don't know how fully - there have been some discussions about Dolby specifics such as NLW and MMR.

    I've found my Pi 4 and Pi 5s both confuse my Denon AVR-X2400H + Sony 49XF900? TV combo (I haven't tried with my new LG OLED yet) - it does something similar. When the Pi boots you see the LibreElec splash screen and then get corrupt video (sometimes no signal on the TV, often digital shash).

    I found disconnecting the Pi's HDMI output and reconnecting (forcing a hot plug event?) re-establishes a video display that's robust from then on as frame rates and resolutions change. (I'm using a Pi Foundation Micro HDMI to HDMI short cable and then an Amazon HDMI 2.0b Premium Certified cable to the AVR, so I break the connection at the in-line HDMI junction between the two cables.)

    I think I've seen something similar with this combo when I change screen resolution and/or frame rate settings in Raspberry Pi OS - so suspect it's something to do with underlying HDMI handling that the Pi and Denon don't agree on.

    DV and HDR10+ dynamic metadata makes tonemapping content that is potentially outside your display's capabilities better (as HDR10 does to a lesser degree). However some manufacturers ignore the metadata (for HDR10 at least - treating it as PQ10) and use their own algorithms instead of using metadata information. (*)

    Ironically DV and HDR10/10+ are based on PQ and a direct 1:1 link between video level and pixel nit light output - HLG isn't (it doesn't mandate a 1:1 mapping between video level and light output - it's relative instead), and HLG has built-in support for ambient light level compensation. Dolby have had to accept that PQ doesn't work for some viewing scenarios and use DV IQ to do something similar to what HLG can do.

    Personally if DV and HDR10+ can improve my TV's performance I'm happy to take it - but I'm not going to spend LOTS of money chasing after support for them. That said, not having DV did partially influence me not to buy a Samsung Quantum Dot OLED - I've gone for an LG C3 (65").


    (*) Some DV profiles use the ICtCp colour space instead of YCbCr/YUV, which can in theory improve picture quality as it better maps the high and low bandwidth channels to match eye/brain perception - and also comes closer to constant luminance (I think) - which has always been an issue with YCbCr/YUV vs RGB gamma processing (I think). Other DV implementations use YCbCr/YUV HDR10/10+ base layers and add an enhancement layer to get nearer a 12-bit representation rather than 10-bit - which can also help. (You don't need a 12-bit capable display to see the benefit, as you'll get some of it with 12-bit content tonemapped down to a 10-bit display, I think.)

    The libplacebo open source library, which is provided by ffmpeg as a Vulkan-based video filter, has native support for Dolby Vision HDR, including Profile 5 conversion to HDR/PQ or SDR, reading DV side data, and reshaping (BL only, currently). The RPi4 has Vulkan 1.2 and GL 3.0 compatibility.
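
    For what it's worth, the sort of invocation that exercises that path looks roughly like this - purely a sketch, assuming an ffmpeg build with libplacebo and Vulkan support, with placeholder file names, converting a Profile 5 (IPTPQc2) file down to SDR BT.709:

        import subprocess

        # Rough sketch: use ffmpeg's libplacebo video filter to tone-map a DV Profile 5
        # file to SDR BT.709. Needs ffmpeg built with libplacebo + Vulkan; file names
        # are placeholders.
        cmd = [
            "ffmpeg", "-init_hw_device", "vulkan",
            "-i", "dv_profile5_sample.mkv",
            "-vf", ("libplacebo=tonemapping=auto:"
                    "colorspace=bt709:color_primaries=bt709:"
                    "color_trc=bt709:format=yuv420p"),
            "-c:v", "libx264", "-c:a", "copy",
            "sdr_out.mkv",
        ]
        subprocess.run(cmd, check=True)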

    LE Wiki says:

    Kodi supports Dolby Vision under Android (if the device is licensed for it) but not Linux. Dolby requires manufacturers to license their Intellectual Property and use integration libraries to decode the HDR metadata. Until FFMpeg comes up with a "clean room" reverse engineered open-source implementation, Kodi will not support it.

    I guess this is an open source implementation that LE could use to decode Dolby Vision to HDR10.

    Who could implement it, Kodi or LibreElec?


    NB the CoreElec DV implementation on DV-licensed platforms requires that CoreElec Linux runs alongside an existing Android (including DV support) installation, hooks into the DV decoder stuff in the Android file system, and uses it within CoreElec. It's a neat solution that's VERY bespoke to specific installations - and only works on licensed systems (though it is potentially pushing at the edges of legality - I'm not a lawyer).

    Well, could it at least decode DV to HDR10 with the libplacebo open source library, provided as a Vulkan-based video filter in FFmpeg?

    Is this ICtCp to YCbCr/YUV conversion for native DV HEVC stuff that uses the ICtCp style representation? (Used for the DV profiles commonly used by Netflix, Prime, Apple TV+ etc.) rather than just using the DV RPUs to generate new metadata for HDR10 YCbCr/YUV encodes? (Ignoring FEL - Full Enhancement Layer - DV streams that expand the HDR10 stuff to >10-bit?)

    Beware using 'DolbyVision' as a catch-all. There are lots of different variants of DV...

    Some are based around YCbCr (aka YUV) PQ HDR10 or HLG 10-bit HEVC/h.265 and then Dolby Vision RPU metadata (and in some cases also an expansion layer to get from 10-bit to 12-bit depth). These can usually be replayed with no Dolby Vision licensing requirement for the HDR10/HLG stuff. Examples of this type of DV sources are Dolby Vision UHD Blu-rays and DV video shot on iPhones.

    Other DV stuff is encoded purely in a DolbyVision video format using ICtCp representation and PQ instead of YCbCr/YUV - though still encoded in 10-bit HEVC/h.265. When these are replayed by non-Dolby Vision licensed devices they replay with magenta/green colours instead of normal colours, because they are interpreted as YCbCr when they aren't. This format is widely used for streaming platforms - where the streaming player can request a specific encode that it can play (rather than a single encode needing to be playable by multiple devices). The CoreElec DV implementation can play these OK; few other devices can.

    Pi 5 goes to Yamaha RX-V685 goes to a 4K TV.

    Replacing the cable from Pi 5 to the Yamaha fixed the problem. I have no idea if this cable is 2.0 or whatever, but it looks more expensive.

    Could it be that the Pi 5 produces a higher HDMI data rate than the Pi 4? Why is that so with the same video material? DVB-S2, HD. If so, that might be worth mentioning in future upgrade instructions. Or reduce the HDMI data rate to what is really needed for the actual video.

    Different HDMI output resolutions, frame rates and bit depths require different bandwidth HDMI cables. I think both the Pi 4B and Pi 5 can output the higher bandwidth 18Gbps modes such as 2160p60 4:2:2 12-bit (there isn't a 4:2:2 10-bit option in HDMI 2.0). Someone please correct me if I'm wrong.

    However it could be that your previous set-up with a Pi 4B didn't try to use 18Gbps modes, but your Pi 5 set-up does?
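
    As a rough sanity check on the numbers (my own back-of-envelope figures, assuming the usual CTA-861 timing for 2160p60):

        # Back-of-envelope TMDS bandwidth for 2160p60 over HDMI 2.0.
        # The standard 3840x2160p60 timing uses a 4400x2250 total raster -> 594 MHz pixel clock.
        pixel_clock_hz = 4400 * 2250 * 60
        # HDMI carries YCbCr 4:2:2 in a 12-bit container at the same TMDS rate as 8-bit 4:4:4:
        # 3 TMDS channels x 10 bits per symbol.
        tmds_bits_per_second = pixel_clock_hz * 3 * 10
        print(f"{tmds_bits_per_second / 1e9:.2f} Gbps")  # ~17.82 Gbps, right at the 18 Gbps HDMI 2.0 ceiling

    So a marginal cable that coped with a lower bandwidth mode on the old box could plausibly fall over once the new box picks an 18Gbps mode.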

    I am still using the "9.2.8" version on an RPi 3B. It is amazing how it can decode 10-bit 1080p x265 videos with ease. If I install the new version, even 720p videos suffer. This goes to show that when optimized, software can do amazing things, even on low end hardware. We don't really need all that power.

    To be fair - AIUI it's using GPU compute (i.e. offloading some computation to the GPU) to lighten the load on the CPU - so it's using more processing power than the CPU alone used for pure software decode can provide.

    Agree that DTS-HD is a high-bitrate audio format that requires a more powerful processor than the Raspberry Pi 4. This may result in lags and interruptions in the audio.

    Not sure I follow - the bitrate of DTS-HD Master Audio is no different to that of Dolby TrueHD or something like 192kHz 5.1 PCM? The Pi 4B can handle all of them OK in my experience. (It can also decode DTS-HD MA to PCM with no major issue.) Audio's not hugely processor-hungry (particularly if you aren't decoding it and just moving it from A to B).
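
    To put rough numbers on it (my own back-of-envelope figures):

        # Uncompressed 192 kHz / 24-bit / 5.1 PCM bitrate.
        pcm_bits_per_second = 192_000 * 24 * 6
        print(f"{pcm_bits_per_second / 1e6:.1f} Mbps")  # ~27.6 Mbps
        # For comparison, Blu-ray caps DTS-HD MA at roughly 24.5 Mbps and Dolby TrueHD at
        # roughly 18 Mbps - all tiny next to a UHD video stream that routinely runs 50-100 Mbps.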

    The usual stuttering stuff is caused by work-in-progress buffer code, or a non-complete understanding of some of the specs, as this is all being partially implemented by enthusiasts trying to solve a problem, rather than people being paid to commercialise a product.

    There was a long-standing issue relating to high-bitrate Dolby TrueHD on multiple platforms (including Intel, ISTR) that took a while to get to the bottom of.

    I am only getting stereo output (as in, front speakers, no center) when setting E-AC3/Atmos to passthrough. It seems the speaker config also matters a lot, e.g. E-AC3 only appears on the passthrough list if my speaker config is 2.0... but why would LE care about my TV speaker config when I'm bypassing it with passthrough? Make it make sense?

    You don't have transcoding to AC3 enabled, do you?

    That option only appears if you have 2.0 speaker config selected. The other passthrough options should appear for all speaker configurations.

    It's not an obvious configuration and sadly it does catch a few people out. I'm not sure what we could really do to improve it, though.

    Is the setting actually serving two use cases?

    1. Displays that can't accept p25 modes and only accept p50 (though EDID and/or whitelisting may catch this anyway?)

    2. Choosing to deinterlace i25 to p50 rather than p25

    The two don't fully overlap - but currently one setting covers both use cases?

    Personally I'd argue that defaulting to on rather than off would be the better default?