Posts by noggin

    noggin thanks a lot for testing; this is in line with my suspicion (the video driver using the wrong transformation matrix). I'm not 100% sure why 10/12-bit output isn't automatically enabled; it could be that we need to explicitly request it.

    Regarding the subsequent playback issues: which testbuild did you use? The one from the first post (from Feb 5) or the one from here RE: RPi4 testbuild with HDR support (Feb 10)? The latter build contains some important fixes that may be related.

    If you get that issue with the newer build, please post a log (SSH in, run "pastekodi" and post the URL) and ideally a link to a sample file so I can try to reproduce it locally.

    so long,

    Hias

    I was using the one from the first post (5th Feb) in this thread. I'll download the new one you linked to and see what happens.

    Disabling 2160p output modes in the whitelist and playing back a 2160p HDR10 file, I get 1080p RGB 8-bit output for HDR10, with an ST 2084 EOTF flagged. I haven't seen >8-bit depth or YCbCr output so far.

    Without doing a bit more digging I can't easily see on my HD Fury OSD whether Rec 2020 or Rec 709 gamut is being signalled, but looking at the output it looks like it's in the wrong gamut as expected (i.e. Rec 709 primaries rather than Rec 2020 are being flagged or assumed by my display). It's also not instantly easy to tell whether Rec 2020 or Rec 709 coefficients have been used for the YCbCr to RGB conversion (Rec 2020 has different YCbCr<->RGB conversion matrix values to Rec 709, just as Rec 709 HD and Rec 601 SD (*) differ from each other). A rough comparison of those coefficient sets is sketched below.

    (WRT Rec 2020 - I don't think much content uses YcCbcCrc, the constant-luminance option. I think all mainstream content, other than single-stream DV, which isn't going to be relevant here, is 4:2:0 YCbCr - output on most consumer players as 4:2:2 12-bit YCbCr.)
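
    To make the matrix point concrete, here's a minimal Python sketch of the non-constant-luminance R'G'B' -> Y'CbCr conversion using the luma coefficients from each recommendation. It's purely illustrative (not code from any build), just the textbook formula with the published Kr/Kb values.

        # Non-constant-luminance Y'CbCr from R'G'B' (values normalised 0..1),
        # using the luma coefficients (Kr, Kb) from each recommendation.
        COEFFS = {
            "BT.601":  (0.2990, 0.1140),   # SD
            "BT.709":  (0.2126, 0.0722),   # HD
            "BT.2020": (0.2627, 0.0593),   # UHD (non-constant-luminance variant)
        }

        def rgb_to_ycbcr(r, g, b, standard):
            kr, kb = COEFFS[standard]
            y = kr * r + (1.0 - kr - kb) * g + kb * b
            cb = (b - y) / (2.0 * (1.0 - kb))
            cr = (r - y) / (2.0 * (1.0 - kr))
            return y, cb, cr

        # The same R'G'B' pixel encodes to different Y'CbCr values under each
        # standard, so decoding with the wrong matrix shifts hue/saturation.
        for std in COEFFS:
            print(std, [round(v, 4) for v in rgb_to_ycbcr(1.0, 0.0, 0.0, std)])

    Running the same pixel through the wrong set of coefficients gives exactly the kind of hue/saturation shift described above, which is what makes it hard to separate a matrix error from a gamut-flagging error just by eye.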

    Also - only the first 2160p HDR file I play plays back correctly. Subsequent files fail to play (I get audio but only see the file-list GUI and have to reboot).

    (*) Rec 601 technically even has different RGB primaries, and thus gamuts, for the 525/480i and 625/576i variants due to the NTSC and PAL legacies before them... (and Japanese NTSC and North American NTSC have different white points...)

    I'll see what Gamut (Rec 709 vs Rec 2020) the HD Fury reports.

    I'll also see what bit depth I get if I remove 2160p modes from the whitelist.

    I've got test signals with various CLL values so can also check those when I get a chance.

    Is HLG HDR flagging supported too - or is it just PQ HDR currently?

    Just tried the .tar linked in the first post here. I get 2160p23.976 HDR10 HEVC stuff playing back with an HDR PQ EOTF flagged and HD Audio bitstreamed.

    However the video output is only 8-bit, not 10-bit, according to my HD Fury Vertex. So although the TV is in HDR mode and displaying HDR video content with the correct EOTF, there will potentially be banding as the 10-bit source video is being truncated to 8 bits? (If dithering is used this might be partially masked, though.)
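
    For anyone wondering what that truncation looks like in practice, here's a tiny illustrative Python sketch (nothing to do with what the actual driver does) comparing straight 10-bit to 8-bit truncation with a simple random dither: truncation turns a shallow gradient into flat bands, while dither trades the bands for fine noise.

        import random

        # Reduce a 10-bit code value (0-1023) to 8 bits (0-255): straight
        # truncation vs. adding ~half an 8-bit step of random noise before
        # rounding.  Purely illustrative.
        def truncate_10_to_8(v10):
            return v10 >> 2

        def dither_10_to_8(v10):
            noisy = v10 + random.uniform(-2.0, 2.0)   # +/- 2 codes in 10-bit units
            return max(0, min(255, round(noisy / 4.0)))

        # A shallow 10-bit ramp: truncation collapses it into runs of four
        # identical values (visible banding); dithering preserves the average level.
        ramp = list(range(512, 528))
        print([truncate_10_to_8(v) for v in ramp])
        print([dither_10_to_8(v) for v in ramp])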

    Is there supposed to be 10-bit 4:4:4/RGB or 12-bit 4:2:2 output implemented, to allow 10-bit content to be output in 10 or 12-bit - or is this test build just to get correctly flagged EOTFs?

    I notice I'm not offered 2160p50 or 2160p59.94 output options in Kodi, and when I play 2160p50/59.94 content, rather than it being output at 1080p50/59.94 I get no video replay at all. (I understand the Pi can't output 2160p50/59.94/60 with 4:2:0 - so for HDR 10-bit content at p50 and higher, 4:2:2 12-bit will be the only option for preserving 10 bits?)
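
    Some back-of-the-envelope numbers behind that last point, as a rough Python sketch. It only counts active-pixel payload and ignores blanking and the way HDMI actually packs 4:2:2 into a 12-bit container, so treat it as a sanity check rather than spec maths.

        # Rough 2160p60 video payloads: active pixels x average bits per pixel
        # x frame rate.  Blanking and HDMI TMDS/FRL packing are ignored.
        W, H, FPS = 3840, 2160, 60

        def payload_gbps(bit_depth, subsampling):
            # average samples per pixel for each chroma subsampling pattern
            samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[subsampling]
            return W * H * FPS * samples_per_pixel * bit_depth / 1e9

        for bits, sub in [(8, "4:4:4"), (10, "4:4:4"), (12, "4:2:2"), (10, "4:2:0")]:
            print(f"2160p60 {sub} {bits}-bit: ~{payload_gbps(bits, sub):.1f} Gbit/s")

    The interesting line is that 12-bit 4:2:2 carries the same raw payload as 8-bit 4:4:4 (roughly 11.9 Gbit/s), which is why 4:2:2 12-bit is the usual route to getting more than 8 bits out at 2160p50/60 on an HDMI 2.0-class link.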

    Yep, fixed it for me. The TV was unwatchable with interlacing enabled - a juddering, out-of-sync mess. RPi4 + TVHeadend. I switched to a totally different system (Xbian) because it was driving me nuts. It didn't make any sense at all that it could play a 160GB 4K video but not 1080i/480i TV. I've tried three different PVR backends/frontends over the last few days!

    Worth remembering that h.264 and interlaced content are handled by totally separate parts of the Pi 4B SoC from h.265/HEVC. The h.264/interlaced stuff is decoded and deinterlaced by the legacy VPU, which remains the same as on the Pi 3B+ and before, whereas the HEVC/h.265 decoder is a separate unit and is handled separately. I believe some builds at the moment are not handling stuff particularly effectively using the old decoder?

    I still find it admirable that folks are persisting with these technically dead Intel platforms when there are fully functional ARM alternatives available; waiting for working HDR on x64 is like waiting for this pandemic to end.

    I guess the benefit of Intel in speed terms is worthwhile - I switched to AMLogic platforms as my main daily driver and miss the snappiness of my Intel boxes. That said, the Apple TV 4K SoC is a beast: it software-decodes stuff that the other ARM platforms just choke on (38 Mbit/s 1080i25 4:2:2 h.264 with deinterlacing is doable on an ATV 4K in software).

    My understanding is that HDMI 1.4 audio has a max spec of 8 channels of 192kHz / 24-bit uncompressed PCM audio, which gives a max audio output bitrate of:

    8 x 192,000 x 24 = ~36.9 Mbit/s

    I guess any buffering for HDMI audio should treat that as the top audio bitrate that HDMI can carry?

    My understanding is that 8 x 192kHz x 24-bit is also the spec required for lossless compressed HD Audio to be carried over HDMI, which is why the Raspberry Pi 3B+ and earlier can't carry HD Audio: they could only manage 4 x 192kHz x 24-bit (or 8 x 96kHz x 24-bit), which doesn't guarantee enough bitrate for the peaks of lossless compressed HD Audio content (which may not compress hugely on very complex content?).

    Dolby TrueHD and DTS-HD MA soundtracks are usually lossless compressed 8 channels at 48kHz 24-bit (some are 96kHz and a very small number are 192kHz), which gives an uncompressed bitrate of around 9 Mbit/s (or 18 Mbit/s if 96kHz is used); however releases like Akira can contain 6-channel 192kHz 24-bit tracks (and 8 channels is theoretically allowed).

    For very complex audio the lossless compression used by DTS-HD MA and Dolby TrueHD may not deliver huge bandwidth savings - and if you add in Atmos and DTS:X data as well on newer tracks, this will also increase the bitrate a bit, I guess?
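
    The arithmetic behind those figures, as a quick Python sketch (uncompressed PCM payload only - channels x sample rate x bit depth - ignoring HDMI packet/framing overhead):

        # Uncompressed PCM payload in Mbit/s: channels x sample rate x bit depth.
        # HDMI packet/framing overhead is ignored - illustrative only.
        def pcm_mbps(channels, sample_rate_hz, bits):
            return channels * sample_rate_hz * bits / 1e6

        print(pcm_mbps(8, 192_000, 24))  # HDMI 1.4 PCM ceiling: ~36.9 Mbit/s
        print(pcm_mbps(4, 192_000, 24))  # Pi 3B+-class limit:   ~18.4 Mbit/s
        print(pcm_mbps(8,  96_000, 24))  # the other 3B+ option: ~18.4 Mbit/s
        print(pcm_mbps(8,  48_000, 24))  # typical TrueHD/DTS-HD MA track: ~9.2 Mbit/s
        print(pcm_mbps(6, 192_000, 24))  # e.g. a 6-channel 192kHz track:  ~27.6 Mbit/s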

    I think it was a "timing" problem more than anything else. Not long after Slice launched (late), the RPi2 came along, and the lack of a CM2 card meant the specs looked weak; then the RPi3 arrived to drive nails further into the coffin. The CM3 eventually caught up, but that was some time later again, and by that point the day-jobs of the "Five" had also multiplied via Pi success, and the company they'd formed hadn't sold enough product volume to really be viable. These days the Pi Foundation owns the IP/designs, but while I think having them create/release an "official" HTPC box would be good for their own sales, it would also put them into competition with their ecosystem in a negative way... but I should ask Gordon anyway :)

    That all makes sense - the CM1 vs Pi 2 was very unfortunate timing - and the CM1 was just a little bit underpowered (though with the MPEG2 and VC-1 licences it was still a very capable HD and SD media player).

    ISTR there was an issue with the continued use of the original Slice skin in Kodi - but I hadn't realised that the Pi Foundation now owned the IP and designs. I totally understand why the Pi Foundation wouldn't want to compete with others in the ecosystem. However, they do make the DVB-T2 HAT - and that, integrated into a modern Pi 4-based Slice (once the HDR and HD Audio stuff is implemented), would be lovely.

    None of these aftermarket cases really comes close to looking as well designed as the Slice :( - though the Argon cases are some of the nicest-looking out there at the moment IMO.

    I think the Slice was very much a passion project from the developers - the fact it didn't evolve and continue to be manufactured suggests it didn't make that much money (and it was very much a premium product).

    Which TV box with an AMLogic SoC currently supports 4:2:2 h.264 or h.265 videos, either in hardware or software? The S922X?

    I tried an RPi4 and it doesn't work with 4:2:2 h.264 or h.265 videos.

    Very few boxes support 4:2:2 video as it isn't a consumer format (DVD, DVB TV broadcasts, Blu-ray, UHD Blu-ray, Netflix, Prime etc. are all 4:2:0) so there is no reason to implement it in chipsets aimed at consumer devices.

    4:2:2 h.264/h.265 is only used by broadcasters on contribution circuits - not final-leg distribution - so unless you are building a box for feed hunters (a small market) 4:2:2 isn't really a 'must have' feature.

    None of the AMLogic chipsets that I am aware of support 4:2:2 hardware decode for this reason - so you need a box fast enough to decode it (and, if it's interlaced, also 2x deinterlace it with YADIF or W3FDIF) using CPU power.

    The only ARM media player platform that I know of that can currently do software decode (and deinterlace) of 1080i25 MPEG2 and h.264 4:2:2 is the Apple TV 4K (the ARM SoC in that is a beast).

    The other BIG advantage of the Apple TV 4K is that it has hardware acceleration for 4:4:4 and 4:2:2 h.265/HEVC decode, because Apple use that codec for their AirPlay/Sidecar iPad dual-display function, which compresses desktop video to h.265 for carriage over WiFi or a Lightning/USB Type-C connection (to avoid reducing the chroma res to 4:2:0 and it all going smeary when you use your iPad as a second display). The Apple TV also has that functionality. I've played 2160p50 4:2:2 h.265/HEVC on the Apple TV in MrMC with very low CPU usage (though it still wasn't perfect because of some A/V sync issues).

    If you want MPEG2 and h.264 4:2:2 with LibreELEC then a decent Intel or AMD solution is probably your best bet - I used to use a Haswell i5 NUC to play 4:2:2 stuff and it coped (just) once multithreaded software decode of h.264 was implemented in ffmpeg. However the LibreELEC implementation seemed to do a 4:2:2 to 4:2:0 conversion without interlace-aware chroma decode/conversion, so you had saturated areas deinterlaced with p25 rather than p50 motion (a known issue in ffmpeg).

    Also, if you are running 4K then your audio extractor also needs to be 4K-friendly, counterintuitive as this may seem. The reason for this is that HDMI audio is not carried separately from the video; it's actually embedded in the video signal (carried in the blanking period of the HDMI video signal, where there isn't active video).

    If you are running a higher data rate video signal (4:2:2 2160p60 for instance) then your extractor also needs to support the higher bandwidth.