So the SoC has no hardware deinterlacing filter? Bummer!
Dropping out of the hardware chain is probably crap for performance...
Is handing interlaced content over to be deinterlaced downstream just as crappy?
I think it's more complicated than that - which is why it's non-trivial to support it.
AIUI the Pi series aren't quite typical ARM SoCs. Effectively they were designed initially as a GPU that happened to have an ARM core on it too. (My understanding is that the VideoCore GPU boots first and then brings up the ARM CPU - though I may be wrong.)
Whatever the complexity, there is a way of running code on the VideoCore GPU from the ARM CPU, and one of the routes to doing that is an API called MMAL. GPU deinterlacing (which some call 'hardware' deinterlacing, though in reality it's code running on the GPU rather than the CPU - which is how accelerated deinterlacing has improved over the years on the Pi series) still takes the workload away from the CPU, unlike 'CPU' or 'software' deinterlacing, where the CPU handles everything.
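To make that concrete, here's roughly what the MMAL route looks like from the ARM side - a minimal sketch, assuming a Pi with the userland headers/libs installed. The "vc.ril.image_fx" component name and the deinterlace effect constant are real MMAL names; the specific effect parameter values, and which port they're set on, are assumptions modelled on player code I've seen, so treat those bits as illustrative only:

/* Minimal MMAL GPU-deinterlace setup sketch. Build on a Pi with:
 * gcc -I/opt/vc/include -L/opt/vc/lib deint.c \
 *     -lmmal -lmmal_core -lmmal_util -lbcm_host */
#include <stdio.h>
#include <bcm_host.h>
#include <interface/mmal/mmal.h>

int main(void)
{
    MMAL_COMPONENT_T *image_fx = NULL;

    bcm_host_init();  /* bring up the VideoCore side */

    /* image_fx is the firmware component that hosts the GPU
     * deinterlace code */
    if (mmal_component_create("vc.ril.image_fx", &image_fx) != MMAL_SUCCESS) {
        fprintf(stderr, "no image_fx component - GPU side unavailable?\n");
        return 1;
    }

    /* Ask for the 'advanced' deinterlace algorithm. NOTE: the effect
     * parameter values here (and whether the input or output port takes
     * them) are copied from player code I've seen, not documented fact. */
    MMAL_PARAMETER_IMAGEFX_PARAMETERS_T fx = {
        { MMAL_PARAMETER_IMAGE_EFFECT_PARAMETERS, sizeof(fx) },
        MMAL_PARAM_IMAGEFX_DEINTERLACE_ADV,
        4, { 3, 0, 0, 0 }
    };
    if (mmal_port_parameter_set(image_fx->input[0], &fx.hdr) != MMAL_SUCCESS)
        fprintf(stderr, "couldn't enable GPU deinterlace\n");

    /* A real player would now connect the decoder's output port to
     * image_fx's input and pump buffers through; omitted here. */
    mmal_component_destroy(image_fx);
    return 0;
}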
AIUI the issue is that the new video pipeline Linux and LibreELEC are using going forward avoids bespoke workarounds for different GPU/VPU architectures (instead relying on platform developers/maintainers to implement a compliant driver framework?), and so the old MMAL deinterlacing system can't simply be grafted on?
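To unpack what 'compliant driver framework' means here (again, as I understand it): the new pipeline is built on the kernel's standard V4L2/DRM interfaces, so a platform's deinterlacer would show up as an ordinary V4L2 memory-to-memory device that any player can discover with stock ioctls, rather than via Pi-only MMAL calls. A rough sketch of that discovery step - /dev/video10 is just an example node, the numbering differs per board:

/* Sketch: probing a V4L2 node to see if it's a memory-to-memory
 * device (the class a kernel-side deinterlacer/scaler belongs to). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* Example node only - which /dev/videoN is which differs per board */
    const char *node = "/dev/video10";
    int fd = open(node, O_RDWR);
    if (fd < 0) { perror(node); return 1; }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
        printf("driver: %s, card: %s\n",
               (const char *)cap.driver, (const char *)cap.card);
        if (cap.device_caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE))
            printf("%s is a mem2mem device - usable by any V4L2-aware player\n",
                   node);
    }
    close(fd);
    return 0;
}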
I don't THINK this means it's the end of accelerated deinterlacing on the Pi - but it will require a Pi dev to implement it somehow?
Or has handling of interlaced content been somewhat abandoned with the new framework? (I hope not - it's still used for a large chunk of ATSC and DVB TV, plus DVD and Blu-ray.)
*** PEOPLE IN THIS THREAD KNOW A LOT MORE ABOUT THIS THAN ME - PLEASE CORRECT ME! ***