I'm trying to incorporate 4K 30fps video and audio playback into a program running on a Raspberry Pi 4.
Using ffmpeg's low-level libraries (and a testing build of Mesa), I've reached the point where I can use the Pi's hardware decoder for 4K HEVC playback. However, when I use DRM to draw the decoded frames to an EGL surface, the player averages only 20 fps, even on 1080p video with hardware decoding.
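For context, my decoder setup follows what I believe is the standard AV_HWDEVICE_TYPE_DRM pattern, so decoded frames come out as AV_PIX_FMT_DRM_PRIME dma-bufs. A condensed sketch (the device path and error handling differ in my real code):

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

// Pick DRM PRIME output so decoded frames stay as dma-bufs.
static enum AVPixelFormat get_hw_format(AVCodecContext *ctx,
                                        const enum AVPixelFormat *fmts) {
    (void)ctx;
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_DRM_PRIME)
            return *p;
    return AV_PIX_FMT_NONE;  // no hardware path offered
}

int attach_drm_hwdevice(AVCodecContext *dec_ctx) {
    AVBufferRef *hw_dev = NULL;
    int ret = av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_DRM,
                                     "/dev/dri/card0", NULL, 0);
    if (ret < 0)
        return ret;
    dec_ctx->hw_device_ctx = hw_dev;  // libavcodec frees this reference
    dec_ctx->get_format = get_hw_format;
    return 0;
}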
I've been looking at hello_drmprime on GitHub, and it looks like it uses DRM planes instead of drawing to a surface. Should I rewrite my program to match the output process used in hello_drmprime, or is there something else I'm missing?
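From what I can tell, hello_drmprime takes the AVDRMFrameDescriptor out of each DRM PRIME frame, wraps it in a DRM framebuffer, and puts it straight onto a video plane, with no EGL in the path at all. This is my (possibly wrong) reading of that approach, condensed into a sketch; plane/CRTC selection, framebuffer cleanup, and error handling are all omitted:

#include <libavutil/hwcontext_drm.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int show_frame_on_plane(int drm_fd, uint32_t plane_id, uint32_t crtc_id,
                        const AVFrame *frame, int dst_w, int dst_h) {
    // For AV_PIX_FMT_DRM_PRIME, data[0] points at the frame descriptor.
    const AVDRMFrameDescriptor *desc = (const AVDRMFrameDescriptor *)frame->data[0];
    const AVDRMLayerDescriptor *layer = &desc->layers[0];
    uint32_t bo_handles[AV_DRM_MAX_PLANES] = {0};
    uint32_t handles[4] = {0}, pitches[4] = {0}, offsets[4] = {0};
    uint64_t modifiers[4] = {0};
    uint32_t fb_id = 0;

    // Import each dma-buf object as a GEM handle.
    for (int i = 0; i < desc->nb_objects; i++)
        drmPrimeFDToHandle(drm_fd, desc->objects[i].fd, &bo_handles[i]);

    for (int i = 0; i < layer->nb_planes; i++) {
        handles[i] = bo_handles[layer->planes[i].object_index];
        pitches[i] = layer->planes[i].pitch;
        offsets[i] = layer->planes[i].offset;
        modifiers[i] = desc->objects[layer->planes[i].object_index].format_modifier;
    }

    if (drmModeAddFB2WithModifiers(drm_fd, frame->width, frame->height,
                                   layer->format, handles, pitches, offsets,
                                   modifiers, &fb_id, DRM_MODE_FB_MODIFIERS))
        return -1;

    // Source coordinates are 16.16 fixed point; destination is in pixels.
    // The previous frame's fb would need drmModeRmFB once it's replaced.
    return drmModeSetPlane(drm_fd, plane_id, crtc_id, fb_id, 0,
                           0, 0, dst_w, dst_h,
                           0, 0, frame->width << 16, frame->height << 16);
}

The appeal, as I understand it, is that the display pipeline's scaler does the YUV conversion and scaling in hardware, so the GPU never has to touch the frame.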
In particular, the function below refreshes the screen each frame, and I've noticed that drmModeSetCrtc blocks the main loop for longer than a single frame should take:
void gbmSwapBuffers(EGLDisplay *display, EGLSurface *surface) {
    eglSwapBuffers(*display, *surface);

    // Grab the buffer GL just rendered into and wrap it in a DRM framebuffer.
    struct gbm_bo *bo = gbm_surface_lock_front_buffer(gbmSurface);
    uint32_t handle = gbm_bo_get_handle(bo).u32;
    uint32_t pitch = gbm_bo_get_stride(bo);
    uint32_t fb;
    drmModeAddFB(device, mode.hdisplay, mode.vdisplay, 24, 32, pitch, handle, &fb);

    // Full mode set every frame -- this is the call that blocks.
    drmModeSetCrtc(device, crtc->crtc_id, fb, 0, 0, &connectorId, 1, &mode);

    // Release the buffer and framebuffer from the previous frame.
    if (previousBo) {
        drmModeRmFB(device, previousFb);
        gbm_surface_release_buffer(gbmSurface, previousBo);
    }
    previousBo = bo;
    previousFb = fb;
}
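For comparison, I've also been looking at replacing the per-frame drmModeSetCrtc with drmModePageFlip, along these lines (untested sketch using the same globals as above; it assumes drmModeSetCrtc was called once during init to light up the CRTC):

static int waiting_for_flip;

static void page_flip_handler(int fd, unsigned int frame,
                              unsigned int sec, unsigned int usec, void *data) {
    (void)fd; (void)frame; (void)sec; (void)usec; (void)data;
    waiting_for_flip = 0;
}

void gbmSwapBuffersFlip(EGLDisplay *display, EGLSurface *surface) {
    eglSwapBuffers(*display, *surface);
    struct gbm_bo *bo = gbm_surface_lock_front_buffer(gbmSurface);
    uint32_t handle = gbm_bo_get_handle(bo).u32;
    uint32_t pitch = gbm_bo_get_stride(bo);
    uint32_t fb;
    drmModeAddFB(device, mode.hdisplay, mode.vdisplay, 24, 32, pitch, handle, &fb);

    // Queue the swap for the next vblank instead of doing a full mode set.
    waiting_for_flip = 1;
    drmModePageFlip(device, crtc->crtc_id, fb, DRM_MODE_PAGE_FLIP_EVENT, NULL);

    // Wait only until the flip completes (one vblank at most).
    drmEventContext evctx = { .version = 2, .page_flip_handler = page_flip_handler };
    while (waiting_for_flip)
        drmHandleEvent(device, &evctx);

    if (previousBo) {
        drmModeRmFB(device, previousFb);
        gbm_surface_release_buffer(gbmSurface, previousBo);
    }
    previousBo = bo;
    previousFb = fb;
}

This at least shouldn't stall for longer than one refresh interval, but it still ties the loop to vblank, so I don't know whether it's enough on its own or whether the plane route is the right answer.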
Thank you!