Posts by vaultingsecurity
-
Sorry, I'm still getting to grips with these rendering concepts. What would I have to do to troubleshoot and resolve this issue?
-
Thank you for confirming. I've gone ahead and written an implementation of my video player that renders directly to a plane, and performance has been great!
However, I've run into another issue. I need to render UI elements on top of the video, and draw a background color behind everything, wherever neither the UI nor the video is displayed. I've set these layers up as separate DRM planes: the foreground plane uses GBM and OpenGL to draw the controls, and the background plane is just a dumb buffer. When I display either of these planes in combination with the video plane, my program crashes with this message:
Code
[hevc @ 0x2453da0] Hwaccel V4L2 HEVC stateless V1; devices: /dev/media0,/dev/video19
Failed to alloc 12443648 from dma-heap(fd=25): 12 (Cannot allocate memory)
[hevc @ 0x2453da0] v4l2_request_hevc_start_frame: Failed to get dst buffer
Failed to alloc 12443648 from dma-heap(fd=25): 12 (Cannot allocate memory)
[hevc @ 0x2453da0] v4l2_request_hevc_end_frame: Failed to get dst buffer
[hevc @ 0x2453da0] hardware accelerator failed to decode picture
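For context, the background plane is set up along these lines (a simplified sketch rather than my exact code; drmFd, bgPlaneId and crtcId stand in for the real file descriptor and IDs I look up at startup):

Code
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: create a dumb buffer, fill it with a solid color and scan it out
 * on the lowest plane. drmFd, bgPlaneId and crtcId are placeholders for the
 * real handles obtained during setup. */
static int setupBackgroundPlane(int drmFd, uint32_t bgPlaneId, uint32_t crtcId,
                                const drmModeModeInfo *mode) {
    struct drm_mode_create_dumb create = { .width = mode->hdisplay,
                                           .height = mode->vdisplay,
                                           .bpp = 32 };
    if (drmIoctl(drmFd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
        return -1;

    uint32_t fb;
    if (drmModeAddFB(drmFd, create.width, create.height, 24, 32,
                     create.pitch, create.handle, &fb) < 0)
        return -1;

    /* Map the buffer and fill it with the background color (XRGB8888). */
    struct drm_mode_map_dumb map = { .handle = create.handle };
    if (drmIoctl(drmFd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0)
        return -1;
    uint32_t *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, drmFd, map.offset);
    for (size_t i = 0; i < create.size / 4; i++)
        pixels[i] = 0x00101010;

    /* Put the framebuffer on the background plane, covering the whole CRTC.
     * Source coordinates are in 16.16 fixed point. */
    return drmModeSetPlane(drmFd, bgPlaneId, crtcId, fb, 0,
                           0, 0, mode->hdisplay, mode->vdisplay,
                           0, 0, mode->hdisplay << 16, mode->vdisplay << 16);
}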
I've edited my /boot/config.txt to try and allocate extra memory to the GPU, but this doesn't seem to help. Here's a snippet of that file:
Code: /boot/config.txt
dtoverlay=vc4-kms-v3d,cma=512
max_framebuffers=2
dtoverlay=rpivid-v4l2
gpu_mem=128
I feel like I'm close to getting this working - any pointers would be greatly appreciated!
-
I have a program running on a Raspberry Pi 4 that I want to add 4K 30fps video and audio playback to.
Using ffmpeg's low-level libraries, I've reached a point where I'm able to leverage the Pi's hardware decoder for 4K HEVC playback with a testing version of Mesa. However, when drawing the video frames to an EGL surface through DRM, the player only averages 20 fps, even on 1080p videos with hardware decoding.
I've been looking at hello_drmprime on GitHub, and it looks like it uses DRM planes instead of drawing to a surface. Should I rewrite my program to match the output process used in hello_drmprime, or is there something else I'm missing?
In particular, this function is used to refresh the screen each frame. I've noticed that drmModeSetCrtc will block the main loop for longer than a single frame should take:
Code
void gbmSwapBuffers(EGLDisplay *display, EGLSurface *surface) {
    eglSwapBuffers(*display, *surface);

    /* Wrap the buffer GL just finished rendering in a DRM framebuffer */
    struct gbm_bo *bo = gbm_surface_lock_front_buffer(gbmSurface);
    uint32_t handle = gbm_bo_get_handle(bo).u32;
    uint32_t pitch = gbm_bo_get_stride(bo);
    uint32_t fb;
    drmModeAddFB(device, mode.hdisplay, mode.vdisplay, 24, 32, pitch, handle, &fb);

    /* Full modeset every frame; this is the call that blocks */
    drmModeSetCrtc(device, crtc->crtc_id, fb, 0, 0, &connectorId, 1, &mode);

    /* Release the buffer that was shown on the previous frame */
    if (previousBo) {
        drmModeRmFB(device, previousFb);
        gbm_surface_release_buffer(gbmSurface, previousBo);
    }
    previousBo = bo;
    previousFb = fb;
}
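For comparison, my (possibly wrong) understanding of hello_drmprime is that the decoder outputs AV_PIX_FMT_DRM_PRIME frames, and each frame's dma-buf is imported and put straight onto a DRM plane, with no EGL surface or GL copy involved. Something roughly like the sketch below, where drmFd, videoPlaneId and crtcId are placeholders; the real program also deals with the Pi's format modifiers and with cleaning up the previous framebuffer, which I've left out:

Code
#include <libavutil/frame.h>
#include <libavutil/hwcontext_drm.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: scan out a decoded DRM-PRIME frame on a video plane.
 * frame->data[0] holds an AVDRMFrameDescriptor when the decoder outputs
 * AV_PIX_FMT_DRM_PRIME; drmFd, videoPlaneId and crtcId are placeholders. */
static int showDrmPrimeFrame(int drmFd, uint32_t videoPlaneId, uint32_t crtcId,
                             const drmModeModeInfo *mode, const AVFrame *frame) {
    const AVDRMFrameDescriptor *desc = (const AVDRMFrameDescriptor *)frame->data[0];
    uint32_t handles[4] = {0}, pitches[4] = {0}, offsets[4] = {0};
    uint32_t boHandle, fb;

    /* Import the decoder's dma-buf as a GEM handle (single-object case). */
    if (drmPrimeFDToHandle(drmFd, desc->objects[0].fd, &boHandle) < 0)
        return -1;

    /* NV12 layout: one object, two planes (Y and interleaved UV). */
    for (int i = 0; i < desc->layers[0].nb_planes; i++) {
        handles[i] = boHandle;
        pitches[i] = desc->layers[0].planes[i].pitch;
        offsets[i] = desc->layers[0].planes[i].offset;
    }

    if (drmModeAddFB2(drmFd, frame->width, frame->height,
                      desc->layers[0].format, handles, pitches, offsets, &fb, 0) < 0)
        return -1;

    /* Hand the decoder's buffer directly to the display plane,
     * scaled to the full mode. Source coordinates are 16.16 fixed point. */
    return drmModeSetPlane(drmFd, videoPlaneId, crtcId, fb, 0,
                           0, 0, mode->hdisplay, mode->vdisplay,
                           0, 0, frame->width << 16, frame->height << 16);
}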
Thank you!