I needed to create a working command-line demo application that pipes an OpenGL rendering into a video file. This experiment is intended as a proof of concept, so the implementation is deliberately naive.

The main steps are:

OpenGL rendering

Reading the rendering’s output

Writing each output frame into the video

This guide covers an OpenGL 4.x application tested on Mac OS X. It should be portable to Linux/Windows as well.

If you only need to capture OpenGL output on iOS, the AVFoundation classes will make this process much easier. Similarly, if you only need to support Android (API 18+), you can use MediaMuxer to write an OpenGL rendering to a video file. See EncodeAndMuxTest.java for details.

OpenGL Rendering

The focus of this article isn't the OpenGL rendering logic, so let's just go with a basic OpenGL application which renders a sliding window across an image texture. The basic structure of the app will be (see the complete source code on GitHub):

initShaders();
loadTexture();
setupBuffers();
glutDisplayFunc(&drawScene);
glutIdleFunc(&replay);
glutMainLoop();

Reading the Rendered Output Frames

After drawing to the window, we use glReadPixels to read the pixel data from the OpenGL framebuffer into a block of memory. (Alternatively, you can use an OpenGL Pixel Buffer Object for more efficiency.)

unsigned char *raw_image = (unsigned char *) calloc(width * height * 3, sizeof(unsigned char));
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, raw_image);

Writing the Output Frames into a Video Format

Let's start by setting up the OpenCV video writer:

CvVideoWriter *writer = 0;
int isColor = 1;
int fps = 30;
int width = image_width;
int height = image_height;
writer = cvCreateVideoWriter("out.avi", CV_FOURCC('D', 'I', 'V', 'X'), fps, cvSize(width, height), isColor);

You can change CV_FOURCC('D', 'I', 'V', 'X') to CV_FOURCC('X', '2', '6', '4') for x264 output. Make sure you compiled ffmpeg with x264 support and OpenCV with ffmpeg.

We then create an OpenCV IplImage from this block of memory, and finally use OpenCV's cvWriteFrame to append the frame to the video output:

IplImage *img = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
img->imageData = (char *) raw_image;
cvWriteFrame(writer, img); // add the frame to the file
cvReleaseImage(&img);
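One detail to watch: glReadPixels returns rows bottom-to-top, while IplImage stores them top-to-bottom, so the frames will land in the video upside down unless you flip them. Calling cvFlip(img, NULL, 0) before cvWriteFrame handles this; as a rough sketch, a manual in-place row swap looks like the following (flip_rows_rgb is a hypothetical helper name, not an OpenCV or OpenGL function). Also note that the default GL_PACK_ALIGNMENT of 4 pads each row when width * 3 isn't a multiple of 4; calling glPixelStorei(GL_PACK_ALIGNMENT, 1) before glReadPixels avoids that.

```c
#include <stdlib.h>
#include <string.h>

/* Flip a tightly packed RGB image (3 bytes per pixel) vertically in place.
 * glReadPixels fills the buffer bottom row first, while IplImage expects
 * the top row first, so without a flip the video comes out upside down. */
static void flip_rows_rgb(unsigned char *pixels, int width, int height)
{
    int stride = width * 3;
    unsigned char *tmp = malloc(stride);
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom) {
        memcpy(tmp, pixels + top * stride, stride);
        memcpy(pixels + top * stride, pixels + bottom * stride, stride);
        memcpy(pixels + bottom * stride, tmp, stride);
    }
    free(tmp);
}
```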

You will need to read the framebuffer and write to the video using OpenCV on every frame update.

After you are done writing the frames to video, be sure to release the OpenCV video writer.

cvReleaseVideoWriter(&writer);

Alternatives to using OpenCV for video writing are the C libraries of ffmpeg or GStreamer, or libx264 (for H.264/MP4) and libvpx (for WebM) directly. These require the RGB image data to be converted to the YV12 (or YUV420) color space first.
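As a rough illustration of that conversion, here is a minimal CPU-side sketch of packed RGB24 to planar YUV420 (I420), assuming even dimensions and full-range BT.601 coefficients. rgb_to_i420 is a hypothetical helper; in practice you would normally let ffmpeg's libswscale (sws_scale) do this for you.

```c
#include <stdlib.h>

/* Convert a tightly packed RGB24 image to planar YUV420 (I420):
 * a full-resolution Y plane followed by quarter-resolution U and V planes.
 * (YV12 is the same layout with the U and V planes swapped.)
 * Assumes width and height are even; uses full-range BT.601 math. */
static void rgb_to_i420(const unsigned char *rgb, int width, int height,
                        unsigned char *yuv)
{
    unsigned char *y_plane = yuv;
    unsigned char *u_plane = yuv + width * height;
    unsigned char *v_plane = u_plane + (width / 2) * (height / 2);

    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            const unsigned char *p = rgb + (j * width + i) * 3;
            int r = p[0], g = p[1], b = p[2];

            y_plane[j * width + i] =
                (unsigned char)((299 * r + 587 * g + 114 * b) / 1000);

            /* Chroma is subsampled: one U and one V per 2x2 pixel block. */
            if ((i % 2 == 0) && (j % 2 == 0)) {
                int idx = (j / 2) * (width / 2) + (i / 2);
                u_plane[idx] = (unsigned char)((-169 * r - 331 * g + 500 * b) / 1000 + 128);
                v_plane[idx] = (unsigned char)((500 * r - 419 * g - 81 * b) / 1000 + 128);
            }
        }
    }
}
```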

This Stack Overflow post goes over the details of using the x264 C API: http://stackoverflow.com/questions/2940671/how-does-one-encode-a-series-of-images-into-h264-using-the-x264-c-api

Sample Project

I posted an example project on GitHub which demonstrates how this works: https://github.com/tc/opengl-to-video-sample