When we stumbled upon the hello_video code example that ships with the Raspberry Pi image, we decided to use it as a testbed for a media-decoding prototype. The code uses the OpenMAX (Open Media Acceleration) interface to provide the actual video decoding routines. Without an additional MPEG-2 license, only H.264 decoding is possible.
Since most DVB data is still MPEG-2, we focus on decoding this type of data. However, since OpenMAX provides an abstract interface, the adjustments for H.264 decoding are straightforward.
Because the sample video file provided with the source example is H.264, the first step is to create some media data in MPEG-2 format. That can easily be done with gnutv, for instance: gnutv -timeout 10 -out file test.mpeg $CHANNEL. The raw output of DVB is MPEG-TS, an MPEG container format that is commonly used in broadcasting systems. The container carries at least one audio and one video stream, and because the OpenMAX interface can only work with raw data of a single, dedicated type, the next step is to demultiplex the data into individual streams. This step actually involves some decoding to convert the MPEG-TS data into MPEG-ES, which can then be fed directly to the OpenMAX video decoding interface.
Sample code for the described procedure is available in several libraries that are part of most Linux distributions. However, because we wanted to minimize dependencies and provide a stand-alone prototype, we decided to use GPL-licensed code from the dvbstream package.
For those who just want to test it without typing much code, the output data can also be created with 'mplayer -dumpvideo -dumpfile test.mpeg test.ts'. The resulting test.mpeg is in MPEG-ES format and can be passed directly to the hello_video.bin program. However, a small modification to video.c is required to switch the decoder from H.264 to MPEG-2:
video.c:125 -- format.eCompressionFormat = OMX_VIDEO_CodingAVC;
video.c:125 ++ format.eCompressionFormat = OMX_VIDEO_CodingMPEG2;
If everything worked, the screen should be filled with some moving pictures.
After we had gathered all the information and code we needed, we decided to implement a proof-of-concept prototype that adds omx as an output option to gnutv. In short, instead of sending the output to a file or socket, the media output is sent directly to the video decoder, which means the data is shown directly on the screen. As noted before, we first need to convert the MPEG-TS data into MPEG-ES, and then we can use the OpenMAX routines.
After we finish cleaning up the code and have performed more tests, we will need to work on audio integration. However, with the insights we have gained, a small, lightweight TV solution for the Raspberry Pi is one step closer.