I finally got my hands on a Raspberry Pi, so of course the first thing I did was attempt to play video. For this I turned to GStreamer and spent two or three evenings trying to get the omx GStreamer elements to work, but I kept failing for various reasons. I managed to reach the stage of getting it to decode and play a single H.264 frame; it would then hang and never play any further video.

So I went in search of some other examples. I found a small example app from Broadcom which was actually located on the Debian distribution that shipped with the Pi. This was the "hello video" sample app, which played the H.264 file that came with it without any problems.
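As far as I can tell, the sample file is a raw Annex-B H.264 byte-stream, which matters later when encoding our own streams for it. If you are curious, you can check for the Annex-B start code at the beginning of the file (the path below assumes the standard sample location on the Pi's Debian image):

```shell
# Raw Annex-B H.264 begins with a 00 00 00 01 (or 00 00 01) start code.
# Path is an assumption based on where the hello_pi samples ship.
head -c 4 /opt/vc/src/hello_pi/hello_video/test.h264 | od -An -tx1
```

If you see `00 00 00 01` (or `00 00 01` in the first three bytes), it is a raw byte-stream rather than H.264 wrapped in a container.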

Running the sample app seemed to be simple:

cd /opt/vc/src/hello_pi/hello_video
./hello_video.bin test.h264

This got the video playing. Looking into the source of the application, I noticed that it was not doing anything clever with the H.264, e.g. seeks; it was in fact playing the H.264 file as a stream. I therefore decided the following would also work:

cat test.h264 | ./hello_video.bin /dev/stdin
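Since hello_video only reads the stream sequentially, a named pipe (FIFO) should work just as well as /dev/stdin. A sketch, assuming you are still in the hello_video directory:

```shell
# Sketch: feed hello_video through a named pipe instead of /dev/stdin.
# Assumes the working directory is /opt/vc/src/hello_pi/hello_video.
mkfifo /tmp/video.fifo
./hello_video.bin /tmp/video.fifo &   # opening the FIFO blocks until a writer appears
cat test.h264 > /tmp/video.fifo
wait
rm /tmp/video.fifo
```

The FIFO behaves exactly like the /dev/stdin trick: the player just sees a stream of bytes.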

That's great. It means it would also work with nc (netcat), so the video can be sent from another machine.

On the Pi:

nc -l -p 10001 | ./hello_video.bin /dev/stdin

On the other Linux machine:

gst-launch-0.10 videotestsrc ! \
    video/x-raw-yuv, width=720, height=402, framerate=30/1 ! \
    ffmpegcolorspace ! x264enc byte-stream=true ! \
    tcpclientsink host=<ip address of pi> port=10001

That managed to put up the nice GStreamer videotestsrc pattern on the Pi. So then I started to look into playing some real video, which also involves audio. The following also turns the Pi into an instant audio server:

gst-launch -v tcpserversrc host=0.0.0.0 port=10002 protocol=1 ! \
    audioconvert ! audioresample ! queue ! autoaudiosink

So, by simply using the Pi as a video and audio server, you can play any video by using the source machine to split the audio and video and send them to the separate services, like this.

On the Pi, start the video and audio servers:

nc -l -p 10001 | ./hello_video.bin /dev/stdin

gst-launch -v tcpserversrc host=0.0.0.0 port=10002 protocol=1 ! \
    audioconvert ! audioresample ! queue ! autoaudiosink

On your other Linux machine, run the player / re-encoder and send the streams to the Pi:

gst-launch-0.10 filesrc location=Movie.avi ! \
    decodebin2 name=dec ! queue ! ffmpegcolorspace ! \
    x264enc byte-stream=true psy-tune=1 speed-preset=1 ! \
    tcpclientsink host=<ip address of pi> port=10001 \
    dec. ! tcpclientsink host=<ip address of pi> port=10002 protocol=1

Now for the next problem. Technically the above works, though the audio quality is somewhat questionable. The video quality definitely passes, even though in the above GStreamer command I have tuned the x264 encoder to the fastest possible settings (I have a slow machine).

Something that is definitely interesting is how the Pi performs with the above setup. The audio playback seems to struggle, mostly because of driver issues, and its GStreamer process uses around 10% of the Pi's CPU time. The video player is split into two parts: the netcat process uses around 2% CPU time, and the H.264 sample application also uses around 2% CPU time, since the decoding is done in hardware. So that's a total of around 14% CPU time to play a movie.
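If you want to check those numbers on your own setup, a quick way to snapshot per-process CPU usage is something like the following (the process names here are assumptions; match them to whatever you actually started):

```shell
# Snapshot per-process CPU usage on the Pi, highest first.
# The names (nc, hello_video, gst-launch) are assumptions — adjust
# the pattern to match the processes running on your machine.
ps -eo pcpu,comm --sort=-pcpu | grep -E 'nc|hello_video|gst-launch'
```

`top` gives the same picture interactively, but the one-shot `ps` form is easier to paste into notes.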

Hopefully this opens up some cool options for people to experiment with. I can already think of something useful to do with a Pi, a wall projector, GStreamer and ximagesrc.