





















































Learn how to develop Multimedia applications using Python with this practical step-by-step guide
We will use the Python bindings of the GStreamer multimedia framework to process video data. See Python Multimedia: Working with Audios for instructions on installing GStreamer and the other dependencies.
For video processing, we will be using several GStreamer plugins not introduced earlier. Make sure that these plugins are available in your GStreamer installation by running the gst-inspect-0.10 command from the console (gst-inspect-0.10.exe for Windows XP users). Otherwise, you will need to install these plugins or use an alternative if available.
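For instance, to check whether the decodebin plugin is available, run:

$ gst-inspect-0.10 decodebin

If the plugin is installed, this prints its details; if not, gst-inspect reports that no such element or plugin exists.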
Following is a list of additional plugins we will use in this article:
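- queue: acts as a data buffer and a thread boundary in the pipeline
- autoconvert: automatically selects an appropriate converter based on the capabilities of the media data
- autovideosink: automatically detects and uses an appropriate video output device
- capsfilter: restricts the type of media data that can pass through it, as per the capabilities specified
- ffmpegcolorspace: converts a video frame from one colorspace to another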
Earlier, we saw how to play an audio file. Like audio, there are different ways in which a video can be streamed. The simplest of these is to use the playbin plugin. Another is to go back to basics: create a conventional pipeline, then create and link the required pipeline elements. If we only wanted to play the 'video' track of a video file, the latter technique would be very similar to the one illustrated for audio playback. However, one almost always wants to hear the audio track of the video being streamed, and there is additional work involved in accomplishing this. The following diagram shows a representative GStreamer pipeline and how the data flows during video playback.
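Schematically, the pipeline can be sketched as follows (the element names match the code developed later in this section):

filesrc → decodebin → queue1 → audioconvert → autoaudiosink
                    → queue2 → autoconvert → capsfilter → ffmpegcolorspace → autovideosink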
In this illustration, the decodebin uses an appropriate decoder to decode the media data coming from the source element. Depending on the type of data (audio or video), it is then streamed onward to the audio or video processing elements through the queue elements. The two queue elements, queue1 and queue2, act as media data buffers for the audio and video data, respectively. When the queue elements are added and linked in the pipeline, thread creation within the pipeline is handled internally by GStreamer.
Let's write a simple video player utility. Here we will not use the playbin plugin; its use will be illustrated in a later sub-section. Instead, we will develop this utility by constructing a GStreamer pipeline. The key is to use the queue as a data buffer: the audio and video data must be directed so that each 'flows' through the audio or video processing section of the pipeline, respectively. The following skeleton shows the overall structure of the VideoPlayer class and the main program:
import time
import thread
import gobject
import pygst
pygst.require("0.10")
import gst
import os

class VideoPlayer:
    def __init__(self):
        pass
    def constructPipeline(self):
        pass
    def connectSignals(self):
        pass
    def decodebin_pad_added(self, decodebin, pad):
        pass
    def play(self):
        pass
    def message_handler(self, bus, message):
        pass

# Run the program
player = VideoPlayer()
thread.start_new_thread(player.play, ())
gobject.threads_init()
evt_loop = gobject.MainLoop()
evt_loop.run()
As you can see, the overall structure of the code and the main program execution code remain the same as in the audio processing examples. The thread module is used to create a new thread for playing the video, and the method VideoPlayer.play is run on that thread. gobject.threads_init() is an initialization function that facilitates the use of Python threading within the gobject modules. The main event loop for executing this program is created using gobject, and the loop is started by the call evt_loop.run().
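The remaining methods follow the same pattern as the audio examples. For reference, here is one way __init__, play, and message_handler could be filled in; the input file path and the is_playing flag are placeholders of our own, so adjust them as needed:

def __init__(self):
    self.is_playing = False
    # Placeholder path; point this at a video file on your machine.
    self.inFileLocation = "C:/VideoFiles/my_video.avi"
    self.constructPipeline()
    self.connectSignals()

def play(self):
    # Start streaming and poll until playback finishes.
    self.is_playing = True
    self.player.set_state(gst.STATE_PLAYING)
    while self.is_playing:
        time.sleep(1)
    evt_loop.quit()

def message_handler(self, bus, message):
    # Stop playback on end-of-stream or error messages.
    if message.type == gst.MESSAGE_ERROR:
        print "\n Unable to play video. Error:", \
              message.parse_error()
        self.player.set_state(gst.STATE_NULL)
        self.is_playing = False
    elif message.type == gst.MESSAGE_EOS:
        self.player.set_state(gst.STATE_NULL)
        self.is_playing = False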
Instead of using the thread module, you can use the threading module as well. A minimal equivalent looks like this:
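import threading
threading.Thread(target=player.play).start()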
You will need to replace the line thread.start_new_thread(player.play, ()) in the earlier code snippet with the threading.Thread line shown above. Try it yourself! Next, let's review the VideoPlayer.constructPipeline method:
def constructPipeline(self):
    # Create the pipeline instance
    self.player = gst.Pipeline()

    # Define pipeline elements
    self.filesrc = gst.element_factory_make("filesrc")
    self.filesrc.set_property("location",
                              self.inFileLocation)
    self.decodebin = gst.element_factory_make("decodebin")

    # audioconvert for the audio processing pipeline
    self.audioconvert = gst.element_factory_make(
        "audioconvert")
    # autoconvert element for video processing
    self.autoconvert = gst.element_factory_make(
        "autoconvert")
    self.audiosink = gst.element_factory_make(
        "autoaudiosink")

    self.videosink = gst.element_factory_make(
        "autovideosink")

    # As a precaution, add a video capability filter
    # in the video processing pipeline.
    videocap = gst.Caps("video/x-raw-yuv")
    self.filter = gst.element_factory_make("capsfilter")
    self.filter.set_property("caps", videocap)
    # Converts the video from one colorspace to another
    self.colorSpace = gst.element_factory_make(
        "ffmpegcolorspace")

    self.videoQueue = gst.element_factory_make("queue")
    self.audioQueue = gst.element_factory_make("queue")

    # Add elements to the pipeline
    self.player.add(self.filesrc,
                    self.decodebin,
                    self.autoconvert,
                    self.audioconvert,
                    self.videoQueue,
                    self.audioQueue,
                    self.filter,
                    self.colorSpace,
                    self.audiosink,
                    self.videosink)

    # Link elements in the pipeline. Note that the decodebin
    # is not linked to the queues here; its source pads are
    # created at runtime and linked in decodebin_pad_added.
    gst.element_link_many(self.filesrc, self.decodebin)

    gst.element_link_many(self.videoQueue, self.autoconvert,
                          self.filter, self.colorSpace,
                          self.videosink)

    gst.element_link_many(self.audioQueue, self.audioconvert,
                          self.audiosink)
The video can be streamed even without the capsfilter and ffmpegcolorspace combination; however, the video may appear distorted, so using them is recommended. Try linking the autoconvert element directly to the autovideosink to see if it makes any difference. Also notice that the decodebin is not linked to the two queues in constructPipeline. The decodebin uses dynamic pads, which are created at runtime once the type of the input streams is known. The decodebin_pad_added method performs this linking whenever a new pad appears:
def decodebin_pad_added(self, decodebin, pad):
    compatible_pad = None
    caps = pad.get_caps()
    name = caps[0].get_name()
    print "\n cap name is = %s" % name
    if name[:5] == 'video':
        # Link the dynamic pad to the video queue.
        compatible_pad = (
            self.videoQueue.get_compatible_pad(pad, caps))
    elif name[:5] == 'audio':
        # Link the dynamic pad to the audio queue.
        compatible_pad = (
            self.audioQueue.get_compatible_pad(pad, caps))

    if compatible_pad:
        pad.link(compatible_pad)
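This method is hooked up in VideoPlayer.connectSignals. A minimal version, following the same pattern as the audio examples, might look like this (it assumes the message_handler sketched earlier):

def connectSignals(self):
    # Watch for messages (such as end-of-stream or errors) on the bus.
    bus = self.player.get_bus()
    bus.add_signal_watch()
    bus.connect("message", self.message_handler)

    # Link decodebin's dynamically created pads as they appear.
    self.decodebin.connect("pad_added", self.decodebin_pad_added)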
Once all the methods are in place, run the program from the console:

$ python PlayingVideo.py
This should open a GUI window where the video will be streamed. The audio output will be synchronized with the playing video.
We created a command-line video player utility and learned how to build a GStreamer pipeline that plays synchronized audio and video streams. We saw how the queue element can be used to buffer and route the audio and video data in a pipeline, and we illustrated the use of GStreamer plugins such as capsfilter and ffmpegcolorspace. The knowledge gained in this section will be applied in the upcoming sections of this article.
The goal of the previous section was to introduce you to the fundamental method of processing input video streams; we will use that method in one way or another in the discussions that follow. If video playback is all you want, the simplest way to accomplish it is by means of the playbin plugin. The video can be played just by replacing the VideoPlayer.constructPipeline method in the file PlayingVideo.py with the following code. Here, self.player is a playbin element, and the uri property of playbin is set to the input video file path. Since playbin builds and links the necessary decoding and output elements internally, no manual pad linking is required.
def constructPipeline(self):
    self.player = gst.element_factory_make("playbin")
    self.player.set_property("uri",
                             "file:///" + self.inFileLocation)