





















































This article is by Andrii Sergiienko, the author of the book WebRTC Cookbook.
WebRTC is a relatively new and revolutionary technology that opens new horizons in the area of interactive applications and services. Most of the popular web browsers support it natively (such as Chrome and Firefox) or via extensions (such as Safari). Mobile platforms such as Android and iOS allow you to develop native WebRTC applications.
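Before relying on native support, it is worth checking at runtime that the browser actually exposes the WebRTC APIs. The following sketch (the helper name is my own, not part of any framework) tests for the standard constructor as well as the older prefixed variants:

```javascript
// Hypothetical helper: detect whether a window-like object exposes WebRTC.
// RTCPeerConnection is the standard name; the prefixed variants cover
// older Chrome (webkit) and Firefox (moz) builds.
function hasWebRTC(win) {
    return !!(win.RTCPeerConnection ||
              win.webkitRTCPeerConnection ||
              win.mozRTCPeerConnection);
}
```

In a page, you would call `hasWebRTC(window)` before constructing any peer connection and show a fallback message when it returns false.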
In this article, we will cover the following recipes:
- Creating a multiuser conference using WebRTCO
- Taking a screenshot using WebRTC
- Compiling and running a demo application for Android
In this recipe, we will create a simple application that supports a multiuser videoconference. We will do it using WebRTCO—an open source JavaScript framework for developing WebRTC applications.
For this recipe, you should have a web server installed and configured. The application we will create can work while running on the local filesystem, but it is more convenient to use it via the web server.
To create the application, we will use the signaling server located on the framework's homepage. The framework is open source, so you can download the signaling server from GitHub and install it locally on your machine. GitHub's page for the project can be found at https://github.com/Oslikas/WebRTCO.
The following recipe is built on the framework's infrastructure and uses the framework's signaling server. All we need to do is include the framework's code and perform some initialization:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<style type="text/css">
video {
    width: 384px;
    height: 288px;
    border: 1px solid black;
    text-align: center;
}
.container {
    width: 780px;
    margin: 0 auto;
}
</style>
<script type="text/javascript" src =
"https://cdn.oslikas.com/js/WebRTCO-1.0.0-beta-min.js"
charset="utf-8"></script>
</head>
<body onload="onLoad();">
<div class="container">
    <video id="localVideo"></video>
</div>
<div class="container" id="remoteVideos"></div>
<div class="container">
    <div id="chat_area" style="width:100%; height:250px; overflow:auto; margin:0 auto; border:1px solid rgb(200,200,200); background:rgb(250,250,250);"></div>
</div>
<div class="container" id="div_chat_input">
    <input type="text" class="search-query" placeholder="chat here" name="msgline" id="chat_input">
    <input type="submit" class="btn" id="chat_submit_btn" onclick="sendChatTxt();"/>
</div>
<script type="text/javascript">
var videoCount = 0;
var webrtco = null;
var parent = document.getElementById('remoteVideos');
var chatArea = document.getElementById("chat_area");
var chatColorLocal = "#468847";
var chatColorRemote = "#3a87ad";
function getRemoteVideo(remPid) {
    var video = document.createElement('video');
    var id = 'remoteVideo_' + remPid;
    video.setAttribute('id', id);
    parent.appendChild(video);
    return video;
}
function onLoad() {
    var divChatInput = document.getElementById("div_chat_input");
    var divChatInputWidth = divChatInput.offsetWidth;
    var chatSubmitButton = document.getElementById("chat_submit_btn");
    var chatSubmitButtonWidth = chatSubmitButton.offsetWidth;
    var chatInput = document.getElementById("chat_input");
    var chatInputWidth = divChatInputWidth - chatSubmitButtonWidth - 40;
    chatInput.style.width = chatInputWidth + 'px';
    var lv = document.getElementById("localVideo");
webrtco = new WebRTCO('wss://www.webrtcexample.com/signalling',
lv, OnRoomReceived, onChatMsgReceived, getRemoteVideo, OnBye);
};
Here, the first parameter of the function is the URL of the signaling server. In this example, we used the signaling server provided by the framework; however, you can install your own signaling server and use the appropriate URL. The second parameter is the local video element. Then, we supply callback functions that handle the received room ID, incoming chat messages, and incoming remote video streams. The last parameter is a function that will be called when a remote peer disconnects.
function OnBye(pid) {
    var video = document.getElementById("remoteVideo_" + pid);
    if (null !== video) video.remove();
};
function OnRoomReceived(room) {
    var link = window.location.href + "?room=" + room;
    addChatTxt("Now, if somebody wants to join you, they should use this link: " +
        "<a href=\"" + link + "\">" + link + "</a>", chatColorRemote);
};
function addChatTxt(msg, msgColor) {
    var txt = "<font color=" + msgColor + ">" + getTime() + msg + "</font><br/>";
    chatArea.innerHTML = chatArea.innerHTML + txt;
    chatArea.scrollTop = chatArea.scrollHeight;
};
function onChatMsgReceived(msg) { addChatTxt(msg, chatColorRemote); };
function sendChatTxt() {
    var msgline = document.getElementById("chat_input");
    var msg = msgline.value;
    addChatTxt(msg, chatColorLocal);
    msgline.value = '';
    webrtco.API_sendPutChatMsg(msg);
};
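Note that addChatTxt() inserts chat messages into the page via innerHTML, so a remote peer could, in principle, inject markup into your page. A minimal escaping helper (my own sketch, not part of the WebRTCO API) can be applied to incoming text before it is displayed:

```javascript
// Hypothetical helper: escape HTML-significant characters in chat text
// before it is concatenated into innerHTML.
function escapeChatText(s) {
    return String(s)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
}
```

With this in place, onChatMsgReceived() could pass escapeChatText(msg) to addChatTxt() instead of the raw message.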
function getTime() {
    var d = new Date();
    var c_h = d.getHours();
    var c_m = d.getMinutes();
    var c_s = d.getSeconds();
    if (c_h < 10) { c_h = "0" + c_h; }
    if (c_m < 10) { c_m = "0" + c_m; }
    if (c_s < 10) { c_s = "0" + c_s; }
    return c_h + ":" + c_m + ":" + c_s + ": ";
};
Element.prototype.remove = function() {
    this.parentElement.removeChild(this);
};
NodeList.prototype.remove = HTMLCollection.prototype.remove = function() {
    for (var i = 0, len = this.length; i < len; i++) {
        if (this[i] && this[i].parentElement) {
            this[i].parentElement.removeChild(this[i]);
        }
    }
};
</script>
</body>
</html>
Now, save the file and put it on the web server, where it is accessible from a web browser.
Open a web browser and navigate to the location of the file on the web server. You will see an image from your web camera and a chat area beneath it. At this stage, the application has created the WebRTCO object and initiated the signaling connection. If everything is fine, you will see a URL in the chat area. Open this URL in a new browser window or on another machine; the framework will create a new video object for every new peer and add it to the web page.
The number of peers is not limited by the application. In the following screenshot, I have used three peers: two web browser windows on the same machine and a notebook as the third peer:
Sometimes, it can be useful to be able to take screenshots from a video during videoconferencing. In this recipe, we will implement such a feature.
No specific preparation is necessary for this recipe. You can use any basic WebRTC videoconferencing application. We will add some code to the HTML and JavaScript parts of the application.
Follow these steps:
<img id="localScreenshot" src="">
<canvas style="display:none;" id="localCanvas"></canvas>
<button onclick="btn_screenshot()" id="btn_screenshot">Make a screenshot</button>
function btn_screenshot() {
    var v = document.getElementById("localVideo");
    var s = document.getElementById("localScreenshot");
    var c = document.getElementById("localCanvas");
    // Match the canvas size to the video frame so the screenshot isn't cropped
    c.width = v.videoWidth;
    c.height = v.videoHeight;
    var ctx = c.getContext("2d");
    ctx.drawImage(v, 0, 0);
    s.src = c.toDataURL('image/png');
}
We use the canvas object to capture a frame of the video. Then, we convert the canvas data to a data URL and assign this value to the src attribute of the image object. After that, the image object displays the video frame that was captured into the canvas.
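Building on the same canvas, the captured frame can also be offered to the user as a file download. The following sketch (the helper name and filename are my own choices, not part of the recipe) creates a temporary link pointing at the canvas data URL and clicks it:

```javascript
// Hypothetical helper: offer the captured canvas frame as a PNG download.
// Relies only on standard DOM APIs: toDataURL and the anchor download attribute.
function downloadScreenshot(canvas, filename) {
    var link = document.createElement('a');
    link.href = canvas.toDataURL('image/png');
    link.download = filename;
    document.body.appendChild(link);  // some browsers require the link to be in the DOM
    link.click();
    document.body.removeChild(link);
}
```

Inside btn_screenshot(), this could be called after drawImage(), for example as downloadScreenshot(c, 'screenshot.png').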
Here, you will learn how to build a native demo WebRTC application for Android. Unfortunately, the supplied demo application from Google doesn't contain any IDE-specific project files, so you will have to deal with console scripts and commands throughout the build process.
We will need to check whether we have all the necessary libraries and packages installed on the work machine. For this recipe, I used a Linux box: Ubuntu 14.04.1 x64. So, any OS-specific commands will be given for Ubuntu. Nevertheless, using Linux is not mandatory, and you can use Windows or Mac OS X instead.
If you're using Linux, it should be 64-bit based. Otherwise, you most likely won't be able to compile Android code.
First of all, you need to install the necessary system packages:
sudo apt-get install git git-svn subversion g++ pkg-config gtk+-2.0 libnss3-dev libudev-dev ant gcc-multilib lib32z1 lib32stdc++6
By default, Ubuntu is supplied with OpenJDK, but it is highly recommended that you install the Oracle JDK; otherwise, you can face issues while building WebRTC applications for Android. Another thing to keep in mind is that you should probably use Oracle JDK version 1.6; other versions (in particular, 1.7 and 1.8) might not be compatible with the WebRTC code base. This will probably be fixed in the future, but in my case, only Oracle JDK 1.6 was able to build the demo successfully.
In case there is no download link for such an old JDK, you can try another URL: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html.
Oracle will probably ask you to sign in or register first. You will be able to download anything from their archive.
sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm && sudo /bin/sh ~/jdk-6u45-linux-x64.bin --noregister
Here, I assume that you downloaded the JDK package into the home directory.
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.6.0_45/bin/javac 50000
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.6.0_45/bin/java 50000
sudo update-alternatives --config javac
sudo update-alternatives --config java
cd /usr/lib
sudo ln -s /usr/lib/jvm/jdk1.6.0_45 java-6-sun
export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45/
java -version
You should see something like Java HotSpot in the output; this means that the correct JVM is installed.
Perform the following steps to get the WebRTC source code:
mkdir -p ~/dev && cd ~/dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=`pwd`/depot_tools:"$PATH"
gclient config http://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync
The last command can take a while (depending on your Internet connection speed), as you will be downloading several gigabytes of source code.
To develop Android applications, you should have Android Developer Tools (ADT) installed. This SDK contains Android-specific libraries and tools that are necessary to build and develop native software for Android. Perform the following steps to install ADT:
cd ~/dev
unzip ~/adt-bundle-linux-x86_64-20140702.zip
export ANDROID_HOME=`pwd`/adt-bundle-linux-x86_64-20140702/sdk
After you've prepared the environment and installed the necessary system components and packages, you can continue to build the demo application:
cd ~/dev/trunk
source ./build/android/envsetup.sh
export GYP_DEFINES="$GYP_DEFINES build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android"
gclient runhooks
ninja -C out/Debug -j 5 AppRTCDemo
After the last command, you can find the compiled Android package with the demo application at ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk.
Follow these steps to run an application on the Android simulator:
$ANDROID_HOME/tools/android sdk
Choose at least Android 4.x—lower versions don't have WebRTC support. In the following screenshot, I've chosen Android SDK 4.4 and 4.2:
cd $ANDROID_HOME/tools
./android avd &
The last command executes the Android SDK tool to create and maintain virtual devices. Create a new virtual device using this tool. You can see an example in the following screenshot:
./emulator -avd emu1 &
This can take a couple of seconds (or even minutes); after that, you should see a typical Android device home screen, as in the following screenshot:
cd $ANDROID_HOME/platform-tools
./adb devices
You should see something like the following:
List of devices attached
emulator-5554   device
This means that the virtual device you just created is up and running, so we can use it to test our demo application.
./adb install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk
You should see something like the following:
636 KB/s (2507985 bytes in 3.848s)
        pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success
This means that the application is transferred to the virtual device and is ready to be started.
While trying to launch the application, you might see an error message with a Java runtime exception referring to GLSurfaceView. In this case, you probably need to enable the Use Host GPU option while creating the virtual device with the Android Virtual Device (AVD) tool.
Sometimes, if you're using an Android emulator with a virtual device on the ARM architecture, you can face an issue where the application says No config chosen, throws an exception, and exits.
This is a known defect in the Android WebRTC code and its status can be tracked at https://code.google.com/p/android/issues/detail?id=43209.
The following change can help you fix this bug in the original demo application; after the AppRTCGLView object is created, explicitly set the EGL config chooser:
vsv = new AppRTCGLView(this, displaySize);
vsv.setEGLConfigChooser(8,8,8,8,16,16);
You will need to recompile the application:
cd ~/dev/trunk
ninja -C out/Debug AppRTCDemo
For deploying applications on an Android device, you don't need any developer certificates (unlike with iOS devices). So, if you have a physical Android device, it would probably be easier to debug and run the demo application on the device rather than on the emulator.
cd $ANDROID_HOME/platform-tools
./adb devices
If the device is connected and the machine sees it, you should see the device's name in the output of the preceding command:
List of devices attached
QO4721C35410    device
cd $ANDROID_HOME/platform-tools
./adb -d install ~/dev/trunk/out/Debug/AppRTCDemo-debug.apk
You will get the following output:
3016 KB/s (2508031 bytes in 0.812s)
        pkg: /data/local/tmp/AppRTCDemo-debug.apk
Success
After that you should see the AppRTC demo application's icon on the device:
After you have started the application, you should see a prompt to enter a room number. At this stage, go to http://apprtc.webrtc.org in a web browser on another machine; you will see an image from your camera. Copy the room number from the URL string and enter it in the demo application on the Android device. Your Android device and the other machine will try to establish a peer-to-peer connection, which might take some time. In the following screenshot, you can see the image on the desktop after the connection with the Android smartphone has been established:
Here, the big image shows what is transmitted from the front camera of the Android smartphone; the small image shows the feed from the notebook's web camera. So, both devices have established a direct connection and are transmitting audio and video to each other.
The following screenshot represents what was seen on the Android device:
The original demo doesn't contain any ready-to-use IDE project files, so you have to deal with console commands and scripts throughout the development process. You can make your life a bit easier by using third-party tools that simplify the build process. Such tools can be found at http://tech.pristine.io/build-android-apprtc.
In this article, we have learned to create a multiuser conference using WebRTCO, take a screenshot using WebRTC, and compile and run a demo for Android.