Demo

To use this demo you must

- Create a free Firebase account on their site and replace our credentials with yours
- Replace the TURN server with your own (you can get a free one, courtesy of Viagenie)
- Open this demo on another computer and press call on either computer

In addition to the CodePen below, you can alternatively clone the repo on GitHub.

For Cordova Developers - If you'd like an awesome plugin to access the native call UI for iOS (CallKit) and Android (ConnectionService), check out the following: https://github.com/WebsiteBeaver/CordovaCall

See the Pen PWmRmj by Daniel Marcus (@dmarcus) on CodePen.


Introduction

In this tutorial, you’ll learn how to build a simple video chat using WebRTC. You can view the demo above to see the video chat in action. Just replace the Firebase credentials in the CodePen, open this web page on another computer, and press call on either computer. I would encourage you to try this before continuing to read this tutorial. After an explanation of how this video chat works, we’ll dive right into the code. I’ll provide a line-by-line explanation of the code.

How This Video Chat Works

In order to get this video chat to work, we’ll use two important technologies. The first important technology is WebRTC (Web Real-Time Communication). Luckily you don’t need to import a library because WebRTC is built into your browser. That’s assuming you are using a browser that supports it. The second important technology is Google Firebase. Firebase is a live database (it has other very useful features aside from this). If you add data to your Firebase database, someone on your website doesn’t need to refresh the page to see the new data. The new data will just appear. Unlike WebRTC, you need to import the Firebase JavaScript library in order to use it. You also need to create a free Firebase account, which allows 100 simultaneous connections. The free tier is good enough for this tutorial. Firebase will allow us to send and receive messages live, which you need to get the video chat to work.

Set Up Firebase

1. Go to https://firebase.google.com and create a free account
2. Click "Create New Project"
3. Enter a project name and click "Create Project"
4. Click "Add Firebase to your web app"
5. Copy this code, and replace the credentials in the CodePen with your new credentials
6. Click on "Rules"
7. Change the values of .read and .write to true

Now anyone can read from and write to your Firebase database.

WebRTC Video Chat Procedure

Now that we have Firebase set up, let’s talk a little about how WebRTC can be used to set up a video chat. Say we have two computers, yours and your friend’s. Here is the step-by-step procedure needed to make the video chat work. (Note that I bolded words that sound strange, but are actually just JavaScript objects in JSON. I’ll give you examples of each right after these steps.)

1. Display a MediaStream video of yourself on your computer
2. Display a MediaStream video of your friend on his computer
3. Create a PeerConnection on your computer
4. Create a PeerConnection on your friend’s computer
5. Create an Offer on your computer
6. Add that Offer to the PeerConnection on your computer
7. Send that Offer to your friend’s computer
8. Add that Offer to the PeerConnection on your friend’s computer
9. Generate ICE Candidates on your computer
10. Send those ICE Candidates to your friend’s computer
11. Add those ICE Candidates to the PeerConnection on your friend’s computer
12. Create an Answer on your friend’s computer
13. Add that Answer to the PeerConnection on your friend’s computer
14. Send that Answer to your computer
15. Add that Answer to the PeerConnection on your computer
16. Generate ICE Candidates on your friend’s computer
17. Send those ICE Candidates to your computer
18. Add those ICE Candidates to the PeerConnection on your computer
19. Display a MediaStream video of your friend on your computer
20. Display a MediaStream video of yourself on your friend’s computer

Real example of a MediaStream object in JSON

MediaStream {
  active: true,
  id: "ARwKgYl2LvuZyw2zWzKWeGzXUEy0KHpJj9xW",
  onactive: null,
  onaddtrack: null,
  oninactive: null,
  onremovetrack: null
}

Go ahead and create your own MediaStream object by opening a blank Chrome tab and opening Developer Tools. Then in the console enter the following:

navigator.mediaDevices.getUserMedia({audio: true, video: true})
  .then(stream => console.log(stream));

Real example of a PeerConnection object in JSON

RTCPeerConnection {
  iceConnectionState: "new",
  iceGatheringState: "new",
  localDescription: RTCSessionDescription { sdp: "", type: "" },
  onaddstream: null,
  ondatachannel: null,
  onicecandidate: null,
  oniceconnectionstatechange: null,
  onnegotiationneeded: null,
  onremovestream: null,
  onsignalingstatechange: null,
  remoteDescription: RTCSessionDescription { sdp: "", type: "" },
  signalingState: "stable"
}

Go ahead and create your own PeerConnection object by going to Developer Tools and entering:

var pc = new webkitRTCPeerConnection({'iceServers':[{'urls':'stun:stun.l.google.com:19302'}]});

Real example of an Offer object in JSON

{ type: "offer", sdp: "v=0↵o=- 5245133456194626701 2 IN IP4 127.0.0.1↵s…3610 label:br19t341-rd8t-94tb-rE8j-p4625gt469y5↵" }

Go back to your Developer Tools and create your own Offer object by entering:

pc.createOffer()
  .then(offer => console.log(offer));

Real example of an Answer object in JSON

{ type: "answer", sdp: "v=0↵o=- 8590329309343532049 2 IN IP4 127.0.0.1↵s…3112 label:j7t8s39y-z7i2-5762-au49-re0c4gba479a↵" }

Go back to your Developer Tools and create your own Answer object by entering:

pc.createAnswer()
  .then(answer => console.log(answer));

Real example of an ICE Candidate object in JSON

{
  candidate: "candidate:5720275078 1 udp 8837102613 9201:398:am9u:14uf:2934:r39a:h753:z43i 38842 typ host generation 3 ufrag uEJl network-id 3 network-cost 82",
  sdpMLineIndex: 2,
  sdpMid: "audio"
}

Open up a new Developer Tools and create your own ICE Candidate objects by entering:

var servers = {'iceServers': [{'urls': 'stun:stun.l.google.com:19302'}]};
var pc = new webkitRTCPeerConnection(servers);
pc.onicecandidate = (event => console.log(event.candidate));
navigator.mediaDevices.getUserMedia({audio: true, video: true})
  .then(stream => pc.addStream(stream));
pc.createOffer()
  .then(offer => pc.setLocalDescription(offer));

You should see several ICE Candidate objects. When I typed this in, it gave me 12 ICE Candidates.

Explanation of CodePen Demo

Now that you know how to create PeerConnection, MediaStream, Offer, Answer, and ICE Candidate objects, you need to send some of those objects to your friend’s computer, and that’s where Firebase comes into play. Let’s analyze the CodePen demo from above.

We’ll start out with the HTML:

<html>
  <head>
    <script src="https://www.gstatic.com/firebasejs/3.6.4/firebase.js"></script>
    <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">
  </head>
  <body onload="showMyFace()">
    <video id="yourVideo" autoplay muted></video>
    <video id="friendsVideo" autoplay></video>
    <br />
    <button onclick="showFriendsFace()" type="button" class="btn btn-primary btn-lg"><span class="glyphicon glyphicon-facetime-video" aria-hidden="true"></span> Call</button>
  </body>
</html>

You need to load the Firebase JavaScript library. The Bootstrap CSS library is used to make the call button look nice. When you click the call button, the showFriendsFace() function is called. Once the body loads, showMyFace() is called. I’ll explain those functions when we go over the JavaScript portion of the demo. You’ll notice two video tags are used. One will be a video of you (yourVideo), and the other will be a video of your friend (friendsVideo). The autoplay attribute makes the video play immediately after you set its source. The muted attribute is only used on your video because you don’t want to hear yourself talk. Let’s move on to the CSS portion of the demo.

video {
  background-color: #ddd;
  border-radius: 7px;
  margin: 10px 0px 0px 10px;
  width: 320px;
  height: 240px;
}
button {
  margin: 5px 0px 0px 10px !important;
  width: 654px;
}

This code makes the videos have a gray background with rounded edges. We give each video player a fixed width and height in addition to margins. Same with the button. We fix its width and set its margins.

We’re on the final part of the code, which is the JavaScript. Believe it or not, this part of the code is under 60 lines.

//Create an account on Firebase, and use the credentials they give you in place of the following
var config = {
  apiKey: "AIzaSyCTw5HVSY8nZ7QpRp_gBOUyde_IPU9UfXU",
  authDomain: "websitebeaver-de9a6.firebaseapp.com",
  databaseURL: "https://websitebeaver-de9a6.firebaseio.com",
  storageBucket: "websitebeaver-de9a6.appspot.com",
  messagingSenderId: "411433309494"
};
firebase.initializeApp(config);

var database = firebase.database().ref();
var yourVideo = document.getElementById("yourVideo");
var friendsVideo = document.getElementById("friendsVideo");
var yourId = Math.floor(Math.random() * 1000000000);
var servers = {'iceServers': [
  {'urls': 'stun:stun.services.mozilla.com'},
  {'urls': 'stun:stun.l.google.com:19302'},
  {'urls': 'turn:numb.viagenie.ca', 'credential': 'webrtc', 'username': 'websitebeaver@mail.com'}
]};
var pc = new RTCPeerConnection(servers);
pc.onicecandidate = (event => event.candidate ? sendMessage(yourId, JSON.stringify({'ice': event.candidate})) : console.log("Sent All Ice"));
pc.onaddstream = (event => friendsVideo.srcObject = event.stream);

function sendMessage(senderId, data) {
  var msg = database.push({ sender: senderId, message: data });
  msg.remove();
}

function readMessage(data) {
  var msg = JSON.parse(data.val().message);
  var sender = data.val().sender;
  if (sender != yourId) {
    if (msg.ice != undefined)
      pc.addIceCandidate(new RTCIceCandidate(msg.ice));
    else if (msg.sdp.type == "offer")
      pc.setRemoteDescription(new RTCSessionDescription(msg.sdp))
        .then(() => pc.createAnswer())
        .then(answer => pc.setLocalDescription(answer))
        .then(() => sendMessage(yourId, JSON.stringify({'sdp': pc.localDescription})));
    else if (msg.sdp.type == "answer")
      pc.setRemoteDescription(new RTCSessionDescription(msg.sdp));
  }
}
database.on('child_added', readMessage);

function showMyFace() {
  navigator.mediaDevices.getUserMedia({audio: true, video: true})
    .then(stream => yourVideo.srcObject = stream)
    .then(stream => pc.addStream(stream));
}

function showFriendsFace() {
  pc.createOffer()
    .then(offer => pc.setLocalDescription(offer))
    .then(() => sendMessage(yourId, JSON.stringify({'sdp': pc.localDescription})));
}

In browsers that only expose the prefixed constructor, a shim such as var RTCPeerConnection = webkitRTCPeerConnection; makes this code work across browsers; modern browsers expose RTCPeerConnection directly. The first lines of code allow you to access your Firebase. As stated above, please create your own Firebase account and replace these credentials with yours.

var database = firebase.database().ref(); gives you access to the root of your Firebase database. database.on('child_added', readMessage); makes it so that anything added to the Firebase database by calling sendMessage automatically gets read. In other words, each message inserted into the Firebase database will be read, because readMessage is called whenever Firebase detects newly inserted data. Then we set yourVideo and friendsVideo to the video elements found in the HTML.

We give the user a random id which helps us differentiate between the two users. When we send data (Offer, Answer, and ICE Candidate objects) from your computer to your friend’s computer, your friend needs to receive them. And he will because you will send them through Firebase. However, Firebase will not only deliver it to your friend. It will also deliver it to you. Obviously you don’t need it delivered to you because you already have those objects since you created them. That’s where yourId comes into play. Say you send your friend Offer and ICE Candidate objects that you create. Firebase will send those objects to your friend and to you. What you need to do is check to see who sent the message. If the sender has the same Id as you, then just ignore the message. The same dilemma exists for your friend, because he needs to send you Answer and ICE Candidate objects that he creates. Take a look at the readMessage function. You’ll notice if (sender != yourId) is wrapped around most of that function. That means that we won’t read the Firebase messages unless they’re sent by the other person.
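The sender-filtering idea can be seen in isolation without Firebase at all. Here is a minimal sketch, using a plain array in place of the Firebase database; the names (deliver, received, friendsId) are hypothetical, not part of the demo:

```javascript
// Minimal sketch of the sender-filtering idea. A plain array stands in
// for the Firebase database; deliver() plays the role of readMessage.
var yourId = 123456789;
var friendsId = 987654321;
var received = [];

// Every message is broadcast to everyone, including its own sender,
// so each peer must drop messages tagged with its own id.
function deliver(message) {
  if (message.sender != yourId) {
    received.push(message);
  }
}

deliver({ sender: yourId, data: "my own offer" });       // ignored
deliver({ sender: friendsId, data: "friend's answer" }); // kept

console.log(received.length);  // 1
console.log(received[0].data); // "friend's answer"
```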

Right after we generate a random user Id, we declare the servers that we will use. You’ll notice that we include two STUN servers (Google’s and Mozilla’s) and one TURN server. You can add as many STUN and TURN servers as you like. If a STUN server doesn’t work, WebRTC will try the next one, which is why you should list several. STUN servers are much cheaper to run than TURN servers, since a TURN server relays the actual media, which is why Google and Mozilla let anyone use their STUN servers for free. Free TURN servers are harder to find, but they do exist. You can also set up your own STUN and TURN servers if you don’t want to use the ones that Google and Mozilla provide.
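The shape of the configuration is worth a closer look: it is a plain object whose iceServers property is an array, with one entry per server, and TURN entries also carry credentials. These are the same servers the demo uses:

```javascript
// The configuration object passed to RTCPeerConnection. WebRTC tries
// the listed servers as needed; STUN entries need only a URL, while
// TURN entries also carry a username and credential.
var servers = {
  'iceServers': [
    {'urls': 'stun:stun.services.mozilla.com'},
    {'urls': 'stun:stun.l.google.com:19302'},
    {'urls': 'turn:numb.viagenie.ca',
     'credential': 'webrtc',
     'username': 'websitebeaver@mail.com'}
  ]
};

console.log(servers.iceServers.length); // 3
```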

var pc = new RTCPeerConnection(servers); creates a PeerConnection object on your computer when you open the CodePen demo. It also creates a PeerConnection on your friend’s computer when he opens the CodePen demo.

pc.onicecandidate = (event => event.candidate ? sendMessage(yourId, JSON.stringify({'ice': event.candidate})) : console.log("Sent All Ice")); waits for an ICE Candidate object to be created on your computer. Once you call setLocalDescription later on in the code, several ICE Candidates will be created. That means this function will be called several times, once for each ICE Candidate created. When you create an ICE Candidate, this function turns the object into a string. It then sends the string to your friend via Firebase. Your friend’s computer does the same. In other words, you send him all of your ICE Candidates one at a time, and your friend sends you all of his ICE Candidates one at a time. When you and your friend receive an ICE Candidate in string form, delivered by Firebase, you need to convert the string back into an ICE Candidate object with JSON.parse(data.val().message);. Then you need to add the ICE Candidate to your PeerConnection by calling pc.addIceCandidate(new RTCIceCandidate(msg.ice));. Your friend needs to add the ICE Candidates you send him to his PeerConnection by calling that same function.
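The stringify/parse round trip that carries each candidate through Firebase can be tried on its own. The candidate fields below are made up for illustration; real candidates come from the browser's onicecandidate event:

```javascript
// Sketch of how a candidate survives the trip through Firebase:
// wrapped and stringified on the sender's side, parsed back on the
// receiver's side. The candidate values here are illustrative only.
var candidate = {
  candidate: "candidate:1 1 udp 2122260223 192.0.2.1 54321 typ host",
  sdpMLineIndex: 0,
  sdpMid: "audio"
};

// Sender side: this is the string that sendMessage transmits.
var wire = JSON.stringify({ ice: candidate });

// Receiver side: this is what readMessage recovers; the result would
// then be passed to new RTCIceCandidate(msg.ice).
var msg = JSON.parse(wire);

console.log(msg.ice.sdpMid);        // "audio"
console.log(msg.ice.sdpMLineIndex); // 0
```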

pc.onaddstream = (event => friendsVideo.srcObject = event.stream); waits for all of the objects (Offer, Answer, ICE Candidates) to be sent. Then your friend’s video (MediaStream object) will be available to you, and your video (MediaStream object) will be available to him. The onaddstream event will be called, and you can set friendsVideo.srcObject to that MediaStream object. This will display a video of him on your computer, and a video of you will show up on his computer. Remember that friendsVideo refers to the HTML video element.

Let’s skip to the showMyFace function. The code for this function is very short.

function showMyFace() {
  navigator.mediaDevices.getUserMedia({audio: true, video: true})
    .then(stream => yourVideo.srcObject = stream)
    .then(stream => pc.addStream(stream));
}

When you call getUserMedia , your browser asks for permission to access your camera. This will return a MediaStream object, which you can set yourVideo.srcObject to. Those two lines show a video of you on your computer. Then you need to add that same MediaStream object to your PeerConnection object. Your friend needs to do the same. This function gets called as soon as the page loads, so you’ll see your face once you load the page.

Once you and your friend have the CodePen demo open, you need to press the call button. This will call the showFriendsFace function.

function showFriendsFace() {
  pc.createOffer()
    .then(offer => pc.setLocalDescription(offer))
    .then(() => sendMessage(yourId, JSON.stringify({'sdp': pc.localDescription})));
}

You create an Offer object by calling pc.createOffer() . This will return an Offer object. Set your local description to this offer by calling pc.setLocalDescription(offer) . Finally send that Offer object to your friend by calling sendMessage .

Your friend will read the message because of the readMessage function. Since the message type is an offer, the following lines of code from readMessage will be executed:

pc.setRemoteDescription(new RTCSessionDescription(msg.sdp))
  .then(() => pc.createAnswer())
  .then(answer => pc.setLocalDescription(answer))
  .then(() => sendMessage(yourId, JSON.stringify({'sdp': pc.localDescription})));

So you just sent your friend an Offer object that you created. He will set his remote description to that Offer object you sent him by calling pc.setRemoteDescription(new RTCSessionDescription(msg.sdp)) . Then he will create an Answer object by calling pc.createAnswer() . This function returns an Answer object which your friend will set his local description to. He does this by calling pc.setLocalDescription(answer) . Then he takes that Answer object and sends it to you by calling sendMessage .

Now you will read the message because readMessage will get called. Since the message type is answer, the following lines of code from readMessage will be executed:

pc.setRemoteDescription(new RTCSessionDescription(msg.sdp));

Once you call setLocalDescription, you generate ICE Candidate objects, which you need to send to your friend. The onicecandidate callback that we talked about before will send these ICE Candidates to your friend one at a time. Here is that code in case you forgot:

pc.onicecandidate = (event =>
  event.candidate
    ? sendMessage(yourId, JSON.stringify({'ice': event.candidate}))
    : console.log("Sent All Ice"));

Your friend will read the message because readMessage gets called. When you send him an ICE Candidate, the following line of code from readMessage gets called:

pc.addIceCandidate(new RTCIceCandidate(msg.ice));

This adds the ICE Candidates you send him to his PeerConnection object. When he calls setLocalDescription , he also generates ICE Candidate objects that he sends to you. You take those ICE Candidate objects and call pc.addIceCandidate(new RTCIceCandidate(msg.ice)); to add them to your PeerConnection.

At this point the WebRTC connection is complete, and onaddstream gets called:

pc.onaddstream = (event => friendsVideo.srcObject = event.stream);

This line of code takes the MediaStream object that your friend sends you and adds it to your video element. This causes a video of your friend to show up on your computer. The same happens for your friend. Now your MediaStream is available to him. And onaddstream gets called for him. This causes a video of you to show up on his screen.

By now you probably understand that sendMessage adds data to your Firebase database, and readMessage reads that data. The sendMessage function takes a string (an Offer, Answer, or ICE Candidate object after JSON.stringify is called) and inserts it into your database. Immediately after, it gets removed from your database. This is because as soon as something is inserted into Firebase, it gets read, so we don’t need it anymore.
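The push-then-remove pattern can be sketched with a tiny in-memory stand-in for Firebase. MockDatabase below is hypothetical and not part of the Firebase API; it only mimics the push/remove/child_added behavior the demo relies on:

```javascript
// Tiny in-memory stand-in for Firebase, just to show why removing a
// message right after pushing it still lets listeners read it.
function MockDatabase() {
  this.rows = [];
  this.listeners = [];
}
MockDatabase.prototype.on = function (event, cb) {
  if (event === 'child_added') this.listeners.push(cb);
};
MockDatabase.prototype.push = function (row) {
  var rows = this.rows;
  rows.push(row);
  // Deliver to listeners immediately, like Firebase's child_added.
  this.listeners.forEach(cb => cb(row));
  return { remove: function () { rows.splice(rows.indexOf(row), 1); } };
};

var database = new MockDatabase();
var seen = [];
database.on('child_added', row => seen.push(row.message));

// Same shape as the demo's sendMessage: insert, then delete right away.
function sendMessage(senderId, data) {
  var msg = database.push({ sender: senderId, message: data });
  msg.remove();
}

sendMessage(42, "hello");
console.log(seen);          // ["hello"] — the message was still read
console.log(database.rows); // [] — but nothing lingers in the database
```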

That’s all it takes to make a working one-to-one video chat using WebRTC. You’ll notice that the demo code uses .then several times. .then is a method on a Promise, and Promises are not specific to WebRTC; they are a general JavaScript feature for handling asynchronous code. You don’t need to use .then (Promises) with WebRTC. You can use regular callbacks instead, but Promises are easier to read. You’ll also notice that we use => in our JavaScript code. These are arrow functions, and they are just a shorter way to write the function expressions you’re used to seeing. It’s similar to the ternary operator: you can write an if/else statement, or you can abbreviate it using the ternary operator. The same is true here. You don’t need to use =>; it just makes the code more readable.
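To make those two shortcuts concrete, here is the same function written both ways, and the same conditional written as if/else and as a ternary:

```javascript
// Arrow function vs the equivalent regular function expression.
var doubleArrow = x => x * 2;
var doubleClassic = function (x) { return x * 2; };
console.log(doubleArrow(21));   // 42
console.log(doubleClassic(21)); // 42

// Ternary operator vs the equivalent if/else.
function describeTernary(n) {
  return n % 2 === 0 ? "even" : "odd";
}
function describeIfElse(n) {
  if (n % 2 === 0) return "even";
  return "odd";
}
console.log(describeTernary(7)); // "odd"
console.log(describeIfElse(7));  // "odd"

// .then chains a callback onto a Promise; the arrow below is just a
// compact way of writing that callback.
Promise.resolve("offer")
  .then(value => console.log(value)); // "offer"
```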

Conclusion

Congrats! You just finished building a video chat with WebRTC using HTML, CSS, and JavaScript in under 80 lines of code. This demo is kept short in order to make learning WebRTC easier. No plugins or libraries are required for this demo (aside from Firebase and Bootstrap). Bootstrap is used to make the demo look nicer. In this demo, Firebase is used to send and receive objects. This role is called a signaling server. You don’t need to use Firebase as your signaling server; you can use Socket.io instead. If you’re unclear about anything in this tutorial, or if there’s a tutorial you’d like us to make, please leave a comment below. We take the time to read each comment, and we’d like to hear from you. Hopefully you can now apply the knowledge you've obtained from this tutorial to your next website or mobile app.