Since ipywidgets 7.4, we have two new widgets: the Audio and Video widgets, which make it easy to do image and audio processing in the Jupyter Notebook and JupyterLab.

Like the Image widget, the new Audio and Video widgets synchronize their binary data between the back end and the front end. You can easily manipulate this data with your favorite library (OpenCV, scikit-image…) and update the widget value dynamically.

Edge detection using OpenCV on a Video widget

These two widgets are nice building blocks for the ipywebrtc library, created by Maarten Breddels (the author of the awesome vaex and ipyvolume libraries). ipywebrtc uses the power of the WebRTC browser API to allow video streaming inside the Jupyter Notebook.

The API of ipywebrtc is very simple: first, the user creates what we call a MediaStream widget. A MediaStream widget can be any of the following:

- A WidgetStream widget, given ANY input widget
- A VideoStream widget, given a Video widget as input
- An ImageStream widget, given an Image widget as input
- An AudioStream widget, given an Audio widget as input
- A CameraStream widget, which creates a video/audio stream from the user's webcam

Using a MediaStream widget, you can:

- Record a movie, using the VideoRecorder widget
- Take a snapshot, using the ImageRecorder widget
- Record audio, using the AudioRecorder widget
- Stream it to peers, using the simple chat function

As with other widget libraries, you can try it live right now by clicking on this link, and experiment with all of these workflows.

Say you want to perform image processing on the fly, using a camera connected to your computer, and run face recognition, edge detection, or any other fancy algorithm. This is really easy to implement using ipywebrtc: all you need to do is create a CameraStream widget, create an ImageRecorder that takes the camera stream as input, and implement a callback that processes each image (using scikit-image, for example).

Creation of an ImageRecorder taking snapshots of the CameraStream, processing images on the fly with scikit-image
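A minimal sketch of that pipeline. The edge detector below is a hand-rolled, pure-NumPy Sobel filter standing in for your favorite scikit-image routine, so the processing part runs anywhere; the ipywebrtc wiring (shown in comments) is only meaningful inside a notebook:

```python
import numpy as np

def sobel_edges(gray):
    """Gradient-magnitude edge map of a 2-D grayscale array."""
    kx = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # small explicit convolution,
        for j in range(3):      # no SciPy dependency needed
            patch = gray[i:h - 2 + i, j:w - 2 + j]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Notebook wiring (sketch, needs ipywebrtc and a browser):
#   from ipywebrtc import CameraStream, ImageRecorder
#   camera = CameraStream.facing_user(audio=False)
#   recorder = ImageRecorder(stream=camera)
#
#   def process(change):
#       # decode recorder.image.value (PNG bytes) into an array,
#       # run sobel_edges (or any scikit-image filter) on it,
#       # display the result, then request the next snapshot:
#       recorder.recording = True
#
#   recorder.image.observe(process, names='value')
#   recorder.recording = True
```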

Another nice feature of ipywebrtc is the ability to create a MediaStream widget from ANY widget. This means that you can easily record images and videos from your favorite widget library for 2-D or 3-D data visualization (here ipyvolume).

Create a WidgetStream with an ipyvolume widget as input and record a video using the VideoRecorder

Once you have played with these nice features of the library, you can download the videos and images that you created. Alternatively, you can share them directly using the chat function. This function takes a chat-room name and the stream you want to share (by default a CameraStream) as inputs, and lets you turn your Jupyter Notebook into a conference room!

Chatroom created live with ipywebrtc during a presentation at PyParis

You can find the examples used to make those images on GitHub: https://github.com/QuantStack/quantstack-talks/tree/master/2018-11-14-PyParis-widgets/notebooks

About the Author

My name is Martin Renou, and I am a Scientific Software Engineer at QuantStack. Before joining QuantStack, I studied at SUPAERO and worked at Logilab in Paris and Enthought in Cambridge. As an open-source developer at QuantStack, I have worked on a variety of projects, from xsimd and xtensor in C++ to ipyleaflet and ipywebrtc in Python and JavaScript.