
Today we’re building a camera app with live preview saturation and brightness filters. It’s like, Instagram but not. There’s no “Take Pic” button. You have to take a screenshot. But filters!

You can turn it into an Instagram with these helpful WebGL shaders that @stoffern built. Oh yes, we’re using WebGL to manipulate camera output.

In this lesson you will learn:

how to use the camera with react-native-camera

how to do WebGL with gl-react

rudimentary gesture detection with PanResponder

We’re building the app in five steps. You can see its finished source on Github.

Wanna build a new app every two weeks? Subscribe by email.

Step 0: Prep work

After running $ react-native init LiveInstagram, we need to do some prep. I think this is going to be the standard approach going forward … maybe I should make a boilerplate.

But boilerplates are a trap. For the maintainer.

Here’s what you do:

1. Install Shoutem UI toolkit with $ react-native install @shoutem/ui. We’ll make light use of it in this lesson, but I’ve gotten used to having it available 🙂

2. Clean out index.ios.js. When you’re done, it should look like this:

```js
// index.ios.js
import React, { Component } from 'react';
import {
    AppRegistry,
} from 'react-native';

import App from './src/App';

AppRegistry.registerComponent('LiveInstagram', () => App);
```

You can do the same with index.android.js, if you prefer to work on Android. I focus on iOS because I don’t have an Android device.

3. Create a src/ directory for all our code.

4. Make an App.js file with our base component. Something like this for now:

```js
// src/App.js
import React, { Component } from 'react';
import { Screen } from '@shoutem/ui';

export default class App extends Component {
    render() {
        return <Screen />;
    }
}
```

The benefit of moving all our code into src/App and gutting index.*.js is that it’s easier to share code between both platforms. Our LiveInstagram app doesn’t follow UI conventions of either platform, so we might as well use the same code for both.

Step 1: Camera app

We start with a basic camera app. It asks for camera permissions and shows a live preview of what your camera sees.

Thanks to @lwansbrough’s great work on react-native-camera, this part is simple. After last week’s Firebase fiasco, I walked into this project expecting the worst.

Camera stuff was the easiest.

1. Install with $ react-native install react-native-camera

2. Set up asking for permissions inside Info.plist.

./ios/LiveInstagram/Info.plist

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    ...
    <key>NSCameraUsageDescription</key>
    <string>We need your beautiful camera</string>
    <key>NSPhotoLibraryUsageDescription</key>
    <string>We're only saving to temp, promise</string>
</dict>
</plist>
```

We need permission to use the camera and permission to save photos to the library. We don’t actually save any photos, so asking for library permissions sucks, but we need it to access temp.

react-native-camera used to allow taking pictures straight to memory, but that’s been deprecated. I don’t know why.

3. Render the camera view in App

```js
// src/App.js
import Camera from 'react-native-camera';

export default class App extends Component {
    render() {
        return (
            <Screen>
                <Camera style={{flex: 1}}
                        ref={cam => this.camera = cam}
                        aspect={Camera.constants.Aspect.fill}>
                </Camera>
            </Screen>
        );
    }
}
```

We add <Camera> to our main render method. flex: 1 makes it fill the screen, and the fill aspect ratio ensures it looks nice. We’ll use this.camera later to access the camera instance and capture photos.

Run the app on your phone with XCode and you’ll see what your phone can see. I love how easy that was. Thanks @lwansbrough.

Step 2: Render a WebGL test component

Now that we have a camera, we need to set up the WebGL component. It’s going to overlay the camera view with a GPU-rendered view with changed brightness and saturation.

We’re using gl-react v3 alpha by @gre and I have no idea how it works. I know it exposes the WebGL API, but it runs on native code, so it can’t actually be WebGL.

I’m happy the abstraction is so good that I don’t need to know how stuff works underneath ☺️

1. We have to install gl-react and gl-react-native. Make sure you get the next version, because the old 2.x doesn’t work well. v3 is a complete rewrite.

```
$ react-native install gl-react@next
$ react-native install gl-react-native@next
```

2. Our WebGL Surface needs to know how big it is, so we have to keep track of width and height ourselves. We can do that with an onLayout callback on our Screen component and some local state.

```js
// src/App.js
export default class App extends Component {
    state = {
        width: null,
        height: null
    }

    onLayout = (event) => {
        const { width, height } = event.nativeEvent.layout;

        this.setState({ width, height });
    }

    render() {
        const { width, height } = this.state;

        if (width && height) {
            return (
                <Screen onLayout={this.onLayout}>
                    // ...
                </Screen>
            );
        } else {
            return (
                <Screen onLayout={this.onLayout} />
            );
        }
    }
}
```

onLayout is called every time our app re-lays out. We use the callback to read the new width and height, and save them to this.state.

3. We render the WebGL Surface inside the Camera component. This lets us overlay the Camera view. We’re rendering opaque things on top, so the Camera itself ends up invisible, which is wasteful, but it works.

```js
// src/App.js
import { Surface } from "gl-react-native";

import Saturate from './Saturate';

export default class App extends Component {
    // ...

    render() {
        // ...
        const filter = {
            contrast: 1,
            saturation: 1,
            brightness: 1
        }

        // ...
            <Camera style={{flex: 1}}
                    ref={cam => this.camera = cam}
                    aspect={Camera.constants.Aspect.fill}>

                <Surface style={{ width, height }}>
                    <Saturate {...filter}>
                        {{ uri: "https://i.imgur.com/uTP9Xfr.jpg" }}
                    </Saturate>
                </Surface>

            </Camera>
        // ...
    }
}
```

You can think of Surface as a canvas component. It’s a region where you render WebGL nodes. Every Surface needs at least one Node child.

Saturate is a Node component. It renders an image and adjusts its contrast, saturation, and brightness.

4. The code for Saturate comes from a gl-react example because I don’t understand GL shader code enough to write my own.

```js
// src/Saturate.js
//@flow
import React, { Component } from "react";
import { Shaders, Node, GLSL } from "gl-react";

const shaders = Shaders.create({
    Saturate: {
        frag: GLSL`
precision highp float;
varying vec2 uv;
uniform sampler2D t;
uniform float contrast, saturation, brightness;
const vec3 L = vec3(0.2125, 0.7154, 0.0721);
void main() {
    vec4 c = texture2D(t, uv);
    vec3 brt = c.rgb * brightness;
    gl_FragColor = vec4(mix(
        vec3(0.5),
        mix(vec3(dot(brt, L)), brt, saturation),
        contrast
    ), c.a);
}
`
    }
});

const Saturate = ({ contrast, saturation, brightness, children }) => (
    <Node shader={shaders.Saturate}
          uniforms={{ contrast, saturation, brightness, t: children }} />
);

export default Saturate;
```

Here’s what I do understand: For gl-react we define shaders statically. Each comes as a GLSL language blob and goes into a big Shaders dictionary.

I don’t understand the GLSL language yet. I know main() is called on every pixel, I know this code runs on the GPU, and I know that it looks a lot like C code. It also looks like it’s based on vector mathematics and matrix composition.

We’re setting our base image as a texture, I think. texture2D sure implies that much.
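To convince myself I understood the shader, I translated its per-pixel math into plain JavaScript. This is just an illustration of the same logic, not code the app uses:

```javascript
// Plain-JS sketch of what the Saturate shader does to a single pixel.
// Colors are [r, g, b] arrays in the 0..1 range. L is the same
// luminance weighting the GLSL blob uses.
const L = [0.2125, 0.7154, 0.0721];

const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
// GLSL's mix(a, b, t) is linear interpolation between a and b
const mix = (a, b, t) => a.map((v, i) => v + (b[i] - v) * t);

function saturate(rgb, { contrast, saturation, brightness }) {
    const brt = rgb.map(v => v * brightness);             // scale brightness first
    const grey = dot(brt, L);                             // the pixel's luminance
    const sat = mix([grey, grey, grey], brt, saturation); // blend grey <-> full color
    return mix([0.5, 0.5, 0.5], sat, contrast);           // blend mid-grey <-> result
}
```

With all three values at 1, a pixel passes through unchanged; with saturation at 0, it collapses to its luminance grey.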

The Saturate component itself renders a Node, which takes a shader and a set of uniforms. Uniforms are the shader’s inputs: values passed in from JavaScript that stay constant for every pixel in a draw. In our case, they’re the arguments for the GLSL code.

Step 3: Feed camera data into WebGL view

Great, we have a WebGL-rendered static image with funky saturation, and a live camera view hiding underneath. If that sounds bad performance-wise, wait till you see what happens next.

We take a photo every 5 milliseconds, save it to a temp location, and update the WebGL view.

I came up with 5ms experimentally. It looks smooth and, with some tweaks to image quality, keeps the app from crashing for almost a whole minute. Yes, the app crashes. Yes, it’s bad. No, I don’t know [yet] how to fix it. Maybe someone will tell me in the comments.

1. We start a timer in onLayout because we want to be sure this.camera exists. We’re rendering Camera only after we have width/height, remember.

```js
// src/App.js
class App {
    // ...

    onLayout = (event) => {
        // ...
        this.start();
    }

    refreshPic = () => {
        // pic taking
    }

    start() {
        this.timer = setInterval(() => this.refreshPic(), 5);
    }

    componentWillUnmount() {
        clearInterval(this.timer);
    }
```

onLayout calls start, which sets an interval running every 5 milliseconds. We make sure to stop the interval in componentWillUnmount.

2. Thanks to @lwansbrough, taking a pic is as easy as rendering a camera preview was.

```js
// src/App.js
class App {
    // ...

    refreshPic = () => {
        this.camera
            .capture({
                target: Camera.constants.CaptureTarget.temp,
                jpegQuality: 70
            })
            .then(data => this.setState({ path: data.path }))
            .catch(err => console.error(err));
    }
```

We call .capture on this.camera, telling it to use 70% jpeg quality and to save images to a temporary location. Saving is necessary because direct-to-memory went away in a recent version of react-native-camera. This is the part that requires library access permissions on iOS.

When the promise resolves, we get a path to the local image, and save it to local state. If there’s an error, we cry.
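One wire-up the earlier snippets don’t show: the Saturate child is still pointing at the static imgur URL. Once path is in local state (we’ll put it there properly in step 5), the Surface should read it instead. A sketch, assuming path defaults to the imgur URL so there’s something to show before the first capture resolves:

```js
// src/App.js (sketch) -- feed the latest capture into the filter
const { width, height, path } = this.state;

<Surface style={{ width, height }}>
    <Saturate {...filter}>
        {{ uri: path }}
    </Saturate>
</Surface>
```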

3. Our camera preview is invisible. Turning it off would save resources and might even convince our app not to crash. But I couldn’t figure out how to do that. Reducing output quality is the next best thing.

Add captureQuality to <Camera> in render. Like this:

```js
// src/App.js
class App {
    // ...

    <Camera style={{flex: 1}}
            ref={cam => this.camera = cam}
            captureQuality={Camera.constants.CaptureQuality["720p"]}
            aspect={Camera.constants.Aspect.fill}>
```

You can play with values in the filter constant to see how they affect the image.

Step 4: Basic gesture recognition

React Native comes with a built-in class for detecting user gestures: PanResponder. I don’t know why it’s called that.

We can use it to detect which direction a user is dragging on the screen. The API it offers is pretty low-level so there’s some manual work involved in figuring out the user’s intentions.

The basics go like this:

1. Start a PanResponder instance in componentWillMount .

```js
// src/App.js
class App {
    // ...

    componentWillMount() {
        this._panResponder = PanResponder.create({
            onMoveShouldSetResponderCapture: () => true,
            onMoveShouldSetPanResponderCapture: () => true,

            onPanResponderGrant: (e, {x0, y0}) => {
                // start gesture
            },

            onPanResponderMove: (e, {dx, dy}) => {
                // gesture progress
            },

            onPanResponderRelease: (ev, {vx, vy}) => {
                // gesture complete
            }
        });
    }
```

We create a new PanResponder instance and assign it to this._panResponder . To make it work, we define a bunch of callbacks.

onPanResponderGrant is called when a gesture begins; we’ll use it to initialize our tracking code. onPanResponderMove is where we track movement and update saturation/brightness. onPanResponderRelease is where we could give the user some “done” feedback, but we won’t need it.

2. We attach our pan responder to the main app view like this:

```js
// src/App.js
class App {
    // ...

    render() {
        const { width, height } = this.state;

        if (width && height) {
            return (
                <Screen onLayout={this.onLayout}
                        {...this._panResponder.panHandlers}>
                // ...
        } else {
            return (
                <Screen onLayout={this.onLayout} />
            );
        }
    }
}
```

It’s that {...this._panResponder.panHandlers} part. I expect you can add gesture recognition like this to any React Native element. Neat, huh?

Step 5: Control saturation/brightness through gestures

Time to wire everything up. We’re going to move filter into local state and manipulate it in our pan handler.

1. We move filter into local state like this

```js
// src/App.js
class App {
    state = {
        width: null,
        height: null,
        path: "https://i.imgur.com/uTP9Xfr.jpg",
        contrast: 1,
        brightness: 1,
        saturation: 1
    }

    // ...

    render() {
        const { width, height, brightness, contrast, saturation } = this.state;

        const filter = {
            brightness,
            contrast,
            saturation
        }

        // ...
    }
}
```

That part was quick. Instead of using magic values in const filter, we take them out of this.state.

2. We initialize our starting state when the gesture begins. We’ll use D3 linear scales to help us with the maths of “how much did the user drag and what does that mean for brightness”.

Run $ react-native install d3-scale. No need for the whole D3 library; the scales will do.

You can think of a linear scale as a linear function from middle school maths. It maps values from a domain to values in a range.
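If scaleLinear is new to you, here’s roughly what one does under the hood. This is a hand-rolled illustration, not d3’s actual implementation:

```javascript
// Roughly what a d3 linear scale does: map [d0, d1] onto [r0, r1] linearly.
function linearScale([d0, d1], [r0, r1]) {
    return x => r0 + ((x - d0) / (d1 - d0)) * (r1 - r0);
}

// Map pixel positions 0..300 onto our -1..1 filter range:
const scale = linearScale([0, 300], [-1, 1]);

scale(0);   // -1, the left edge
scale(150); //  0, the middle
scale(300); //  1, the right edge
```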

```js
// src/App.js
import { scaleLinear } from 'd3-scale';

class App {
    // ...

    dragScaleX = scaleLinear()
    dragScaleY = scaleLinear()

    componentWillMount() {
        // ...
            onPanResponderGrant: (e, {x0, y0}) => {
                const { width, height } = this.state;

                this.dragScaleX
                    .domain([-x0, width - x0])
                    .range([-1, 1]);
                this.dragScaleY
                    .domain([-y0, height - y0])
                    .range([1, -1]);
            },
        // ...
        });
    }
```

When a gesture starts, we initialize dragScaleX and dragScaleY with a new domain and range. The range goes from -1 to 1 because we add the scaled value to the default of 1. That gives us 0 to 2, a full range for saturation and brightness.

The domain is trickier. We want the whole screen to represent the full range: all the way up, brightness is at 2; all the way down, it’s 0; the middle is 1.

But we only get drag distance inside onPanResponderMove, so we have to set our domains based on the start position. If we start at x0, we reach the left edge of the screen when we drag in the negative direction by a whole x0. That’s -x0. In the other direction, we have width-x0 of space before hitting the edge.

That makes our horizontal domain [-x0, width-x0]. Similarly for the vertical axis: [-y0, height-y0].
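To make the maths concrete, here’s a worked example with made-up numbers: a 400-point-wide screen and a gesture starting at x0 = 100. The mapping below is the same one scaleLinear performs given that domain and range:

```javascript
// Worked example: screen is 400 points wide, gesture starts at x0 = 100.
const width = 400;
const x0 = 100;

// Dragging left by x0 hits the left edge; dragging right by
// width - x0 hits the right edge. Hence the domain [-x0, width - x0].
const domain = [-x0, width - x0]; // [-100, 300]

// The same linear mapping scaleLinear would apply onto [-1, 1]:
const toRange = dx => -1 + ((dx - domain[0]) / (domain[1] - domain[0])) * 2;

toRange(-100); // -1   -> saturation becomes 1 + (-1) = 0 at the left edge
toRange(0);    // -0.5 -> zero drag maps to the start position, not the center
toRange(300);  //  1   -> saturation becomes 2 at the right edge
```

Notice that zero drag only maps to 0 when the gesture starts in the middle of the screen; the scale is anchored to the finger’s start position.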

3. We use these scales to change brightness and saturation in onPanResponderMove . Like this:

```js
// src/App.js
import { scaleLinear } from 'd3-scale';

class App {
    // ...

    componentWillMount() {
        // ...
            onPanResponderMove: (e, {dx, dy}) => {
                this.setState({
                    saturation: 1 + this.dragScaleX(dx),
                    brightness: 1 + this.dragScaleY(dy)
                });
            },
        // ...
        });
    }
```

We take the drag distance on both axes, translate it into the [-1, 1] range using the scales, and update local state with this.setState.

This triggers a re-render, updates the WebGL view, and shows the user an updated preview. Fast enough to feel responsive.
