One of the most exciting strengths of React Native is that it lets you bridge native calls to JavaScript. In contrast to other frameworks, which pre-define what can be executed on the native side, React Native is easily extensible.

With great power comes great responsibility

The React Native community is hard to compare with others. It includes a lot of web developers who love that they can build a native app without first learning iOS or Android development (count me in here). It also includes native developers who love React Native for its fast development cycles but are relatively new to JavaScript.

This has led to a couple of problems with React Native packages containing native code. Often they are iOS- or Android-only, forcing users to pick multiple libraries to do the same job. Another problem is the API of native modules. Some are great, but others simply expose the primitives React Native provides, e.g. the NativeEventEmitter, without offering a pleasant developer interface.

When I wanted to write a little demo using the gyroscope sensor of my device, I stumbled over both of these problems. As there were already libraries out there doing the job on one platform each, I decided to combine them. Sprinkle a little RxJS on it and boom: react-native-sensors was born.

What does it provide?

react-native-sensors provides an RxJS-based interface for the accelerometer, gyroscope, and magnetometer. Its API is consistent across the different sensor types and platforms. Is it easy to use? You will see:
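The original embedded gist is not shown here; below is a minimal sketch of the flow the article describes (a constructor that returns a promise resolving to an RxJS observable). Component and option names are illustrative, not copied from the original example:

```javascript
// Minimal sketch, not the article's original gist. Assumes the
// promise-based react-native-sensors API described in the text:
// `new Accelerometer()` resolves with an RxJS observable of readings.
import React, { Component } from "react";
import { Text, View } from "react-native";
import { Accelerometer } from "react-native-sensors";

class AccelerometerDemo extends Component {
  constructor(props) {
    super(props);
    this.state = { x: 0, y: 0, z: 0 };
  }

  componentDidMount() {
    new Accelerometer()
      .then(observable => {
        // Push every reading into component state.
        this.subscription = observable.subscribe(({ x, y, z }) =>
          this.setState({ x, y, z })
        );
      })
      .catch(() => console.log("Accelerometer not available"));
  }

  componentWillUnmount() {
    if (this.subscription) this.subscription.unsubscribe();
  }

  render() {
    const { x, y, z } = this.state;
    return (
      <View>
        <Text>x: {x}</Text>
        <Text>y: {y}</Text>
        <Text>z: {z}</Text>
      </View>
    );
  }
}
```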

Let’s go through the important parts:

Line 20: Create a new accelerometer. This returns a promise, which resolves if the sensor is available. We subscribe to the observable and set the state.

Line 33: In the render method we simply read the state to display the raw sensor data, which looks like this:
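The original output screenshot is missing here. Roughly, each reading is a plain object; the exact field set below (including `timestamp`) is my assumption based on the payload the library emits:

```javascript
// Roughly what a single accelerometer reading looks like
// (field set assumed, not taken from the original screenshot):
const reading = {
  x: 0.0331,   // acceleration along the x axis
  y: -0.0147,  // acceleration along the y axis
  z: -0.9895,  // gravity dominates the z axis when the phone lies flat
  timestamp: 1514196009536, // milliseconds since epoch
};

console.log(Object.keys(reading)); // → ["x", "y", "z", "timestamp"]
```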

What are the possibilities?

The project I wanted to build when I first developed this library was fairly simple: I wanted to use the gyroscope to let the user interact with an image that is too wide to be displayed on the screen in its entirety. So instead of using gestures to pan left or right, I wanted to tilt my phone and see the other parts of the image.

As you can see, the example is quite similar. To calculate how far to move the image, I simply add up the sensor values for the y coordinate, so that if you leave your phone tilted to the left, you keep moving further left in the image.
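The accumulation step can be sketched as a pure function (the function name and sample values are mine, not from the library or the original gist):

```javascript
// Sketch of the accumulation idea: sum up the gyroscope's y readings,
// so a sustained tilt keeps moving the image in that direction.
function accumulateOffset(currentOffset, reading) {
  return currentOffset + reading.y;
}

// A phone held tilted to the left keeps emitting negative y values,
// so the offset drifts further and further left over time:
let offset = 0;
[{ y: -0.2 }, { y: -0.2 }, { y: -0.1 }].forEach(reading => {
  offset = accumulateOffset(offset, reading);
});
console.log(offset); // stays negative while the tilt is held
```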

In the render function, I use translateX to position the image in the middle of the screen and then add the calculated offset to determine where the sensor data should move us. Dividing the sensor value by 10 makes the movement smoother; you can play with this value to see how it affects the behavior.
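The positioning math can be sketched as follows; the function name, screen width, and image width are illustrative assumptions:

```javascript
// Sketch of the translateX calculation: center the image horizontally,
// then shift it by the accumulated sensor offset. Dividing by 10 damps
// the movement so it feels smoother.
function imageTranslateX(screenWidth, imageWidth, accumulatedY) {
  const centered = (screenWidth - imageWidth) / 2; // negative: image is wider
  return centered + accumulatedY / 10;
}

console.log(imageTranslateX(375, 1200, 0));  // → -412.5 (image centered)
console.log(imageTranslateX(375, 1200, 50)); // → -407.5 (panned slightly)
```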

The only change I made when constructing the sensor is a higher update rate, to keep the motion fluid even during rapid movements. Here you can see what the end result looks like:
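A sketch of that construction, assuming the `updateInterval` constructor option of the promise-based API described above (the option name and the value of 50 ms are my assumptions):

```javascript
// Sketch: a shorter updateInterval (in milliseconds) means more readings
// per second, which makes fast motions feel more fluid.
import { Gyroscope } from "react-native-sensors";

new Gyroscope({ updateInterval: 50 }).then(observable =>
  observable.subscribe(({ y }) => {
    // feed y into the accumulated offset here
  })
);
```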

This was way too hard to capture….

Wow, working with sensor data seems easy, where can I start?

You can go and check out react-native-sensors on GitHub; it's all open source and ready to use. The examples I have shown can be found in the examples folder. If you need real-world examples, we maintain a list of open-source projects using this library, so go ahead and see how they do it.

So go ahead and check out our new website: react-native-sensors.github.io

How can I help?

Do you like the project and want to make it even better? There are plenty of ways you can help: