Huawei Watch 1 running the pipeline

Consumers today demand simplicity and accuracy: a "plug & play, it simply works!" experience. To meet today's expectations from consumer electronics, Mudra must perform almost flawlessly. Fortunately for us, there are amazing open-source software solutions for building futuristic technology. I'd like to talk about some of these technologies and explain how we got them to work on Android Wear devices with low compute capabilities, without sacrificing latency or performance.

The fun part of developing technology from the ground up is that you control the entire process, from designing the analog sensors, to customizing data acquisition and annotation, all the way up to the final experience. The one constraint is that the software must be performant enough to run on a smartwatch. To this end we have several amazing tools at our disposal.

I’d like to talk about three such tools that I find groundbreaking (especially for IoT):

- TensorFlow 2.0 / TFLite
- Eigen (C++ linear algebra library)
- ctypes

Let’s start with TensorFlow 2.0. Our pipeline includes various stages, some deep-learning based and others not. We use deep learning for calibrating Mudra to a specific user and for accomplishing several tasks at once, in the spirit of HydraNet. The advantage of this approach is twofold: sharing weights between tasks achieves higher accuracy while keeping the computation overhead low. Such models can be built using the advanced model subclassing API.
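To make the idea concrete, here is a minimal sketch of a HydraNet-style multi-task model using the Keras subclassing API. The layer sizes, head names, and input shape are purely illustrative assumptions, not Mudra's actual architecture:

```python
import tensorflow as tf


class MultiTaskModel(tf.keras.Model):
    """One shared trunk, several task-specific heads (HydraNet style).

    All sizes and head names here are hypothetical, for illustration only.
    """

    def __init__(self):
        super().__init__()
        # Shared trunk: its weights are reused by every task head.
        self.trunk = tf.keras.Sequential([
            tf.keras.layers.Conv1D(16, 5, activation="relu"),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(32, activation="relu"),
        ])
        # Task-specific heads, e.g. a gesture classifier and a pressure regressor.
        self.gesture_head = tf.keras.layers.Dense(4, activation="softmax")
        self.pressure_head = tf.keras.layers.Dense(1)

    def call(self, x):
        features = self.trunk(x)  # computed once, shared by both heads
        return {
            "gesture": self.gesture_head(features),
            "pressure": self.pressure_head(features),
        }


model = MultiTaskModel()
# A dummy batch: 2 windows of 128 samples over 3 sensor channels.
out = model(tf.zeros([2, 128, 3]))
```

Because the trunk runs once per input window regardless of how many heads hang off it, adding a task costs only one small head, which is exactly what keeps the computation overhead low on-device.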

TensorFlow 2.0 works very well with TFLite, which works like magic on smartwatches. It also runs very quickly on some microcontrollers we’ve tested, including the STM32H743. Our models are blazingly fast even on a single-core ARM Cortex-M7, a modern, low-power microcontroller with great TFLite support. This matters because ever since the Apple Watch Series 3 introduced standalone cellular connectivity, smartwatches have been expected to do more on their own, while their compute capabilities remain constrained. The ability to deploy a model and run the same ops without having to code them yourself gives startups the flexibility to ship cutting-edge models quickly and iterate on them.
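The conversion path from a trained Keras model to a TFLite flatbuffer is short. The toy model below is a stand-in (not one of our production models), just to show the converter workflow:

```python
import tensorflow as tf

# A stand-in model with a fixed input shape so the converter can trace it.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(128, 3)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Default optimizations shrink the model (weight quantization), which helps
# on memory-constrained targets such as Cortex-M microcontrollers.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # raw bytes, ready to ship to the device

# Sanity-check the flatbuffer with the Python interpreter before deploying.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

The same flatbuffer runs unchanged through the TFLite runtime on Android Wear or through TFLite Micro on a microcontroller, which is what makes the "write once, run the same ops everywhere" workflow possible.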

Second, we utilize the NEON DSP/SIMD core for various operations in C++, including our custom wavelet decomposition. Wavelets are a great method for analyzing non-stationary processes; they usually outperform other methods when the signal has a very specific form or “basis” (for example, radiology images and aerial photography, not only bio-potentials…). To get this decomposition right, we love the open-source linear algebra library Eigen. It has a great API and lets you write highly readable code that runs across multiple architectures, including ARM. This makes it a great candidate for many smartwatches (many of which are based on ARMv7/Cortex-M7 cores). One can use this library as a powerful pre/post-processing tool.
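Our production decomposition lives in C++ on top of Eigen, but the core idea of a wavelet step is easy to show. Here is one level of a standard Haar decomposition sketched in NumPy (an illustrative stand-in, not our actual filter bank):

```python
import numpy as np


def haar_step(signal):
    """One level of a Haar wavelet decomposition.

    Splits an even-length signal into a low-frequency approximation
    (the smooth trend) and high-frequency detail coefficients (local
    change). Deeper levels apply the same step to the approximation.
    """
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2.0)  # pairwise averages, scaled
    detail = (even - odd) / np.sqrt(2.0)  # pairwise differences, scaled
    return approx, detail


x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = haar_step(x)
```

Each step halves the signal length, so the transform is linear-time overall; in C++ the same pairwise sums and differences vectorize naturally, which is why it maps well onto NEON.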

Third, we really love to simulate our algorithms’ accuracy. Anyone who works on such simulations, especially in data science projects, knows how tightly coupled they are with Python, which has become the go-to language for simulation and visualization. But what do you do when you rely on high-performance, low-latency C++ code for an embedded application? A great tool for saving time without having to maintain two separate code bases (simulation vs. embedded) is ctypes.

ctypes is an amazing tool. You write your code once in C++, wrap it into a .dll or .so library, and access that library from Python. ctypes gives you the best of both worlds: high-performance compiled code, plus the ability to plot visualizations, create custom analysis tools, and read various database files quickly and easily. I find it much easier than alternatives such as Boost.Python, since no C++ wrappers are necessary.
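The pattern is the same whether the shared library is your own embedded build or a system library. For a self-contained sketch, the example below loads the system C math library instead of a project-specific .so (which is an assumption made here purely so the snippet runs anywhere):

```python
import ctypes
import ctypes.util

# In a real project this would be your own .so/.dll compiled from the
# embedded C++ sources; here we load the system C math library instead.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals arguments and results correctly.
# Without this, ctypes would truncate the double return value to an int.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

result = libm.cos(0.0)  # calls straight into the compiled C implementation
```

The only per-function cost is declaring `argtypes`/`restype`; there is no wrapper code to compile, which is what makes ctypes lighter than Boost.Python for keeping one C++ code base shared between the device and the simulation environment.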

The above tools are used by many people in the tech space, but at Wearable Devices we use all of them, quite extensively, including their advanced features. Since each hardware manufacturing iteration is costly, we collect our data in-house and carefully simulate every aspect of performance and accuracy that might affect the user. Luckily, we can rely on some wonderful open-source tools to drive us towards success.