BSP (Board Support Package) and device drivers are the heart of every embedded software project.

To start working with sensors or peripherals on your device, you need some sort of device driver to begin with. Depending on your personal taste, coding drivers and designing hardware abstraction layers is a task you either love or hate. Getting the SPI or I2C information flow right, for example, requires delicacy and mastery. Writing code that interacts with hardware means reading through long data sheets and carefully combining byte after byte to make the thing work, and once it finally works, the R&D team will usually avoid any changes to this code.

Writing tests that cover this part is particularly tricky. Automating these tests is even harder, and in most cases developers end up testing their device drivers manually, eliminating the ability to refactor the code later. We’ll discuss both test writing and how to automate these tests in this post (check out these five steps for moving from manual to automated tests).

Why test your drivers? Why automate?

Writing driver code is probably one of the first things you do in an embedded project, right after initializing the processor and UART. There’s typically an exploratory phase where you just want things to work and get some data from the sensor or device. You read the device ID, then move on to receiving measurements or data from the device.

From this point the code typically doesn’t change much. Writing tests and running them automatically doesn’t make sense for code that doesn’t change much. Until it does change. Bugs in these areas of the code are notoriously expensive to fix, for four reasons:

1. Typically, it’s been a while since you last looked at the driver code.
2. Sometimes you didn’t even write the driver yourself.
3. The issues involve sensors/devices that you’re debugging from the outside (using a logic analyzer or an oscilloscope).
4. RTOSes and other software components interfere with driver communication.

All these issues raise the cost of solving a bug. It’s highly recommended to write unit tests and integration tests for these areas of the code. If you have these tests, run them automatically in a Continuous Integration system at least once a day.

When the day comes to rewrite some of the driver code, you’ll have the testing infrastructure in place to verify that the application works properly. Being able to refactor code with confidence is a great advantage. Things like switching to a new MCU SDK, moving to a new version of an RTOS or just refactoring some application code out of the driver code become simple tasks when proper testing is in place. On the other hand, when you lack the ability to refactor your code, you might get stuck on the following:

- New business or sales initiatives.
- Closing security breaches and zero-day vulnerabilities.
- Improving device performance.
- Applying patches for post-production bugs in a timely manner.

OK, I’m on board and would like to test my driver

Let’s start with answering a key question about how to write driver tests.

Where should I run these tests?

It’s possible to run unit tests for embedded code on x86 (or x86_64) machines, which enables easy automation using Continuous Integration systems. That’s typically not the case when testing drivers, because the hardware part of the driver is neither easily mockable nor accurate on x86 (learn how to use Segger for test automation). You can mock the drivers at the Hardware Abstraction Layer (if it was designed and implemented in a way that enables code modularity), just before it uses things like the SPI or I2C module on your device.

Even that would typically not work on its own. Drivers often depend on more than just function calls and return values; you often need things like interrupts to get things to work. Take a look at the image below for an example of the software, firmware and hardware components that are part of the flow of a software interaction with an external peripheral.

That leaves two options for test execution: (1) on the target device or (2) on a hardware simulator.

Running tests on the target device

In order to run on the target device you’ll need the following tools:

Development board with sensors and all, connected to a PC.

Command line interface to flash the test firmware, and read the UART prints from the device (to tell when the test is done or failed).

Testing harness with testing equipment, so you’ll be able to trigger and monitor I/Os (this is tricky when you want to push buttons, or test acceleration, GPS location, temperature and more).

Let’s assume your driver has the following functions: GetDeviceID and GetMeasurement. Your test code should look something like the following:

Your test script will then do the following:

1. Load the firmware to the device.
2. Drive inputs, using the testing harness, so measurements can be pushed into your device.
3. Listen to the UART prints and wait for “Test is done!”
4. Reset the device.
5. Analyze the results from the UART prints.

To run this automatically, you’ll need some sort of a hardware test lab:

- A development board and sensors.
- Signal generators, logic analyzers, etc. for triggering and monitoring I/Os.
- A PC with a testing harness to orchestrate the test setup and test scenario (Segger tools might help here).

On top of that you’ll need to:

- Allocate resources to maintain this setup with all its moving parts.
- Deal with a noisy environment and unreliable hardware that cause test results to vary across runs of the same test and software.
- Log and monitor device traces for post-test analysis.

That’s a lot of stuff to take care of… Take a look at the picture below for an on-target-device testing setup (taken from the Chromecast testing session held at the Google Test Automation Conference — slides and video).

Let’s talk about another option: using simulators/emulators.

Running tests on a simulator

Running automated tests on a simulator has its benefits. A simulator is a virtual device that allows you to test your software in a flexible, clean, hermetic environment. With simulators you eliminate the flakiness of hardware and of a complex test environment, while enjoying a scalable testing solution that allows easy collaboration across teams.

But simulators are not always available, require some effort to support your exact setup and, most importantly, require a paradigm shift: can you really trust an emulator to test your hardware? (But we’re not here to talk about paradigm shifts.)

In order to run on a simulator you’ll need the following tools:

Simulation software that supports your MCU.

Peripherals and sensors mocking framework.

SDK that allows writing test scripts.

Note that without external peripheral models/mocks and a test SDK you won’t be able to automatically test your device drivers as part of a continuous integration process.

One of your options is QEMU. Let’s review the pros and cons of QEMU with regard to MCU testing:

Pros

- Free and open-source project.
- Community development can improve the solution over time.
- Designed to support many CPU architectures.
- Great when it comes to running Linux on many types of CPUs.
- Mature and stable solution.

Cons

- Lacks full models of internal MCU peripherals such as UART, GPIO, NVIC and other critical peripherals.
- Slow run time due to Tiny Code Generator (TCG) translation.
- Neither flexible nor scalable; doesn’t allow easily extending the simulation to a full embedded system.
- Lacks an API/SDK for writing test scripts and running tests (unit, integration and system tests).

Head to the GNU MCU Eclipse QEMU project to learn more, or if you are using the STM32, check out QEMU for STM32. Once you have a model of your MCU with the internal peripherals you are using, you’ll need to mock/model your device’s sensors and build an API/SDK to enable testing your simulated device using test scripts.

So how do you mock/model peripherals in QEMU?

Build a serial interface model to communicate with the simulated peripherals:
- UART: configure the internal UART peripheral to communicate over a serial port.
- I2C and SPI: patch the I2C or SPI peripheral to communicate with your process outside QEMU.
Then model/mock your external peripheral logic.

As GNU MCU Eclipse QEMU is less suitable for test automation, let’s explore our own solution: Jumper Virtual Lab. Jumper Virtual Lab was built for test automation and continuous integration, and comes with:

- Models of multiple ARM Cortex-M based CPUs, such as the STM32 and nRF52 families of MCUs (check out this sample STM32 continuous integration process powered by Jumper Virtual Lab).
- Predefined models for various common external peripherals (flash, accelerometers and environmental sensors).
- A Python SDK that allows you to write test scripts for test automation.

The following block diagram illustrates a Jumper Virtual Lab setup for testing a device driver, with all the relevant components of the system virtualized. Jumper’s Python SDK allows seamless integration with any automated test runner. Open a free Jumper Virtual Lab account to check it out (currently supports the nRF52 and STM32).

The Jumper Virtual Lab also provides a set of APIs to quickly and easily mock external peripherals, so you’ll be able to have a tailor-made virtual device that fits your physical device.

To mock/model peripherals with the Jumper Virtual Lab, follow this tutorial to learn how to mock an SPI device. One of the coolest features of the Jumper Peripheral Modeling Framework is the BSP validation feature. As part of the peripheral modeling, you’ll be required to provide configuration details about the peripheral being modeled: clock polarity and phase, supported frequency and bit order, for example. The simulation will then verify that the firmware configured the MCU SPI peripheral accordingly. This small, delightful feature can save hours and days of debugging time spent trying to figure out why the device is not connecting properly.

What cannot be done with a Simulator

Emulators and simulators are super valuable when it comes to automating the testing process for an embedded system and running many test scenarios. They’re much more flexible than the target device, you can monitor and analyze errors rather easily, and you can run as many of them as you’d like. It’s recommended to use a simulator every time your code changes, as a gatekeeper for your application logic and overall driver quality before going to the real hardware.

But they don’t fully replace testing on the actual hardware. A simulator might come close to reproducing the real-time behavior of the real device, but it’s almost never 100% accurate. You should always verify your driver code on the target device before deploying a version.

Let’s wrap things up

So what should your testing process for device drivers and the HAL look like?

To enjoy code refactoring for device drivers with confidence we think you should combine these three methods:

1. Unit testing of your driver logic, either on a simulator or on a regular x86 machine (AKA host-based testing).
2. Integration testing of your HAL, device driver and peripheral, executed on a simulator.
3. Hardware-in-the-loop testing on the target device, to verify hardware and delicate timing issues.

In the end, it’s all about using the right tool for the job. We know it requires some initial effort to build this infrastructure, but when the time comes to change your code fast and meet market demand, it’s totally worth the effort.

Take a look at the following device driver testing flow as part of your continuous integration process (you’ll want to run the minimum amount of tests on the target device, i.e. the hardware-in-the-loop tests). We recommend running automated unit tests for your device drivers after each git push, integration tests after each build and hardware-in-the-loop tests before version deployment.

Interested in trying Jumper?

If you want to have an easy way for testing your physical device’s embedded software, you’re invited to try Jumper for free.