
This is more of a blog post, but here goes:

If you write in C, you can compile your code and run it on your desktop. This won't test the low-level hardware drivers, but can test all the logic code.

Recommendations on this approach:

- If you're on ARM, you can use the same compiler (gcc) for both purposes, so any compiler-specific extensions will continue to work. But compiler-specific extensions are undesirable anyway, so you might compile the desktop version with Clang/LLVM instead (with the benefit of better error messages).
- My desktop is a Linux PC. Nothing in this approach is Linux-specific, but Linux sure makes a nice C development environment.
- Use `stdint.h` and types like `uint64_t` instead of `unsigned long int`. This gets you the same behavior when the code is compiled on different systems. But beware of C's integer promotion rules.
- If possible (it is on ARM), use a 32-bit PC to test 32-bit ARM code.
- Regarding hardware drivers, I've found two approaches to be beneficial:
  - Make a live prototype on the PC. The PC has a real-time clock, a network connection, a display, and inputs, so those are taken care of. (My PC even has an accelerometer.) You can use libSDL to animate a picture of your final product and receive keypresses. For the other functions of your board, connect up development boards, or fake it. The advantage of this approach is that you can use it in the live system and save yourself the trouble of making hardware if you find it doesn't solve the problem you need solved.
  - Make a dead prototype that reads input events from a file and writes output events to a file. For testing, you can then record (or synthesize) events that correspond to specific scenarios you want to test, and verify that the right things are written to the output. The advantage of this is that it leaves you with a full automated test suite you can run at build time to catch regressions.
- Use the `-Wall -Wextra -Werror` compiler options and keep your code warning-free. You'll spend some time patching warnings, but it makes the coding go faster by reducing debugging time.
- Compile the PC version with mudflap. This catches a lot of pointer mischief.
- Some folks recommend Valgrind, but it's not as useful for code that never uses `malloc()` or `free()`.
- For PC debugging, use GDB or DDD or `printf()`, to taste. There is a large variety of mature and useful desktop debugging tools available.

Even if I don't do the full PC debug setup, I will often include a unit-testing `main()` function at the end of a file that is `#define`'d out. This `main()` tries any tricky functions in the file and returns 0 for all-pass, or fails an `assert()` otherwise. Then I add a "test" target in the makefile that compiles and runs each individual file's tests.

There are many unit-testing frameworks out there that you can use. There are so many because they are very easy to write. I don't use them. I don't want a pretty report of "percent of tests passed"; I want my test to die at the first failure. Visual Basic used to have a laughable feature called `On Error Resume Next`. I don't know why you'd want that feature for unit tests.

If you do standalone unit-testing this way, you will find that you have very little need for on-chip debugging. In addition, you will find porting to different hardware platforms is a lot easier because the core logic and hardware drivers are well separated.