The Embedded Muse

Issue Number 277, February 2, 2015

Copyright 2015 The Ganssle Group



Editor: Jack Ganssle, jack@ganssle.com

You may redistribute this newsletter for noncommercial purposes. For commercial use contact info@ganssle.com.

Contents

Editor's Notes

Did you know it IS possible to create accurate schedules? Or that most projects consume 50% of the development time in debug and test, and that it's not hard to slash that number drastically? Or that we know how to manage the quantitative relationship between complexity and bugs? Learn this and far more at my Better Firmware Faster class, presented at your facility. See http://www.ganssle.com/onsite.htm.

We had a number of submissions in the contest for a FRDM-KL25Z development board. The winners, picked using neither scientific methods nor discernible methodology, are Bryan Murdock and Claude Galinsky.

Bryan wrote: I am a scoutmaster and, to my surprise, my troop recently got really excited about the new Programming merit badge. We are due to start work on it in a month or so, and it would be really fun to have them do some (simple) embedded software using this. I've written on my blog that no matter how long I'm in this business, getting my code to blink some new system's LEDs always fills me with child-like giddiness. It would be very satisfying to see these young scouts experience the same thing.

Claude was one of several with solutions for cat-induced tribulations:

My automatic cat feeder lacks an important feature. It can only dispense a meal to Poncho the Wonder Cat at intervals of 6, 12 or 24 hours. But if I need to feed Poncho at 6PM CST (Cat Stomach Time) and I want to set the feeder at 2PM or 10AM, the Wonder Cat and I are out of luck.



I'd use the FRDM board to control the feeder's motor, replacing the timer with a more sophisticated scheduler, programmable either via the capacitive slider or over the USB connection from a PC.



It's essential to keep one's Wonder Cat well-fed and happy. Otherwise they tend to shred one's curtains and leather furniture.

It's too late for my sofa, but not too late for the next one.

Quotes and Thoughts

"Science may be the theater, but engineering is the action on the stage." - Henry Petroski



Tools and Tips

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

Michael Vowles suggested this Android app: Thought I would share an invaluable app I have found for Android called EveryCircuit. It offers full-featured circuit simulation, showing voltage and current waveforms, and since a recent update it can also do frequency analysis. You can run it from a Chrome web browser as well, but I have found the performance subpar. They have recently added everyone's favourite IC, the 555 timer, to the selection of components to boot. I don't see it replacing SPICE variants, but for quickly validating smallish circuits it is a great tool.

Google Protocol Buffers

Martin Szinger had some first-hand information on Protobufs: In Issue 276, Jonathan Seidmann asked for feedback on Google Protocol Buffers in an embedded environment. I think I can offer some useful insights.

My company, Carma Systems, makes an automotive telematics device of the sort that's becoming more common these days - it plugs into a vehicle's OBD-II port and communicates via an M2M module to "the cloud", which in our case is a stack of backend services built in Java. We chose Google Protocol Buffers, or Protobufs for short, as the underlying protocol for this device-to-server comm channel. I am the firmware and tools engineer responsible for the device-side implementation.

First off, I should say that the Google-supplied reference code is pretty good out of the box, but you need to build it yourself and it's gcc-based. Our device chip is in the Microchip dsPIC family, and our build chain is hosted on Microsoft Windows, so I ended up doing all my Protobuf support work in Cygwin. This wasn't a major heartache, and the Google code did build cleanly on a fresh Cygwin install. Still, some might be put off by the added steps.

Now, the thing you're building in Cygwin is the Protobuf compiler, the tool that takes your .proto format input files (the message specs) and turns out usable .c and .h files from them. I was not overly fond of the naming convention and presentation of the output - it stood out in an unfavorable way against our in-house coding standard and style. I ended up taking a few days modifying the source of the compiler to generate output that was a much better fit. This was a totally optional step, of course, and it's hard for me to assess whether it was "worth it", except to say that this was three years ago, that we're still using Protobufs as our comm backbone, and that it has felt nice that entire time to see auto-generated code that sits so nicely with all our human-authored stuff.
The other thing you get with Protobuf is a module of C code to be used on the runtime client, which compresses and decompresses protobuf binary on-the-wire messages into those nice autogenerated C structs. This, too, I heavily reformatted to fit our coding standard, not to mention the facts of life on the dsPIC, which is a 16-bit micro with limited RAM (low hundreds of KB).

So here's where we get to what I feel is probably the major concern of using Protobufs: even with generally "simple" protobuf definitions, the decoding process on your client is not, generally speaking, memory friendly. If you read through the code, or better yet walk it through in your debugger, you'll pass through long chains of small memory allocations as it parses its way through a protobuf binary and decodes all the values. The "profile" of this activity, as one might guess, is a very pronounced hit in memory that lasts for a very short duration. If you're on a micro with memory in the hundreds of bytes, it's probably not going to work at all.

On our dsPIC, adding Protobufs was the thing that pushed us into having to write our own dynamic memory manager. This is something that became more-or-less inevitable anyway as the product picked up more and more features over the last three years, so it turned out to be the right choice for us. If your product spec is nicely locked down and you were about to pull off writing the whole thing with no memory manager, adding Protobufs might just ruin your day. On the other hand, if you are running some new-fangled high-power SoC with megs and megs of memory, you'll never be bothered by this. (Under extreme need, you could re-write the decoder to do two passes: one just to assess the total memory needed to hold the final output, and then one to actually do the work. This would allow you to decode a message into a single malloc'd block or even a static memory location. But I don't think this is an undertaking for the faint of heart.
We considered it, and decided the energy would be better spent adding robustness to our memory manager...)

There is a second issue with Protobufs that I think also bears mentioning: there's no such thing as a "wrong" binary message. The whole point of Protobufs is to absolutely minimize message size over the wire, achieved by pre-loading each side with a matching set of message definitions. There is simply no redundancy or error checking in the packed binary format. This means that if you do not zealously ensure that the .proto definitions in the client and server software stay in sync, you are NOT going to crash and burn at the point of message decoding - you're just going to decode into completely legal, yet erroneous, content. That will probably cause you to crash and burn at some point outside of your Protobuf code, which can lead to a lot of head-scratching and swearing. We learned these lessons the hard way, so hopefully they are useful to share.

There's a lot of power in Protobufs and they've served us well. In fact, there's a whole Remote Procedure Call (RPC) mechanism built into the protocol, which we decided not to use, but I can see how it might be really valuable to certain classes of devices.

Jon Titus suggested CAN: Why not try the CAN (Controller Area Network) protocol? Many MCUs include a built-in CAN controller, and CAN libraries make it easy to create code to send data back and forth over long or short distances. As data rates increase, the maximum specified bus length decreases. I'm running a CAN bus between an mbed board and a Cypress PSoC 5LP development board. A couple of TI chips provide the interface between the differential CAN bus and the MCU CAN hardware. CAN has built-in error detection, timing-error detection, CRC, device addressing, and a lot more. Worth a look. Cypress Semiconductor, Microchip Technology, Texas Instruments, NXP and others have MCUs with one or more CAN ports.
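Martin's point about out-of-sync .proto files is visible right at the wire level. The sketch below is my own illustration, not the Google code: a protobuf scalar field is just a key varint of (field_number << 3) | wire_type followed by, for wire type 0, the value as a base-128 varint. Nothing in the bytes names or checksums the field, so a decoder built from a stale .proto will cheerfully assign the value to whichever field currently owns that number.

```c
#include <stddef.h>
#include <stdint.h>

/* Encode v as a protobuf base-128 varint: 7 bits per byte, low-order
   group first, MSB set on every byte except the last. */
size_t varint_encode(uint32_t v, uint8_t *out)
{
    size_t n = 0;
    do {
        uint8_t b = v & 0x7F;
        v >>= 7;
        out[n++] = v ? (uint8_t)(b | 0x80) : b;
    } while (v);
    return n;
}

/* Decode one varint; returns the number of bytes consumed. Any byte
   string ending in an MSB-clear byte decodes "successfully" - there is
   no checksum or type tag that could cause a rejection. */
size_t varint_decode(const uint8_t *in, uint32_t *v)
{
    uint32_t val = 0;
    size_t n = 0;
    unsigned shift = 0;
    do {
        val |= (uint32_t)(in[n] & 0x7F) << shift;
        shift += 7;
    } while (in[n++] & 0x80);
    *v = val;
    return n;
}

/* Build a one-field message: field 2, wire type 0 (varint), value 300.
   If the sender's .proto says field 2 is engine_rpm and a stale
   receiver's says it is fuel_level, both sides process these bytes
   without any error - the mismatch surfaces only later, as bad data. */
size_t build_field2(uint8_t *msg)
{
    size_t n = varint_encode((2u << 3) | 0u, msg);  /* key byte 0x10 */
    n += varint_encode(300u, msg + n);              /* value: 0xAC 0x02 */
    return n;
}
```

Per the protobuf encoding documentation, 300 encodes as the two bytes 0xAC 0x02, and the key for field 2 with wire type 0 is the single byte 0x10 - three bytes total, and no way for the decoder to tell what they mean.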
If someone needs a CAN port but their MCU lacks one, the Microchip MCP2515 stand-alone controller, with an SPI interface to the host, would do the job. I'm just getting started with CAN, and it looks good for many applications. Microchip has an inexpensive ($US 99) CAN bus analysis tool. Other commercial tools--albeit with more sophisticated software--cost thousands.

Luke Peterson wrote: I saw the note from Jonathan Seidmann in the Tools & Tips section today. I used NanoPB and Google Protocol Buffers on two projects: one was my PhD research (a robotic bicycle) and the other was at a company in SF working on an electric two-wheeled self-balancing vehicle. NanoPB works great on the embedded side and is super simple to get set up. On the desktop/laptop side it is pretty easy to read the files and visualize/analyze the data with tools in C++/Java/Python. My basic setup for the robot bicycle was logging time-series data on an embedded system to a microSD card so I could look at that data later. The other project used GPB to encode data before sending it out on our CAN network.

GPB does add a bit of overhead (floats take 6 bytes instead of 4), but for many integers it will effectively compress the data. There are also the CPU cycles taken to encode/decode messages. If network bandwidth and/or CPU cycles are at an absolute premium, I would look into what is used in the auto industry: DBC files. There are some free editors out there, and tools like Matlab have Simulink blocks to read/write these files; they are about as space- and time-efficient as you can hope for. A free DBC file editor is available from Kvaser [Note from Jack: this is an .exe and I have not checked it for threats], and there are paid versions available from Vector that allow fancier functionality. A simple example of how I used NanoPB is here.

Richard Donkin had some advice: Jonathan mentioned the possibility of using Google's Protocol Buffers on embedded devices.
While I have not used them myself, I have two cautions about running them over a serial port. Protocol Buffers describe only a data encoding format, and that is not enough by itself when you are using a raw serial port.

First, you will need a framing protocol (i.e., a delimiting protocol) at the lowest level to determine where one message ends and the next begins. I have found SLIP to be reliable and quite efficient for this (see http://www.ietf.org/rfc/rfc1055.txt).

Second, you will need a mechanism to check for dropped or corrupted bytes, which occur easily if your microcontroller is too late in servicing an interrupt, or if your endpoints' baud rates are not well matched. I typically use a basic 16-bit cyclic redundancy check (CRC16) over the whole packet. If the CRC fails I drop the packet; the sender can re-send it if you cannot tolerate a dropped packet.
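Richard's two-layer recipe can be sketched in C. The SLIP byte stuffing below follows RFC 1055 (END = 0xC0, ESC = 0xDB); the CRC choice (CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF) is my assumption, not his, and a real driver would also need framing state for a byte-at-a-time UART ISR. Treat it as a minimal illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1055 SLIP special characters */
#define SLIP_END     0xC0
#define SLIP_ESC     0xDB
#define SLIP_ESC_END 0xDC
#define SLIP_ESC_ESC 0xDD

/* CRC-16/CCITT-FALSE over a buffer: poly 0x1021, init 0xFFFF. */
static uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Append the CRC to the payload, SLIP-encode the result into out, and
   return the frame length. out must hold at least 2*(len+2)+2 bytes. */
size_t slip_send(const uint8_t *payload, size_t len, uint8_t *out)
{
    uint16_t crc = crc16(payload, len);
    uint8_t  tail[2] = { (uint8_t)(crc >> 8), (uint8_t)crc };
    size_t   n = 0;

    out[n++] = SLIP_END;                  /* flush any line noise */
    for (size_t i = 0; i < len + 2; i++) {
        uint8_t c = (i < len) ? payload[i] : tail[i - len];
        if (c == SLIP_END)      { out[n++] = SLIP_ESC; out[n++] = SLIP_ESC_END; }
        else if (c == SLIP_ESC) { out[n++] = SLIP_ESC; out[n++] = SLIP_ESC_ESC; }
        else                    { out[n++] = c; }
    }
    out[n++] = SLIP_END;
    return n;
}

/* Un-stuff one frame and verify the trailing CRC. Returns the payload
   length, or -1 on a malformed escape or a CRC mismatch. */
int slip_receive(const uint8_t *frame, size_t len, uint8_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        uint8_t c = frame[i];
        if (c == SLIP_END) continue;      /* frame delimiters */
        if (c == SLIP_ESC) {
            if (++i >= len) return -1;
            if      (frame[i] == SLIP_ESC_END) c = SLIP_END;
            else if (frame[i] == SLIP_ESC_ESC) c = SLIP_ESC;
            else return -1;               /* illegal escape sequence */
        }
        out[n++] = c;
    }
    if (n < 2) return -1;
    if (crc16(out, n - 2) != (uint16_t)((out[n - 2] << 8) | out[n - 1]))
        return -1;                        /* drop corrupted packet */
    return (int)(n - 2);
}
```

Because the CRC is computed before stuffing and checked after un-stuffing, the two layers stay independent, which is exactly what makes this combination easy to bolt under a Protobuf (or any other) payload format.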



On Limited-Instruction Processors

Nick P, Security Engineer, responded to last issue's article about one-instruction processors: There is in fact an architecture with only one or two instructions that can do arbitrary computation: LISP processors [1]. Primitive old LISPs were just little memory cells linked together into trees, with an evaluator that walked a tree performing computation on it. A whole interpreter can be described in a page and a half and then implemented in hardware; the EVAL instruction was called to walk the tree. IIRC, it also had a garbage collector built into the memory hardware subsystem.

Another interesting design came from research into software vs. FPGAs vs. ASIC hardware. The No Instruction Set Architecture eliminates instructions and instruction decoding altogether while keeping the parts of the pipeline that do the work. You code in an imperative language (C), the application is compiled into a low-level form, a datapath appropriate to it is generated, and the result is integrated into something you can run on an FPGA. I'd like to see some embedded engineers play with it, or combine it with real-time C technology, to see how it might benefit them.

[1] ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf



More On IDEs

Tim Wescott is not keen on the mbed toolchain I praised in the last Muse: This started out as "a quick comment" about the reaction that happened between your report on the mbed IDE and my prejudices. It turned into a rant. At any rate, here it is:

Some of the worst problems that I've had to solve in the embedded space have involved trying to reconstruct old code, for the purpose of updating or fixing bugs in an existing software base. As such, I've gotten very paranoid about making sure that the entire software build process is under my control (or, by extension, my team's). So I tend to view any IDE that wants to hide any detail of the build process with loathing.

I want a build process that starts with me checking files out of version control, then has me typing "make" at a command line, and ends -- without further input from me -- at finished software. I want this build process to include no windowed or GUI-based software AT ALL (well, except for the window holding the command line). I'm happy to use an IDE like Eclipse or Brief (remember that one?) that runs make in a captive command line -- but that's the highest level of integration that I'll accept for serious work.

Why? Because any other process puts me at the mercy of the IDE vendor. At best, the IDE will remain stable, but I will have a great deal of trouble figuring out which IDE-generated files are "true" source files, which files simply reflect the user's preferences, and which are temporary files that the IDE generates but does not erase when it closes. I'm sure that there are IDEs written to behave well here, but I have yet to get my hands on one that comes with a clear and simple document explaining what to archive and why.

Worse, the IDE will mix essential project-build information into the same configuration file as user preferences.
If someone wants blue 'for' statements and pink integer variables, that's their choice -- but I don't want to have to archive that choice forever in the version control system, and I _really_ don't want to lose whatever mangled and ill-formed excuse for a makefile the IDE needs to build the code again.

Worst of all is an IDE that "improves" things on updates by changing the formats of files, so that you lose all previous work if you want to use the latest and greatest. I understand that this happens with tool chain changes, too -- but it's a lot easier to archive an old tool chain, and an older all-command-line tool chain is a lot more likely to work on the latest version of your OS than some creaky and ancient GUI-based system.

So you can imagine the degree of welcome I want to extend to an IDE that not only hides the build process from me, but that requires me to use a tool chain that is absolutely, positively outside of my control, and that may disappear at any time on some corporate whim.

At the bottom line, an IDE may be a nice toy for a baby software engineer to use as they learn to walk. It may even be a serious tool for a team that needs to churn out some consumer-oriented software base in less than a year and never ever use it again. But for serious software that needs to have a productive lifetime measured in years, the typical tightly integrated IDE is just product death in a pretty wrapper.

Nick P, in addition to his comments above, had this to say about IDEs: The most interesting IDE concept I've discovered is the 001 Toolkit. I started reading an extremely fascinating paper [3] in which the developers independently re-invent much of computer science and eventually develop their "Universal Systems Language." The big claim is that you can use it to model about anything: it semi-automates design, totally automates code generation, automates most testing, can automate integration, supports legacy code, provides full traceability, and so on. Many big claims, setting off my snake-oil alarm. Yet the testimonials are just as big and come from diverse people and applications [4]. The main site is here [5]. So I'm trying to get people with stronger backgrounds in abstract thinking, mathematics, and engineering than mine to review it, to see if USL is the silver bullet it claims to be. If it is, then someone can finally send Fred Brooks the rebuttal many have dreamed of, and new work in the high-assurance embedded field can build on USL. If not, the reviewers will have saved us a lot of time and we can put the effort into something better. Much thanks to anyone who takes up the challenge.

[3] http://htius.com/Articles/r12ham.pdf
[4] http://htius.com/What_Others_Say/testimonials.htm
[5] http://htius.com/

John Canosa had a warning about mbed's IDE, which I covered in Issue 276: Just thought I would give one important caveat to your readers regarding the mbed tools (which I think are great). If you are in business you need to pay very strict attention to the Terms and Conditions on the site:

* You will be responsible for notifying us and other users of the terms upon which you are permitting use of material that you have uploaded; such terms will always be subject to our rights to use the Data under the Agreement.

The Data Usage Policy allows using Technical Data, which is defined as:

* Content which you upload and create in both public and private areas of the Website, including but not limited to source code, object code and documentation.

And the usage of that Technical Data allows (emphasis mine):

* To analyse materials you have uploaded and created for our own purposes and to collect and analyse information on how the Website is being used.

* To research and develop new services.

So essentially you are giving away the keys to any of your source code to ARM. That's a show stopper for many businesses that might want to fully develop proprietary applications and trade secrets using those tools.



Let me know if you’re hiring embedded engineers. No recruiters please, and I reserve the right to edit ads to fit the format and intents of this newsletter. Please keep it to 100 words. There is no charge for job ads.

Joke For The Week

Note: These jokes are archived at www.ganssle.com/jokes.htm.

Charles Kwiatkowski sent this story: This actually happened in my PC repair class. In PC Repair, computer fluency is a must, but there is no prerequisite for the class. So, on the first day of class, we go over the syllabus and practice logging in. After going to the wrong room and complaining to the dean that no one was there, "Melvin" arrived late.

After lecture we headed to the lab to log in and get signed up in Blackboard, our course management system. Walking students through logging in isn't usually a big deal, except for Melvin, who had steam coming out of his ears as he kept trying and failing to log in. To keep the class moving along, I asked the lab assistant, Andy, to help him out.

Andy: What are you using as your username?

Melvin: My K-number. (which is correct)

Andy: What are you using as your password?

Melvin: My student ID (which is correct)

Andy: Are you typing in your password correctly?

Melvin: Yeah... but it keeps putting these darn asterisks in!

Thankfully, Melvin dropped the class after a week.



Advertise With Us

Advertise in The Embedded Muse! Over 23,000 embedded developers get this twice-monthly publication. For more information email us at info@ganssle.com.

About The Embedded Muse