You may redistribute this newsletter for noncommercial purposes. For commercial use contact info@ganssle.com. To subscribe or unsubscribe go to http://www.ganssle.com/tem-subunsub.html or drop Jack an email.

Did you know it IS possible to create accurate schedules? Or that most projects consume 50% of the development time in debug and test, and that it’s not hard to slash that number drastically? Or that we know how to manage the quantitative relationship between complexity and bugs? Learn this and far more at my Better Firmware Faster class, presented at your facility. See http://www.ganssle.com/onsite.htm .

Not really a quote, but I was struck by Toyota's recent $1.2 billion payout. With something like 1 million lines of code involved, that's around $1200/line, probably the most expensive software ever written. I wrote about it here, and one commenter sagely said, "If you think that the cost of doing aircraft maintenance and certification is high, just look at the costs of dealing with the accident that happens if you don't."

It doesn't compare analog-wise to a real o'scope (60 MHz filters on the front end -- and they're not very sharp ones at that)... but... I suppose that's when you call in Jake and Bruno.

And since the current apps run through a browser, maybe a touchscreen Android tablet will make a better interface than usual for PC o'scopes! And since it uses Linux for the grunt stuff... I'm making mine WiFi :)

Hopefully we get a SUMP or other kind of logic-analyzer plugin soon (3.3V, so I might need to build some level shifters)... but since it's all open, if you want a new tool (e.g., a scintillation detector), you can build it ( https://www.kickstarter.com/projects/652945597/red-pitaya-open-instruments-for-everyone/posts/741013 )...

I've only had it for a short while, but... it's definitely got the beef in the guts for amazing things:
- 2 ch 14-bit 125 MS/s inputs
- 2 ch 14-bit 125 MS/s outputs
- 16 FPGA I/Os for digital
- 4 input and 4 output low-speed analog (12-bit, 100 KS/s)

Thor Johnson has been playing with a Red Pitaya. This is a scope, logic analyzer and function generator on a single board. Their website is terrible and it's hard to find out much, but a press release last week from Elektor claims it will cost "under $500." Thor's comments:

A thought on being inclusive: add gEDA to the list of development tools. For Macintosh folks, install Fink and through Fink install gEDA. The result is a very sophisticated, free PCB editor with features rivaling the high-end products you have to pay many hard-earned coins for. gEDA also runs on Windows and many Linux platforms.

My article on debouncing contacts gets downloaded several thousand times per month. Trent Cleghorn did a nice implementation of the algorithms in C and C++, which is available here.

Please submit clever ideas or thoughts about tools, techniques and resources you love or hate. Here are the tool reviews submitted in the past.

All of your requested features are great! You might want to consider adding one additional feature: some way, or a better way, to tell the compiler "hands off" this section of code and never change the current optimization for this section.

I've been programming since the early '80s and I still love assembly language for one simple but very important reason. What you see is what you get! No tricks required. I could always estimate the timing AND once optimized I knew it would stay that way. There was no compiler black magic changing things simply because I made a change in some other section of code.

One statement that really struck home: "If things are too slow, we can only make nearly random changes to the source and try again, hoping we're tricking the tool into generating better code."

It is impossible to say YES loud enough to the article “A Plea to Compiler Vendors” in Muse #257.

* -- Yes, gentle reader, YOUR problem is much, MUCH larger than can be dismissed with a simple hand-wave like that. But since "complicated" is a relative metric, you'd better keep reading anyway so that the guys solving those lesser problems can still benefit from your benevolent wisdom. Should be an easy read, since I'm just restating stuff YOU already know.

And I reject your notion that we have "0% accuracy" today. If a developer can't take an ASM dump of an AVR program's hot spots and Fermi-estimate the approximate cycle cost, then they need to review those sections of the datasheet that they thought were too hard to understand. Their lack of inquisitiveness and desire for insight is risky for all of us.

But those smaller chips also generally get used to do smaller, more fully-defined work (*). I can totally see the value in having tools that help tabulate the code's expected behavior, but ... should the demand for a point-and-grunt to replace our need to really understand those well-defined works be viewed as a solution? I worry that, more often than not, it's really more of a recognition that we want to get too far away from our code to understand it OR the problem we're using it to solve.

I intentionally took the high road in my objection; I know that smaller chips like AVRs and the like are much more predictable.

If I wrote an application in assembly this wouldn't be too hard. Why can’t the tools be as smart as we are?

The data would be imperfect. But I’d kill for 80% accuracy instead of the 0% we have today.

But most embedded systems are small, using PICs, Atmels, Cortex-M parts, etc., which don't have these fancy features. In most of those cases the toolchains are integrated -- e.g., IAR and others provide the whole shooting match, from compiler to linker. They do preserve a lot of line-by-line info in the debugging file, so some sort of mapping from source to object is possible. Sure, optimizations mess things up, but there's still a ton of useful info because of C sequence points.

I agree that the issues are hard, and in some cases impossible. With cache there's no predicting anything, and many safety-critical systems prohibit the use of it because there's no guarantee things will run in a timely manner. And, yes, the tools would have to know everything about the execution environment, like wait states, clocks, etc. Load MPLAB or a similar tool and you have to tell the debugger exactly which variant of a chip you're using. Why not do the same at compile time, and add in the other required parameters?

Function-level timing is meaningful, because C requires most of a function's cleanup work to finish when the function returns. But per-line metrics aren't possible when a good toolchain is doing its job: optimization dissolves the boundaries between source lines.

I get what the author was asking for when he demanded metrics from his compiler. But a more productive request would be for compiler directives that prohibit certain classes of operations, like floating point, so that the developer can't do something undesirable without getting a warning or error message. For the rest, he's just going to have to dig into the final assembly language himself if he wants the goriest details.

All of the above is legal in C, because the rules are carefully written to care immensely about the final behavior of the code you write, but to not care at all about how the underlying machine implements your code.

Memory fetches are a great example. In a statement like "x = x + 1", the fastest assembly code won't try to increment x right after reading it. Instead, it'll do something else for an instruction or two, so that the CPU core doesn't stall waiting for the value of x to arrive. In most cases, these one or two additional instructions are leftover work from the previous C statement, or setup work for the next one. Anything's fair game, as long as it doesn't involve memory or anything related to x.

Finally, many of the above optimizations actually reflow the opcodes associated with the original C code, to avoid "interlocks" that make sure a piece of data is present in the CPU core before it's operated on. The original "lines" of C code therefore no longer exist. This is a good thing, because avoiding interlocks is one of the things you have to do in order to get efficient throughput from most modern CPUs, including perfectly ordinary ARM cores.

That number isn't all that useful, however, because in most toolchains the compiler hands the code off to the assembler and several layers of optimization. Those successive processing stages are likely to transform the compiler's original output significantly, so the compiler's original estimate doesn't matter anymore.

First, the compiler would have to know some pretty intimate details about the execution environment to estimate per-line wall-clock timing. In addition to knowing CPU instruction cycle counts, it'd have to know the input clock speed, memory bandwidth, cache hit/miss rates, and so on. Practically speaking, the compiler could tell you only a tally of instruction cycles based on the instructions it emitted.

This isn't as easy as it sounds, and I wouldn't trust the compiler's story even if it could tell it.

"Give us a compiler listing file that tags each line of code with expected execution times. Is that floating point multiply going to take a microsecond or a thousand times that? Tell me!"

A number of people responded to my call for compilers that give timing information. Bill Gatliff and I had a dialog about this. He wrote: