(C) 2009 Hank Wallace
This series of articles concerns embedded systems design and programming, and how to do it with excellence, whether you are new to the discipline or a veteran. This article is about a few myths that won’t die. The experts have learned to ignore the myths and just do their work, but many new engineers and academics are still being suckered.
There are numerous fads that come and go in every industry. Like the fashion industry, the embedded design universe suffers from these on a regular basis. Whether ubiquitous IrDA or $5 Bluetooth, only time helps us decide what has staying power and what does not.
For example, take code portability. Wouldn’t it be great to write a program for one processor and be able to run it on another processor? Sure! What’s required to do that? Well, every OS under which the program runs must present a 100% identical environment to the program, with respect to every feature of the programming language. Does this happen, ever? NO!
Learn this: The only portable program is one that performs no input or output, and executes no instructions.
There! Wasn’t that easy? That’s your recipe for portability. Why doesn’t one of you Knuth-heads out there send me a proof?
Fact is, the only way to prove that program X runs on machines A, B, C, on through Z is to run and test it on those machines.
I recently worked on a Java project. Java was chosen because of portability. Runs everywhere, right? WRONG! There are so many versions of Java and the virtual machine that the program had to be targeted at a small sliver of the environments in use. Totally pathetic! We should have simply written native PC and Mac versions. Java: Not Portable.
If any programming language could have been portable, it’s Java. But no. Effortless, testless code portability is the first myth.
If you want proof that C and C++ are not portable, just download any GNU program. I’ve never seen such gnarly preprocessor code as is written by the open source community. You’ll see layer after layer of #if statements and #defined constants, all to make a simple program run on five different machines. If the language were portable, that would not be necessary.
Need more evidence? Take operator precedence, especially between the arithmetic and bitwise operators. The expression ('A'+i&7) could plausibly be read as (('A' + i) & 7) or as ('A' + (i & 7)). The ANSI standard does settle this, with + binding tighter than &, so the first reading is the correct one. But the precedence of the bitwise operators is so widely misremembered, and I have run into enough marginal compilers over the years, that I refuse to rely on it. To make my programs more portable, I always parenthesize EVERYTHING. Impressing girls with your knowledge of operator precedence is a great thing at parties, but it’s detrimental on the job.
Next, let’s look at data hiding. You certainly don’t want your objects messing about in each other’s data structures. You wrote C programs years ago that adhered to this basic tenet, but that apparently wasn’t good enough for the academics. So you purchased a compiler to protect you from your own lazy impulses to poke a byte into the middle of someone else’s array.
Now you find that the entire OO world is gaga over data hiding, shielding from view the most basic internals of every object, and poorly documenting the stuff that’s exposed. If you use a preexisting framework such as MFC or .NET, you’re screwed. How does that object do its magic? “We’re not going to tell you.” What’s the status of that object? “We’re not going to tell you. We wrote it, and you didn’t, so stop asking!”
Folks, hiding data in one object from another is a useful tool. But hiding data and code from a cooperating programmer is stupid. If I’m going to use someone else’s framework, I need to know what’s happening inside. Why? Because the code generally has bugs or does not work as documented (if documented), and I’d like to at least know why, so I can find a workaround.
There’s also the situation where you have the source code but there’s some information buried in the bowels of a lower level object. Getting at that information without recompiling the entire package is impossible. Procedurally, you could just write a function to fetch the information, but in the OO world, you are locked out. This is a huge waste of time and money, and as a business owner, I declare that to be unacceptable.
Another myth is the holy grail of code reuse. I started programming in BASIC. The term code reuse had not yet been coined. Then Pascal came along. I was concerned about my thousands of lines of Pascal when employers demanded the shift to C. So I translated a lot of the Pascal to C. Then came C++, and I wrapped much of my procedural code in objects, being promised that code reuse would soon materialize. Now all that has been ‘deprecated’ in favor of C#, where you can’t create a string without wearing safety glasses.
The companies that propagate these tools care nary a bit about code reuse. I have written a million lines of code, and all of it works, but I cannot reuse any of it. Soon the C# craze will fade and we’ll be on to C#++ or C++# or whatever, and we’ll be promised that code reuse is the reason.
Engineer, you can reuse every line of code you have written, but ONLY if you resist the fads. No program is portable, but the source you have written and tested can ease the programming tasks on the next project. Reuse that code. Recompile it and test it on the new platform. Don’t believe the hype of tool-vendor-fostered code reuse.
To make this easy, you should have a catalog of hardware and software designs to draw on. Do you have a repository of source code, with functions labeled appropriately? I needed a Dallas clock chip routine last month, and I did a simple search and found something close that I modified in a few minutes for the chip I was using. I have a master archive of all the source available to me, which I have authored, that makes such searches easy. Ditto with hardware, though it’s not so easy to GREP schematics.
Another myth is the benefit of OO design. The principles sure look good, but if you look inside the typical OO program you will find an OO framework loaded with a ton of procedural code. The truth is, people and programmers think in terms of functions, not objects and methods. So generally their programs look like procedural code stuffed into objects.
This results in objects that are not reusable because they are so function-specific and procedural.
Bubble busted. What’s a programmer to do?
Not a myth, but a pearls-before-swine situation, is this: UML. UML is cool, but it’s a language understood only by the geeks. Management sees UML as a cost without tangible benefits; to them it’s Unintelligible Moronic Lingo. When you talk use cases or state machines, your managers do not understand what you are talking about, nor do they care. What they care about is getting the product done on time and under budget.
Talking UML in a status meeting with marketing is like an auto mechanic explaining the guts of a transmission to granny. A decent tool, no doubt, but be careful not to worship it.
Keep these myths in mind, and don’t take them too seriously.
Hank Wallace is the owner of Atlantic Quality Design, Inc., a consulting firm located in Fincastle, Virginia. He has experience in many areas of embedded software and hardware development, and system design. See www.aqdi.com for more information.