Embedded System Design Library: Interrupts

(C) 2009 Hank Wallace

This series of articles concerns embedded systems design and programming, and how to do it with excellence, whether you are new to the discipline or a veteran. This article is about the proper use of interrupts and how to write and debug interrupt driven code.

Sometimes I read articles in engineering journals and newsgroups warning of the dangers of interrupts, some eschewing use of interrupts altogether. These make me feel like an Interrupts Anonymous inductee: “Hi. My name is Hank. I use interrupts.”

Yes, it is true. Since age 18, I have been a heavy interrupt user, folding much program functionality into large, intricate interrupt routines without regard for the addictive side effects and potential withdrawal symptoms. Disregarding the cautions of other recovering programmers, I attacked each program with an aim to utilize as much of the processor’s resources as I could, to make the problem solution as efficient as possible, and that led me down the dark road to interrupt dependence.

Have you been warned about interrupts? Too difficult to write correctly. Hard to debug. Full of strange side effects. They make processor loading a function of external stimulus. Tough to emulate. Well, hold your nose, because that’s a bunch of hooey!

Interrupts are a great resource available on almost all processors used in embedded systems. They are not to be feared; don't listen to the fainthearted. To be competitive, you must be bold and use all the tools at your disposal, including interrupts!

Why Interrupts?

Just what do interrupts do for us? First, they permit seemingly parallel execution of multiple programs, processes, or tasks. As an extreme example, I wrote a multitasking kernel several years back for the 64180 CPU. It is time sliced and driven by a timer interrupt. Each program I have written using the kernel runs at the behest of a timer interrupt; it could be said that the entire program runs out of an interrupt service routine (ISR). Externally, I perceive the operation of the tasks to be concurrent, and this turns out to be an acceptable programming model for events and actions which are several times longer than the time slice duration. That is, the timer interrupt provides practical concurrency, which makes my programming job easier. To add another major product function, I just add another task.
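
To make the model concrete, here is a minimal sketch of the idea in C. A true time-slicing kernel preempts tasks by saving and restoring register context, which takes assembly; this simplified run-to-completion variant, with hypothetical names throughout, conveys the flavor: a timer ISR advances a tick counter, and the main loop gives every task one pass per tick.

    #include <stdint.h>

    typedef void (*task_fn)(void);

    static void task_blink(void)  { /* toggle an LED, for example */ }
    static void task_serial(void) { /* parse buffered serial data */ }

    static task_fn task_table[] = { task_blink, task_serial };
    #define NUM_TASKS (sizeof task_table / sizeof task_table[0])

    static volatile uint32_t ticks;

    void timer_isr(void)                /* hooked to the timer vector */
    {
        ticks++;                        /* the ISR only advances time */
    }

    int main(void)
    {
        uint32_t last = 0;

        /* timer and interrupt setup would go here */
        for (;;) {
            if (ticks != last) {        /* a new time slice has begun */
                last = ticks;
                for (unsigned i = 0; i < NUM_TASKS; i++)
                    task_table[i]();    /* each task must return quickly */
            }
        }
    }

With preemptive context switching in place of the tick test, tasks could block and be resumed mid-execution, which is what a true time-slicing kernel does.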

The second benefit of interrupts is that they allow peripherals (on- or off-chip) to request service when needed, without wasting CPU cycles on polling. Many small programs can be written to use a main loop that polls peripherals such as serial ports or A/D converters for data on a continuous basis. However, when the mix of asynchronous and synchronous, periodic and aperiodic devices becomes more complex, the solution is easier to think about if each peripheral can simply request service when needed. This allows the background process to run without being a slave to I/O devices. And if there are enough CPU cycles to do the work in polled mode, there will almost always be enough to do the work using interrupts, else you budgeted the CPU time too tightly.

Related to the second benefit, most embedded system programs have hard real-time deadlines to meet regarding the collection of data. But once the data is collected, the timing constraints are usually relaxed. For example, a program receiving commands through a serial port must service the port in a timely manner in order not to miss any bytes of the command, and this is done by an interrupt routine. But once the entire command is received, the system typically has some time to perform the action and provide a response, and that is done by the background task.
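
Here is a sketch of that split in C, assuming a hypothetical UART whose received byte appears in a data register called UDR. The ISR meets the hard deadline by grabbing each byte before an overrun can occur; the background task pulls bytes from the buffer and parses the command at its leisure.

    #include <stdint.h>

    extern volatile uint8_t UDR;        /* hypothetical UART data register */

    #define BUF_SIZE 64                 /* power of two for cheap wrapping */

    static volatile uint8_t rx_buf[BUF_SIZE];
    static volatile uint8_t rx_head, rx_tail;

    void uart_rx_isr(void)              /* runs on every received byte */
    {
        uint8_t next = (uint8_t)((rx_head + 1) & (BUF_SIZE - 1));
        if (next != rx_tail) {          /* drop the byte if buffer is full */
            rx_buf[rx_head] = UDR;
            rx_head = next;
        }
    }

    int uart_getchar(void)              /* called by the background task */
    {
        int c = -1;                     /* -1 means no byte is waiting */
        if (rx_tail != rx_head) {
            c = rx_buf[rx_tail];
            rx_tail = (uint8_t)((rx_tail + 1) & (BUF_SIZE - 1));
        }
        return c;
    }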

These two benefits, concurrency and service on demand, are valuable and should not be ignored.

Is it possible to write complex systems without using interrupts? Sure! But who wants to, besides Windows programmers? When I buy a processor with several internal peripherals, each with interrupt capability, what is the sense in writing a Super Loop program and counting machine cycles and T-states to ensure each peripheral will be serviced adequately? And when the program is expanded by someone else, is that programmer going to take the same care to ensure all deadlines will be met? Not a chance.

Here’s the beef: For all but the most trivial programs and programmers, interrupts actually make your job *easier*. But like any tool, you have to know how to use it without losing any limbs.

A Universe of Possibilities

Surveying the programs I have written over the years, I find that they all use interrupts, except for those written for processors without interrupt capability, like some Microchip PIC series processors. I have even used 4-bit processors with excellent interrupt handling capability.

When I write a program, one of the first things I do is dissect the problem, identifying parts which can be simplified and handled transparently by processor interrupt hardware and interrupt routines. Just about every program that does something more than electronic dice can benefit from the intelligent use of interrupts.

Let’s look at some examples of interrupt routines, great and small, and see what the possibilities are.

There is a myth that interrupt routines must be small to be efficient. This is hogwash. The assumption behind that rule is that small routines execute more quickly, which is only sometimes the case. The real goal is to make the routines fast, regardless of size.

As an example, I have an ISR in a communications product which performs DTMF and pulse dialing, and DTMF detection. The main program depends on a VLSI modem device to perform DTMF filtering and detection, but the modem does not time, or rather debounce, the received tones, leaving that to software. Thus, a timer-driven interrupt routine reads the modem, qualifies the tone detection bits, and loads decoded tones into a buffer for the background task. Things are complicated by the fact that some tones are required to have longer detect times than others to prevent falsing of important single-tone-activated functions.

The DTMF and pulse dialing functions are performed as would be expected, by counting timer interrupts in certain patterns.

This interrupt routine is about 600 lines long. Six hundred lines? How is it possible to execute a 600 line interrupt routine every 20 milliseconds on an eight bit processor and have any time left over to fold the laundry?

The secret is the use of state machines. If the actions taken by the ISR are rendered in state machine form, it is easy to see that only a small fraction of the ISR (one state) executes upon any invocation. When performing DTMF dialing, for example, the timer interrupt invokes the ISR, the ISR checks to see what the current DTMF tone length count is, makes a decision as to whether to turn the tone off, and returns. A tone is typically five to ten timer interrupts long. If the tone needs to be turned on or off, the ISR does that and changes the state control variable so that the next invocation will execute the next step in the process. It is simple and fast.
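
Here is what that looks like in C, reduced to a sketch with hypothetical names; dtmf_tone_on() and friends are stand-ins for whatever your hardware requires. Only one arm of the switch executes per timer interrupt, so even a very long ISR costs little on any single invocation.

    #define TONE_TICKS 5                /* five to ten interrupts per tone */
    #define GAP_TICKS  5                /* inter-digit silence */

    extern void dtmf_tone_on(void);     /* hypothetical hardware calls */
    extern void dtmf_tone_off(void);
    extern int  next_digit(void);       /* advances to next tone; 0 when done */

    enum dial_state { DIAL_IDLE, DIAL_TONE_ON, DIAL_TONE_OFF };

    static enum dial_state state = DIAL_IDLE;
    static unsigned count;              /* ticks remaining in this state */

    void dial_timer_isr(void)
    {
        switch (state) {                /* one state per invocation */
        case DIAL_IDLE:
            break;                      /* nothing to do; return at once */

        case DIAL_TONE_ON:
            if (--count == 0) {         /* tone duration elapsed? */
                dtmf_tone_off();
                count = GAP_TICKS;
                state = DIAL_TONE_OFF;
            }
            break;

        case DIAL_TONE_OFF:
            if (--count == 0) {
                if (next_digit()) {     /* more digits to dial? */
                    dtmf_tone_on();
                    count = TONE_TICKS;
                    state = DIAL_TONE_ON;
                } else {
                    state = DIAL_IDLE;
                }
            }
            break;
        }
    }

The background task starts a dial by loading the digit buffer, turning on the first tone, and setting the state to DIAL_TONE_ON; from then on, the ISR runs the entire sequence by itself.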

In this way, a large interrupt routine can be written to execute complex algorithms in a state driven manner while still running fast and using little CPU time, adding much functionality while removing a time-critical responsibility from the background task.

Judging Interrupt Latency

Interrupt latency is an important issue. This is a measure of the time between activation of the interrupt request by a peripheral and servicing of the request by the processor. Many factors affect latency, including the length of atomic instructions in the instruction set of the CPU, the particular method a CPU employs to switch contexts (such as automatic register saves), clock frequency, and whether interrupts are ever disabled by the background task, and for how long.

Analyzing a program to compute maximum latency is conceptually easy but practically difficult because of all the possibilities involved. One would have to account for CPU speed, the maximum time interrupts are delayed during instruction execution, the interrupt calling mechanism delays, the receipt of other interrupts, use of instruction caches, and the placement of interrupt disable instructions in the code.

An easier method for embedded systems programmers to judge latency is direct measurement. A dual trace oscilloscope is a fine tool for this. An analog scope shows the relative incidence of the various latency times through the natural intensity modulation of its trace; brighter traces indicate more frequent latency values. A digital scope has a better chance of capturing infrequent peak latency waveforms which may be missed on an analog scope.

The basic idea is to measure the time between the active edge of the interrupt waveform or event and a pulse produced by the ISR near its start. A spare output bit is required to signal the start of the ISR. Triggering on the start of the interrupt waveform, the delay to the ISR’s “I’m here” pulse gives a good indication of the latency.
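
In C, the instrumented ISR amounts to a couple of lines, sketched here with a hypothetical output port register named PORT_OUT:

    #include <stdint.h>

    extern volatile uint8_t PORT_OUT;   /* hypothetical output port register */
    #define ISR_ACTIVE 0x01             /* spare bit wired to the scope */

    void serial_isr(void)
    {
        PORT_OUT |= ISR_ACTIVE;         /* rising edge: "I'm here" */

        /* ...the actual service work goes here... */

        PORT_OUT &= (uint8_t)~ISR_ACTIVE;   /* falling edge: ISR done */
    }

The delay from the interrupt’s active edge to the rising edge of this pulse is the latency. As a bonus, the pulse width is the ISR’s execution time, which the next two sections put to use.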

However, it is the peak latency which is of prime importance, not the average. An analog scope will show you something akin to the average latency. A digital scope can generally be set to store and display multiple waveforms until it is reset, and this can reveal the peak latency, if the code is exercised to the fullest extent possible.

But what latency value is a good one? That depends on the source of the interrupts. If the CPU is servicing a FIFO-less asynchronous serial port, there is generally a one or two character time delay permissible to read a character before an overrun occurs. At 9600-N-8-1, each character is ten bits (one start, eight data, one stop), so one character time is 10/9600 seconds, or 1.042ms. Your ISR must be invoked, read and buffer the character, adjust buffer pointers, and release the CPU within 1.042ms, preferably much less if you want the CPU to be able to process the characters! Knowing the latency of your CPU, program, and interrupt system will allow you to estimate the maximum throughput of the program, because the latency is a relatively constant penalty that is paid at each interrupt.

Judging Interrupt Routine Cycle Consumption

Likewise, it is good to know how long an ISR takes to execute. Computing this directly from the source or object code is theoretically possible, but for those of us without analysis tools and top-of-the-line emulators, a more down-to-earth approach is appropriate. It is important to know the maximum execution time of an ISR because other interrupts are sometimes disabled for its duration, increasing the latency of those other interrupts.

An interesting side effect of using an “ISR active” bit is that it can be used to measure the percentage of CPU cycles consumed by the ISR in an analog fashion. Filtering the waveform with a simple RC circuit allows use of a DC voltmeter or scope to measure the signal. The ISR’s CPU cycle consumption is then the measured voltage divided by the maximum voltage of the output bit, typically 3.3 or 5 volts. For example, a reading of 0.4 volts on a 5 volt output bit means the ISR is consuming about 8% of the CPU. Even a simple analog meter movement can give a ballpark indication of CPU usage.

Interrupt Gotchas

Programming interrupts in high level languages is rather common these days, but there are some potential traps you should be aware of.

The first trap is that C/C++ compilers will have some functions in their libraries which are not reentrant. So if you call a routine from the main code path, and an interrupt occurs which calls the same routine, there will be issues if the routine uses static data storage. I’ve seen this with many compilers, and not just with named library functions. One compiler performed 8-bit and 16-bit math with full reentrancy, but 32-bit math was not reentrant, causing all manner of grief. You don’t know until you test the library which functions are not reentrant. I had to scrap one compiler entirely because of this problem.
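
The hazard is easy to reproduce with your own code as well. Here is a contrived sketch, with hypothetical names, of a conversion routine that keeps its result in static storage and is called from both the main path and an ISR:

    extern void display(const char *s);     /* hypothetical output routines */
    extern void log_string(const char *s);

    static char conv_buf[12];               /* shared static storage: the trap */

    static char *itoa_static(int n)
    {
        char *p = conv_buf + sizeof conv_buf - 1;
        int neg = n < 0;

        *p = '\0';
        do {                                /* emit digits right to left */
            *--p = (char)('0' + (neg ? -(n % 10) : n % 10));
            n /= 10;
        } while (n != 0);
        if (neg)
            *--p = '-';
        return p;                           /* points into the shared buffer */
    }

    void some_isr(void)
    {
        log_string(itoa_static(42));        /* clobbers a conversion in flight */
    }

    int main(void)
    {
        for (;;) {
            char *s = itoa_static(1234);
            /* if some_isr() fires here, s now reads "42", not "1234" */
            display(s);
        }
    }

The cures are to make such routines reentrant (for example, by taking a caller-supplied buffer), to disable interrupts around the shared call, or to avoid calling anything non-reentrant from an ISR in the first place.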

You should also code for all interrupt vectors on the processor. Point every vector either at a working routine or at a dummy routine that records the occurrence of an interrupt that likely should not have happened. Exception vectors should always be trapped and handled appropriately.
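
A sketch of the idea, again with hypothetical names; the vector table format varies from part to part:

    #include <stdint.h>

    static volatile uint16_t spurious_count;    /* evidence for the debugger */

    void unexpected_isr(void)                   /* target of every unused vector */
    {
        spurious_count++;
    }

    extern void timer_isr(void);
    extern void uart_rx_isr(void);

    /* Hypothetical vector table fragment; real placement is part-specific. */
    void (* const vectors[])(void) = {
        timer_isr,          /* in use */
        uart_rx_isr,        /* in use */
        unexpected_isr,     /* unused, but trapped */
        unexpected_isr,     /* unused, but trapped */
    };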

Spurning Interrupt Phobia

I have known several good engineers who had a burning desire to learn assembly language programming. Each asked me how to get started. The only answer is: Get started! There is no way to write assembly language programs without generating very subtle program bugs which take hours of detailed study of the processor’s guts to find and kill. After all, the best emulator is sitting between your ears.

The same idea is true of interrupts. If you know assembly language, but shy away from interrupts because of the potential problems, you must embrace and conquer those problems to be competitive. After you spend several late nights in the lab with your prototype connected to an emulator and scope or logic analyzer, like a patient in intensive care, you will find the problems, fill the gaps in your understanding and have one more powerful tool for creating simple solutions to complex problems.

Author Biography

Hank Wallace is the owner of Atlantic Quality Design, Inc., a consulting firm located in Fincastle, Virginia. He has experience in many areas of embedded software and hardware development, and system design. See www.aqdi.com for more information.