This article originally appeared in Linear Audio, a book-format audio magazine published half-yearly by Jan Didden.

I hate articles titled “Ten … myths debunked.” I would have to start by listing a round number of clumsily worded claims by the non-feedback camp who probably never said any such thing, and juxtapose some simplified school-book explanations to put them right. And after shooting, flaying and roasting alive my straw men and generally hammering home that feedback doesn't work like that, I should then fail to explain why not. This would leave an excellent status quo where everyone has had their say and truths remain somewhere in the middle.

I want to do better than that. It's no proof of intelligence to open a debate by pointing out the supposed “extremes” of the opinion spectrum and then taking some imagined middle ground. When person A says that 2+2=5 and person B says that 2+2=6, the most reasonable position to take is not five and a half. 2+2=5.5 can hardly be called moderate. In fact it is a very, very extreme claim. The author's reputation for strong opinions notwithstanding, you will find no such extreme claims in this article.

What is negative feedback (NFB)? “Feedback is an arrangement where an amplifier is made to respond to its own output signal in addition to the wanted input signal and any unwanted disturbances. When the response to an unwanted disturbance is smaller with feedback applied than without it, we call it negative feedback.”

Outside of audio, the subject is covered by an extremely comprehensive discipline called control theory, which deals with negative feedback around just about any process to keep it more, or even at all, stable. At the peril of stating the obvious, here's the simplified example of a feedback loop as one might find it in an audio amplifier:

Figure 1: Simplified feedback loop diagram.

In Figure 1, s is the complex frequency 2πjf. B(s) is usually a constant, namely the attenuation of the feedback network written as a gain of 1 or less. Unfortunately we can't make A(s) frequency independent. Any physical system has inertia. Ignoring this will make the system overreact catastrophically. We need to mitigate the initial response to an error; otherwise we'll treat the inertia as an error as well and try to fight it. On the other hand we don't need to under-respond in the long run. The simplest workable control function is an integrator.

The second summing node adds unwanted signals like distortion, represented as ε. This representation is correct whether or not the error is signal-dependent by the way. If ε is signal dependent we can extend the model to reflect this, but that's not necessary to get a basic understanding of the issue. For now we'll treat ε as if it were a completely independent signal and come back to it later. Working our way through from back to front we can write the equation for y as:

y = ε + A(s) • (x + y • B(s))

Solving for y:

y = 1/(1 - A(s) • B(s)) • ε + A(s)/(1 - A(s) • B(s)) • x

The output signal has two contributors, the input signal and the errors, and each is scaled by a different factor, which we can call the Signal Transfer Function and the Error Transfer Function respectively:

STF(s) = A(s)/(1 - A(s) • B(s))
ETF(s) = 1/(1 - A(s) • B(s))

y = ETF(s) • ε + STF(s) • x

The siren song of negative feedback is that maximising A makes the ETF approach zero and makes the STF approach -1/B(s). The quantity A(s) • B(s) is called “loop gain.” This is the more precise term for what we loosely call “the amount of negative feedback.” Let's take a hypothetical audio amplifier with exactly one integrator (see Box 1) and plot the various functions and values in Figure 2.
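As a numerical sanity check, here is a short Python sketch (not from the article; the 1MHz unity-loop-gain frequency is an assumed example value) of this single-integrator loop, with A(s) written as an inverting integrator so that the STF settles at -1/B as described:

```python
import numpy as np

WU = 2 * np.pi * 1e6              # assumed: unity loop gain at 1 MHz

def loop(f, wu=WU):
    """Return (ETF, STF) of the single-integrator loop at frequency f."""
    s = 2j * np.pi * f            # s = 2*pi*j*f, as in the text
    A = -wu / s                   # inverting integrator A(s)
    B = 1.0                       # feedback network: plain unity gain
    etf = 1 / (1 - A * B)         # error transfer function
    stf = A / (1 - A * B)         # signal transfer function
    return etf, stf

for f in (100.0, 1e3, 10e3):
    etf, stf = loop(f)
    print(f"{f:8.0f} Hz   |ETF| = {abs(etf):.2e}   |STF| = {abs(stf):.5f}")
```

The printout shows the first-order behaviour of Figure 2: the residual error scales directly with frequency (one decade up, ten times more error), while the closed-loop gain stays pinned at 1/B.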

Figure 2: Relationships between gain and loop functions in a feedback loop.

Higher-order loops

The 20dB/decade gain slope becomes quite limiting if you want really good audio performance. Suppose you want to achieve 0.001% THD at 20kHz and your power stage's open-loop distortion is a third harmonic at 0.3%. The obvious thing to do would be to have 300 times = 50dB of loop gain at 3×20kHz = 60kHz. A unity gain turnover point (more or less the closed-loop bandwidth) of 18MHz would do. The challenge would be futile only if the plan was to succeed.
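The arithmetic is quick to verify (numbers as in the text):

```python
import math

# 0.3% open-loop third harmonic must come down to 0.001% at 20 kHz
needed = 0.3 / 0.001                  # required loop gain: 300x
needed_db = 20 * math.log10(needed)   # about 50 dB
# with a 20 dB/decade slope, loop gain reaches unity at 300 * 60 kHz
f_unity = needed * 3 * 20e3           # Hz
print(f"{needed:.0f}x = {needed_db:.1f} dB at 60 kHz -> unity at {f_unity/1e6:.0f} MHz")
```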

Linearizing the output stage first looks more promising. What is sometimes called linearizing is really the use of some form of local feedback.

Figure 3: “Linearizing” the output stage.

This (Figure 3) is called a “nested loop.” For simplicity's sake I'm presuming that we're making an amplifier with a closed-loop gain of 1 and that the two control blocks are perfect integrators – that is, their frequency response is a 20dB/decade slope that continues all the way to DC. Once you get the hang of it you can insert more accurate transfer functions.

So A/s and B/s are integrators, the constant factors A and B being 2π times their respective gain-bandwidth products. The unity gain frequency of the global loop would be set a good bit lower than the closed-loop bandwidth of the local loop. In this particular example this simply means A ≪ B.

We can write the system as:

y = ε + B/s • (y + A/s • (x + y))

Solving for y:

y = s²/(s² - B•s - A•B) • ε + A•B/(s² - B•s - A•B) • x    (1)

Without giving it much thought we've just constructed a feedback loop with second-order behaviour (Figure 4). For low frequencies (remember, s=2πjf), what remains of the error is now directly proportional to the square of frequency. For every halving of frequency, distortion drops by a factor of four.

Figure 4: Loop functions with the added “linearizing” of the output stage.

A normal three-stage amplifier with local feedback around the power stage would be a suitable implementation. Note that ETF(s) shows only what happens to the distortion contribution of the power stage. Errors outside the local loop are corrected only by A/s and errors outside the global loop are not corrected at all.
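A quick numerical sketch of the second-order behaviour (assumed unity-gain frequencies; signs written for explicitly subtracting summing nodes, where the article folds the necessary inversion into the blocks themselves):

```python
import numpy as np

A = 2 * np.pi * 100e3    # global integrator, 100 kHz unity gain (assumed)
B = 2 * np.pi * 1e6      # local integrator, 1 MHz unity gain (assumed)

def etf_nested(f):
    """ETF of the nested loop: s^2 / (s^2 + B*s + A*B)."""
    s = 2j * np.pi * f
    return s**2 / (s**2 + B * s + A * B)

# every halving of frequency cuts the residual error by a factor of four
for f in (400.0, 200.0, 100.0):
    print(f"{f:6.0f} Hz   |ETF| = {abs(etf_nested(f)):.3e}")
```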

Another method of constructing a higher-order loop is cascading several stages. There's an old rumour about amplifiers constructed of several stages to make a lot of gain that were then compensated to death so as to make the loop stable again. The necessary compensation capacitor, they say, is then so large that you have no slew rate left.

It’s subtler than that. Doing it like that just makes a perfectly normal first-order loop response and normal slew rate, albeit with spectacularly improved gain at 0.01Hz if fancy took you there. It’s just another way of ineffectively building something so impressively complicated that having gotten it to work at all makes you feel all warm and fuzzy inside.

Getting improved loop gain from a cascade of integrators is done like this (Figure 5):

Figure 5: Cascading integrators to improve loop gain.

Each intermediate result is fed forward to a summing node before the error source. There is no local loop around the output stage. This is one global loop.

Algebraically the loop does this:

y = ε + B/s • (x + y) + A/s • B/s • (x + y)

Solving for y:

y = s²/(s² - B•s - A•B) • ε + (B•s + A•B)/(s² - B•s - A•B) • x

Compare this with the result for the nested loop (eq. 1). The signal transfer function is different but the error transfer function is exactly the same. From the viewpoint of distortion, there is no difference whatsoever between one macho feedback loop and local feedback with a simpler global loop around it.

Designers who propose to use “mostly local feedback and only a little global feedback” are labouring under an illusion. It makes no difference. Whether you choose to use a nested loop or global feedback depends on other practicalities but has no bearing at all on actual audio performance.
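This equivalence is easy to demonstrate numerically. The sketch below (assumed component values, subtracting-node sign convention) solves each topology from its own block diagram: the nested version closes the local loop around the power stage first, while the cascaded version feeds each integrator's output forward into one global summation. Their error transfer functions come out identical at every frequency:

```python
import numpy as np

A = 2 * np.pi * 100e3    # global / second integrator (assumed)
B = 2 * np.pi * 1e6      # local / first integrator (assumed)

def etf_nested(f):
    s = 2j * np.pi * f
    # inner loop closed first: y = s/(s+B)*eps + B/(s+B)*v,  v = -(A/s)*y
    inner_err, inner_gain = s / (s + B), B / (s + B)
    return inner_err / (1 + inner_gain * A / s)

def etf_cascaded(f):
    s = 2j * np.pi * f
    # one global loop; the feedforward sums the B/s and (A/s)*(B/s) paths
    return 1 / (1 + B / s + (A / s) * (B / s))

for f in (20.0, 1e3, 20e3):
    n, c = etf_nested(f), etf_cascaded(f)
    print(f"{f:8.0f} Hz   nested {abs(n):.6e}   cascaded {abs(c):.6e}")
```

Both reduce to s²/(s² + B•s + A•B): same error shaping, different wiring.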

I should add that this cascaded, or “state variable” form doesn't lend itself to adaptation to the three-stage amplifier circuit. A second-order loop that does is the C-R-C “T” compensation network. Let C be twice the value of the compensation capacitor you're replacing and make R (referred to ground) as low as you can without affecting stability.

An alternative distortion reduction scheme is called “error correction.” What it does is measure the difference between wanted output (i.e., the input) and actual output and then subtract this value from the input (Figure 6).

Figure 6: Error correction scheme.

Intuitively this should be ideal but we have to keep in mind that the power stage isn't infinitely fast. For simplicity the power stage is represented by nothing but an adder and an error source, but in reality the correction signal applied to the input does not immediately result in a change of the output. The lack of response shows up in the measured error and would result in an even stronger correction signal. To prevent this, the error correcting loop has a low-pass filter inserted with a corner frequency of B/(2π).

Rewriting the diagram algebraically yields:

y = ε + A/s • (x + y) - B/(s + B) • ε

(the difference the correction circuit measures between the power stage's corrected input and its actual output is exactly ε, which comes back low-pass filtered by B/(s + B) and is subtracted again).

Solving for y in turn tells us that:

y = s²/((s - A) • (s + B)) • ε + A/(s - A) • x

The signal transfer function now only depends on A and the error transfer function is a bit different in form, but again we get a second-order function with only s² in the numerator. Error correction is not fundamentally different from normal feedback. It adds one order of loop gain in the same way as local feedback does.
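The same numerical exercise works here (assumed values, subtracting-node convention): the corrected power stage passes its error on as ε•s/(s+B), because the measured error is exactly ε, low-pass filtered and subtracted, and the global integrator A/s then closes the loop around it.

```python
import numpy as np

A = 2 * np.pi * 100e3    # global integrator unity gain (assumed)
B = 2 * np.pi * 1e6      # 2*pi times the correction filter corner (assumed)

def etf_ec(f):
    """ETF with error correction: s^2 / ((s + A) * (s + B))."""
    s = 2j * np.pi * f
    corrected_stage_err = s / (s + B)   # residual after error correction
    return corrected_stage_err / (1 + A / s)

# again second order: halving the frequency quarters the residual error
for f in (400.0, 200.0, 100.0):
    print(f"{f:6.0f} Hz   |ETF| = {abs(etf_ec(f)):.3e}")
```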

The choice between these options follows from practical matters such as circuit simplicity, but certainly not audio performance. There is no reason to expect an amplifier using this technique to sound different from another that realises similar loop gain using global or nested feedback.

Coming up in Part 2: The controversy.

About the author

An electrical engineer, Bruno Putzeys has been active since 1995 designing class D power amplifiers for high-end and consumer audio applications, digital and analogue delta-sigma and PWM modulators, AD/DA converters and discrete solid-state analogue audio circuits. He has further experience with vacuum tube electronics and loudspeaker design. Now working for Hypex, he was formerly employed at Philips, where he invented, among other things, the “UcD” power amplifier circuit.