
This is a draft of a never-completed paper by John Baez. Much of this, but not the part on cohomology, found its way into here:

John Baez and Brendan Fong, A compositional framework for passive linear networks.

Abstract

There is a dagger-compact category whose morphisms are equivalence classes of electrical circuits made of linear resistors. To construct this category, we begin by recalling work going back to Weyl which expresses Kirchhoff’s laws and Ohm’s law in terms of chains and cochains on a graph. We show that a ‘lumped’ circuit made of linear resistors—that is, a circuit of this sort treated as a ‘black box’ whose inside workings we cannot see—amounts mathematically to a Dirichlet form: a finite-dimensional real vector space with a chosen basis and a quadratic form obeying some conditions. There are rules for composing and tensoring Dirichlet forms, which correspond to the operations of composing circuits in series and setting circuits side by side. However, these rules do not give a category, because the would-be identity morphisms are made of wires with zero resistance, which fall outside the Dirichlet form framework. The most elegant solution is to treat Dirichlet forms as a special case of Lagrangian correspondences. This leads to a dagger-compact category of electrical circuits that can include wires with zero resistance.

Contents

Introduction

Basic Concepts

The concept of an electrical circuit made of linear resistors is well-known in electrical engineering, but we need to formalize it with more precision than usual. The basic idea is that an electrical circuit is a graph whose edges are labelled by positive real numbers called ‘resistances’, and whose set of vertices is equipped with two disjoint subsets: the ‘inputs’ and ‘outputs’.

Circuits

All graphs in this paper will be directed. So, define a graph to be a pair of functions $s,t : E \to V$ where $E$ and $V$ are finite sets. We call elements of $E$ edges and elements of $V$ vertices. We say that the edge $e \in E$ has source $s(e)$ and target $t(e)$, and we also say that $e$ is an edge from $s(e)$ to $t(e)$.

Define an open graph to be a graph where the set of vertices is equipped with subsets $V_-$ and $V_+$, called inputs and outputs. We do not require that $V_-$ and $V_+$ are disjoint. Often the difference between inputs and outputs will not matter, so we define $\partial V = V_- \cup V_+$, and call elements of this set terminals.

Define a circuit (made of linear resistors) to be an open graph together with a function called the resistance

$$R : E \to (0,+\infty)$$

assigning to each edge a positive real number. We will use $\Gamma$ to stand for a circuit:

$$\Gamma = \left(s,t : E \to V,\; V_{\pm},\; R: E \to (0,+\infty) \right)$$

Suppose we have another circuit

$$\Gamma' = \left(s',t' : E' \to V',\; V'_{\pm},\; R': E' \to (0,+\infty) \right)$$

Then there is an obvious notion of a map of circuits $f : \Gamma \to \Gamma'$. Such a map consists of a function sending vertices to vertices and a function sending edges to edges, both called $f$:

$$f : V \to V'$$

$$f : E \to E'$$

which preserve sources and targets, inputs and outputs, and resistances:

$$s'(f(e)) = f(s(e)), \qquad t'(f(e)) = f(t(e))$$

$$v \in V_+ \implies f(v) \in V'_+$$

$$v \in V_- \implies f(v) \in V'_-$$

$$R'(f(e)) = R(e)$$

This definition makes circuits into the objects of a category.
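To fix ideas, here is a minimal sketch of the definition of a circuit as a data structure in Python. The encoding (edge names mapped to source/target pairs) and all names here are our own choices for illustration, not notation from the paper:

```python
from dataclasses import dataclass

# A hypothetical encoding of a circuit: a directed graph (s, t : E -> V)
# with input/output vertex subsets and a positive resistance on each edge.
@dataclass(frozen=True)
class Circuit:
    vertices: frozenset
    edges: dict        # edge name -> (source vertex, target vertex)
    inputs: frozenset  # V_-, a subset of vertices
    outputs: frozenset # V_+, need not be disjoint from V_-
    resistance: dict   # edge name -> positive real resistance

    def __post_init__(self):
        assert self.inputs <= self.vertices and self.outputs <= self.vertices
        assert all(s in self.vertices and t in self.vertices
                   for (s, t) in self.edges.values())
        assert all(r > 0 for r in self.resistance.values())
        assert set(self.resistance) == set(self.edges)

    @property
    def terminals(self):
        """The boundary: the union of inputs and outputs."""
        return self.inputs | self.outputs

# Example: two resistors in series, with one input and one output.
gamma = Circuit(
    vertices=frozenset({'a', 'b', 'c'}),
    edges={'e1': ('a', 'b'), 'e2': ('b', 'c')},
    inputs=frozenset({'a'}),
    outputs=frozenset({'c'}),
    resistance={'e1': 1.0, 'e2': 2.0},
)
```

A map of circuits would then be a pair of functions on `vertices` and `edges` satisfying the four conditions above; we do not spell that out here.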

Given any circuit $\Gamma$, there are three other circuits we can build from it. They are all rather trivial, since they have no edges, only vertices. Nonetheless they are very important in what follows.

First, we have a circuit $\Gamma_+$ whose set of vertices is $V_+$ and whose set of edges is empty. We call this the input of $\Gamma$. There is an obvious map of circuits

$$\iota_+ : \Gamma_+ \to \Gamma$$

coming from the inclusion $V_+ \hookrightarrow V$ and the inclusion $\emptyset \hookrightarrow E$.

Similarly, there is a circuit $\Gamma_-$, called the output of $\Gamma$, whose set of vertices is $V_-$ and whose set of edges is empty. There is an obvious map

$$\iota_- : \Gamma_- \to \Gamma$$

Finally, there is a circuit $\partial \Gamma$ whose set of vertices is $\partial V$ and whose set of edges is empty. We call this the boundary of $\Gamma$. Yet again there is an obvious map

$$\iota : \partial \Gamma \to \Gamma$$

Chain Complexes from Circuits

In 1923, Hermann Weyl published a paper in Spanish which described electrical circuits in terms of the homology and cohomology of graphs (W). In this approach, Kirchhoff's voltage and current laws simply say that voltage is a 1-coboundary and current is a 1-cocycle. Furthermore, the electrical resistances labelling edges of the graphs put an inner product on the space of 1-chains, allowing us to identify them with 1-cochains. Ohm's law then says that the voltage may be identified with the current.

In the late 1960’s and early 1970’s, these ideas were further developed by authors including Paul Slepian (Sl), G. E. Ching (C), J. P. Roth (R) and Stephen Smale (Sm). By now they are well-known. The textbook by Bamberg and Sternberg (BS) uses electrical circuits to motivate homology, cohomology and the beginnings of Hodge theory. The text by Gross and Kotiuga (GK) uses chain and cochain complexes to tackle a wide variety of problems in electromagnetism. What follows is a terse review of the basics.

Any circuit $\Gamma$ determines a chain complex of real vector spaces, $C_*(\Gamma)$. As we shall see, a 1-chain in this complex can be used to describe the electrical current flowing through the wires (that is, edges) of our circuit.

In fact, $C_*(\Gamma)$ is just the usual chain complex associated to a graph. So, it has only two nonzero terms:

$$C_0(\Gamma) = \mathbb{R}^V$$

$$C_1(\Gamma) = \mathbb{R}^E$$

with differential

$$\partial : C_1(\Gamma) \to C_0(\Gamma)$$

given by

$$\partial(e) = t(e) - s(e)$$

We can make $C_*(\Gamma)$ a chain complex of finite-dimensional real Hilbert spaces, since the resistance $R : E \to (0,+\infty)$ defines an inner product on $C_1(\Gamma)$ by

$$\langle e, e' \rangle = R(e) \, \delta_{e,e'}$$

and there is also an inner product on $C_0(\Gamma)$ for which the vertices form an orthonormal basis:

$$\langle v, v' \rangle = \delta_{v,v'}$$
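In concrete terms, $\partial$ is just the incidence matrix of the graph, and the two inner products are diagonal Gram matrices. A small numeric sketch (the graph and resistances are invented for illustration; the conventions follow the formulas above):

```python
import numpy as np

# The chain complex of a circuit as matrices: C_0 = R^V, C_1 = R^E,
# and the boundary sends an edge e to t(e) - s(e).
vertices = ['a', 'b', 'c']
edges = [('a', 'b'), ('b', 'c'), ('a', 'c')]   # (source, target) pairs
R = np.array([1.0, 2.0, 3.0])                  # resistances, one per edge

V, E = len(vertices), len(edges)
idx = {v: i for i, v in enumerate(vertices)}

# boundary : C_1 -> C_0, a V x E matrix whose column for e is t(e) - s(e)
boundary = np.zeros((V, E))
for j, (s, t) in enumerate(edges):
    boundary[idx[t], j] += 1.0
    boundary[idx[s], j] -= 1.0

# inner products: <e, e'> = R(e) delta on 1-chains,
# and the vertices orthonormal on 0-chains
G1 = np.diag(R)   # Gram matrix on C_1
G0 = np.eye(V)    # Gram matrix on C_0
```

For the edge from `a` to `b`, the corresponding column of `boundary` is the vector $b - a$, as the formula $\partial(e) = t(e) - s(e)$ requires.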

Cochain Complexes From Circuits

The dual of the chain complex $C_*(\Gamma)$ is a cochain complex of finite-dimensional real Hilbert spaces, $C^*(\Gamma)$. As we shall see, a 1-cochain in this complex can be used to describe the voltage across the wires of our circuit.

We call the differential in this cochain complex $d$. It is given by

$$(d\phi)(e) = \phi(t(e)) - \phi(s(e))$$

But since any real Hilbert space is equipped with a canonical isomorphism to its dual, we get isomorphisms

$$r: C_0(\Gamma) \to C^0(\Gamma)$$

$$r: C_1(\Gamma) \to C^1(\Gamma)$$

Explicitly, these are given by:

(1) $a(\beta) = \langle r(a), \beta \rangle$

where $a \in C_i(\Gamma)$ and $\beta \in C^i(\Gamma)$.

Using these isomorphisms, we can transfer the differential $\partial$ on $C_*(\Gamma)$ to a differential on $C^*(\Gamma)$, which we call

$$d^\dagger : C^1(\Gamma) \to C^0(\Gamma)$$

In other words, we define $d^\dagger$ so that this diagram commutes:

$$\begin{array}{ccc} C_0(\Gamma) & \stackrel{\partial}{\leftarrow} & C_1(\Gamma) \\ r\downarrow & & \downarrow r \\ C^0(\Gamma) & \stackrel{d^\dagger}{\leftarrow} & C^1(\Gamma) \end{array}$$

or in other words:

(2) $d^\dagger r = r \partial$

We use the dagger notation because $d^\dagger$ really is the Hilbert space adjoint of $d$:

(3) $\langle d^\dagger \alpha, \beta \rangle = \langle \alpha, d \beta \rangle$

for all $\alpha \in C^1(\Gamma)$, $\beta \in C^0(\Gamma)$. This follows immediately from (1) and (2) if we choose $a$ with $r a = \alpha$:

$$\begin{array}{ccl} \langle d^\dagger \alpha, \beta \rangle &=& \langle d^\dagger r a, \beta \rangle \\ &=& \langle r \partial a , \beta \rangle \\ &=& (\partial a)(\beta) \\ &=& \langle a, d \beta \rangle \end{array}$$
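The adjointness (3) can be checked numerically. One assumption made in this sketch: we give $C^1(\Gamma)$ the inner product dual to the one on $C_1(\Gamma)$, whose Gram matrix in the dual basis is $\mathrm{diag}(1/R)$, so that the general matrix formula for an adjoint applies; the example graph is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

edges = [(0, 1), (1, 2), (0, 2)]   # edges as (source, target) vertex indices
R = np.array([1.0, 2.0, 3.0])
V, E = 3, len(edges)

# d : C^0 -> C^1, with (d phi)(e) = phi(t(e)) - phi(s(e))
d = np.zeros((E, V))
for j, (s, t) in enumerate(edges):
    d[j, t], d[j, s] = 1.0, -1.0

G1 = np.diag(1.0 / R)   # assumed Gram matrix on C^1 (dual to <e,e'> = R delta)
d_dag = d.T @ G1        # adjoint: d† = G0^{-1} d^T G1, with G0 = identity

alpha = rng.standard_normal(E)   # a random 1-cochain
beta = rng.standard_normal(V)    # a random 0-cochain
lhs = (d_dag @ alpha) @ beta     # <d† alpha, beta> in C^0
rhs = alpha @ G1 @ (d @ beta)    # <alpha, d beta> in C^1
assert np.isclose(lhs, rhs)
```

The matrix identity `d.T @ G1` is just the general formula for the adjoint of a map between inner product spaces written in coordinates; the assertion verifies (3) for random cochains.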

The inclusion of circuits

$$\iota : \partial \Gamma \to \Gamma$$

gives an inclusion of chain complexes

$$\iota_* : C_*(\partial \Gamma) \to C_*(\Gamma)$$

and then, by taking duals, a map of cochain complexes $\iota^* : C^*(\Gamma) \to C^*(\partial \Gamma)$. Henceforth we call this map

(4) $p: C^*(\Gamma) \to C^*(\partial \Gamma)$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to restricting a function on the set of vertices $V$ to a function on the set of terminals.

Since we have cochain complexes of finite-dimensional real Hilbert spaces, we can also take the Hilbert space adjoint to get a map $p^\dagger : C^*(\partial \Gamma) \to C^*(\Gamma)$. We write this map as

(5) $i: C^*(\partial \Gamma) \to C^*(\Gamma)$

This map is zero on 1-cochains, and on 0-cochains it simply amounts to extending a function on the set of terminals to a function on the set of vertices that is zero on the vertices that are not terminals.

The following standard facts will come in handy:

Proposition If the maps $i, r, p$ and $\iota_*$ are defined as above, then

(6) $i r = r \iota_*$

(7) $p i = 1$

and

(8) $(\ker p)^\perp = \mathrm{im}\; i$

Proof FILL IN DETAILS. Since $i = p^\dagger$, Equation (8) follows from a general fact about a linear map $T$ between finite-dimensional Hilbert spaces: $(\ker T)^\perp = \mathrm{im}\; T^\dagger$.

Kirchhoff’s Laws

Given a circuit, we shall focus on two quantities: a 1-chain $I \in C_1(\Gamma)$ called the current and a 1-cochain $V \in C^1(\Gamma)$ called the voltage. In 1847, Gustav Kirchhoff formulated two laws governing these quantities.

We say Kirchhoff's voltage law holds if

$$V = d \phi$$

for some $\phi \in C^0(\Gamma)$ called the potential. If Kirchhoff's voltage law holds for some voltage $V$, the potential $\phi$ is hardly ever unique. But we can say exactly how much it fails to be unique. Given $\phi_1, \phi_2 \in C^0(\Gamma)$, we have $d \phi_1 = d \phi_2$ if and only if their difference is constant on each connected component of the graph $\Gamma$.

We say Kirchhoff's current law holds if

(9) $\partial I = \iota_* J$

for some $J \in C_0(\partial \Gamma)$, called the boundary current. This says that the total current flowing in or out of any vertex is zero unless that vertex is a terminal. If Kirchhoff's current law holds for $I$, the boundary current $J$ is unique, since $\iota_* : C_0(\partial \Gamma) \to C_0(\Gamma)$ is one-to-one.

Ohm’s Law

In 1827 Georg Ohm published a book which included a relation between the voltage and current for circuits made of resistors (O). At the time, the critical reception was harsh: one contemporary called Ohm’s work “a web of naked fancies, which can never find the semblance of support from even the most superficial of observations”, and the German Minister of Education said that a professor who preached such heresies was unworthy to teach science (D,H). However, a simplified version of his relation is now widely used under the name of “Ohm’s law”.

As we have seen, the resistance lets us define an inner product on the vector space $C_1(\Gamma)$, which gives an isomorphism $r: C_1(\Gamma) \to C^1(\Gamma)$ as defined in (1). We say Ohm's law holds if the voltage $V$ and current $I$ are related as follows:

(10) $V = r I$

This allows us to express $I$ in terms of $V$:

$$I = r^{-1} V$$

Kirchhoff's voltage law then lets us write $I$ in terms of $\phi$:

$$I = r^{-1} d \phi$$

Given this, what does Kirchhoff's current law say in terms of $\phi$? The answer is this:

Proposition Kirchhoff's current law holds for $I = r^{-1} d \phi$ if and only if

(11) $d^\dagger d \phi = i \chi$

for some $\chi \in C^0(\partial \Gamma)$. Moreover, in this case we can take $\chi$ to be given by

(12) $\chi = r J$

where $J$ is the boundary current given by Kirchhoff's current law.

Proof Assume Kirchhoff's current law: $\partial I = \iota_* J$ for some $J$. Then we have

(13) $d^\dagger d \phi = d^\dagger V = d^\dagger r I = r \partial I = r \iota_* J = i r J$

Here the first step uses Kirchhoff's voltage law, the second uses Ohm's law, the third uses (2), the fourth uses Kirchhoff's current law, and the last step uses (6). Thus $d^\dagger d \phi = i \chi$ if we take $\chi = r J$. Conversely, suppose $d^\dagger d \phi = i \chi$. Then taking $J = r^{-1} \chi$, the same sort of reasoning, run in reverse, shows that $\partial I = \iota_* J$.

The Principle of Minimum Power

In this section we always assume Kirchhoff’s voltage law and Ohm’s law.

Given a circuit $\Gamma$ with voltage $V$ and current $I$, the power dissipated by the circuit is defined to be

$$P = V(I)$$

where we are pairing the 1-chain $I$ and the 1-cochain $V$ to get a real number. Ohm's law allows us to rewrite $I$ as $r^{-1} V$, so the power can be expressed in terms of the voltage:

$$P = V(r^{-1} V) = \langle V, V \rangle$$

Kirchhoff's voltage law allows us to write $V$ as $d \phi$, so the power can also be expressed in terms of the potential:

$$P = \langle d \phi, d \phi \rangle$$

This expression lets us formulate the 'principle of minimum power', which gives us information about the potential $\phi$ given its restriction to the boundary of $\Gamma$. This restriction is an element of $C^0(\partial \Gamma)$, and in general we call any element of this space a boundary potential.

Definition We say a potential $\phi \in C^0(\Gamma)$ obeys the principle of minimum power for a boundary potential $\psi \in C^0(\partial \Gamma)$ if $\phi$ minimizes the power $\langle d \phi, d \phi \rangle$ subject to the constraint that $p \phi = \psi$.

Proposition A potential $\phi$ obeys the principle of minimum power for some boundary potential $\psi$ if and only if $I = r^{-1} d \phi$ obeys Kirchhoff's current law.

Proof If $\phi$ obeys the principle of minimum power for some boundary potential $\psi$, then for any $\phi' \in C^0(\Gamma)$ with $p \phi' = 0$ we must have

$$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$$

or in other words:

$$\langle d \phi', d \phi \rangle = 0$$

or

$$\langle \phi' , d^\dagger d \phi \rangle = 0$$

This means that $d^\dagger d \phi \in (\ker p)^\perp$, so by (8) we have $d^\dagger d \phi = i \chi$ for some $\chi \in C^0(\partial \Gamma)$. By Proposition 2 this equation implies Kirchhoff's current law for $I = r^{-1} d \phi$. Conversely, Kirchhoff's current law for $I$ implies the above equation and thus, running the above calculation backwards,

$$\left. \frac{d}{d t} \langle d(\phi + t \phi'), d(\phi + t \phi') \rangle \right|_{t = 0} = 0$$

It follows that $\phi$ is a critical point for the power as a function on potentials satisfying the constraint $p \phi = \psi$. But since the power is a nonnegative quadratic form, $\phi$ must minimize the power among such potentials.

The Dirichlet problem

We have seen that a potential $\phi$ gives a solution of all three basic equations governing electric circuits made from linear resistors—Kirchhoff's voltage law, Kirchhoff's current law and Ohm's law—if and only if this equation holds:

(14) $d^\dagger d \phi = i \chi$

Our next task is to solve this equation. But first, some remarks are in order.

The operator

$$d^\dagger d : C^0(\Gamma) \to C^0(\Gamma)$$

acts as a discrete analogue of the Laplacian for the graph $\Gamma$, so we call this operator the Laplacian of $\Gamma$. Equation (14) is thus a version of Laplace's equation with boundary conditions. It says the Laplacian of the potential $\phi \in C^0(\Gamma)$ equals zero except on the boundary of $\Gamma$, where it equals $\chi$.

We could try to solve for $\phi$ given $\chi$. However, we prefer a slightly different approach, which emphasizes the role of the boundary potential $\psi = p \phi$. After all, we have seen that $\phi$ solves Equation (14) for some $\chi$ if and only if $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. We call the problem of finding a potential $\phi$ that minimizes the power for a fixed value of $\psi = p \phi$ a discrete version of the Dirichlet problem.

As we shall see, this version of the Dirichlet problem always has a solution. However, the solution is not necessarily unique. If we take a solution $\phi$ and add to it some $\alpha \in C^0(\Gamma)$ with $d \alpha = 0$ and $p \alpha = 0$, we clearly get another solution. It should be intuitively clear that such an $\alpha$ is a function on the vertices of $\Gamma$ that is constant on each connected component and vanishes on the boundary of $\Gamma$. To make this precise we need some standard concepts from graph theory:

Definition Given two vertices $v, w$ of a graph $\Gamma$, a path from $v$ to $w$ is a finite sequence of vertices $v = v_0, v_1, \dots, v_n = w$ and edges $e_1, \dots, e_n$ such that for each $1 \le i \le n$, either $e_i$ is an edge from $v_{i-1}$ to $v_i$, or an edge from $v_i$ to $v_{i-1}$.

Definition A subset $S$ of the vertices of a graph $\Gamma$ is connected if for each pair of vertices in $S$, there is a path from one to the other.

Definition A connected component of a graph $\Gamma$ is a maximal connected subset of the vertices of $\Gamma$.

In the theory of directed graphs, the qualifier 'strongly' is commonly used before the word 'connected' in the last two definitions. However, we never consider any other sort of connectedness, so we omit this qualifier.

Definition A connected component of $\Gamma$ touches the boundary if it contains a vertex in $\partial \Gamma$.

It is easy to see that $\alpha \in C^0(\Gamma)$ obeys $d \alpha = 0$ if and only if it is constant on each connected component of $\Gamma$. If moreover $p \alpha = 0$, then $\alpha$ must vanish on all connected components touching the boundary.

With these preliminaries in hand, we can solve the Dirichlet problem:

Proposition For any boundary potential $\psi \in C^0(\partial \Gamma)$ there exists a potential $\phi$ obeying the principle of minimum power for $\psi$. If we also demand that $\phi$ vanish on every connected component of $\Gamma$ not touching the boundary, then $\phi$ is unique, and depends linearly on $\psi$.

Proof For existence, note that a nonnegative quadratic form restricted to an affine subspace of a real vector space must reach a minimum somewhere on this subspace. So, because the power $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the space $C^0(\Gamma)$, for any $\psi \in C^0(\partial \Gamma)$ the power must reach a minimum somewhere on the affine subspace

$$X = \{ \phi : p \phi = \psi \}.$$

For uniqueness, suppose that $\phi, \phi' \in X$ both minimize the power. Let $\alpha = \phi' - \phi$. Then $p \alpha = 0$, so $\phi + t \alpha$ lies in $X$ for all $t \in \mathbb{R}$. Thus, the function

$$P(t) = \langle d(\phi + t \alpha), d(\phi + t \alpha) \rangle$$

attains its minimum value both at $t = 0$ and at $t = 1$. Since this function is smooth, we must have $P'(0) = 0$. Since

$$P(t) = \langle d \phi, d \phi \rangle + 2 t \langle d \phi, d \alpha \rangle + t^2 \langle d \alpha, d \alpha \rangle,$$

it follows that $\langle d \phi, d \alpha \rangle = 0$. Thus

$$P(t) = \langle d \phi, d \phi \rangle + t^2 \langle d \alpha, d \alpha \rangle.$$

Since this function takes on the same value at $t = 0$ and $t = 1$, we must have $d \alpha = 0$. This implies that $\alpha$ is constant on each connected component of $\Gamma$. Furthermore, since $p \alpha = 0$, $\alpha$ vanishes on each connected component of $\Gamma$ touching the boundary. Thus, if we demand that both $\phi$ and $\phi'$ vanish on every connected component of $\Gamma$ that does not touch the boundary, $\alpha = \phi' - \phi$ vanishes on every connected component of $\Gamma$. It follows that $\phi = \phi'$, giving the desired uniqueness.

To prove that $\phi$ depends linearly on $\psi$, suppose that for $i = 1, 2$ the potential $\phi_i$ obeys the principle of minimum power for $\psi_i$ and vanishes on every component of $\Gamma$ not touching the boundary. Then by Propositions 2 and 4, we have $d^\dagger d \phi_i = i \chi_i$ for some $\chi_i \in C^0(\partial \Gamma)$. It follows that for any real numbers $c_1$ and $c_2$, the potential $\phi = c_1 \phi_1 + c_2 \phi_2$ obeys $d^\dagger d \phi = i \chi$ where $\chi = c_1 \chi_1 + c_2 \chi_2$. By another application of Propositions 2 and 4, it follows that $\phi$ obeys the principle of minimum power for some boundary potential $\psi$. But since

$$p \phi = p(c_1 \phi_1 + c_2 \phi_2) = c_1 \psi_1 + c_2 \psi_2,$$

we must have $\psi = c_1 \psi_1 + c_2 \psi_2$. So, $\phi$ depends linearly on $\psi$.

Note from the proof of the above proposition that:

Proposition Suppose $\psi \in C^0(\partial \Gamma)$ and $\phi$ is a potential obeying the principle of minimum power for $\psi$. Then $\phi'$ obeys the principle of minimum power for $\psi$ if and only if the difference $\phi' - \phi$ is constant on every connected component of $\Gamma$ and vanishes on every connected component touching the boundary of $\Gamma$.
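The existence argument can be made concrete: in matrix terms the Laplacian $d^\dagger d$ is the conductance-weighted graph Laplacian, and minimizing the power amounts to solving the Laplace equation at interior vertices with the boundary values held fixed. A small numeric sketch with a made-up two-resistor path (for resistors of 1 and 2 ohms in series with boundary potentials 1 and 0, the middle vertex should sit at 2/3):

```python
import numpy as np

edges = [(0, 1), (1, 2)]   # a path a - b - c, as vertex indices
R = np.array([1.0, 2.0])
V = 3
boundary_vs = [0, 2]       # the terminals (the boundary)
interior_vs = [1]

# d : C^0 -> C^1, then the Laplacian d† d with edge weights 1/R
d = np.zeros((len(edges), V))
for j, (s, t) in enumerate(edges):
    d[j, t], d[j, s] = 1.0, -1.0
L = d.T @ np.diag(1.0 / R) @ d

psi = np.array([1.0, 0.0])   # boundary potential at the terminals

# Solve (L phi) = 0 at interior vertices, with phi fixed to psi on the boundary.
L_II = L[np.ix_(interior_vs, interior_vs)]
L_IB = L[np.ix_(interior_vs, boundary_vs)]
phi = np.zeros(V)
phi[boundary_vs] = psi
phi[interior_vs] = np.linalg.solve(L_II, -L_IB @ psi)
```

The interior block `L_II` is invertible here because every component of the graph touches the boundary; the resulting `phi` is the unique minimizer described in the proposition above.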

Bamberg and Sternberg (BS) describe another way to solve the Dirichlet problem, going back to Weyl (W).

Lumped Circuits

In this section we always assume that the principle of minimum power holds, as well as Kirchhoff’s voltage law and Ohm’s law.

Under these circumstances, we shall see that the boundary potential determines the boundary current. A ‘lumped circuit’ is an equivalence class of circuits, where two are considered equivalent when the boundary current is the same function of the boundary potential. The idea is that the boundary current and boundary potential are all that can be observed ‘from outside’, i.e. by making measurements at the terminals. Restricting our attention to what can be observed by making measurements at the terminals amounts to treating a circuit as a ‘black box’: that is, treating its interior as hidden from view. So, two circuits give the same lumped circuit when they behave the same as ‘black boxes’.

First let us check that the boundary current is a function of the boundary potential. For this we introduce an important quadratic form on the space of boundary potentials:

Definition For any $\psi \in C^0(\partial \Gamma)$, let

$$Q(\psi) = \frac{1}{2} \inf_{\{\phi \,:\, p \phi = \psi\}} \; \langle d \phi, d \phi \rangle$$

Since $\langle d \phi, d \phi \rangle$ defines a nonnegative quadratic form on the finite-dimensional vector space $C^0(\Gamma)$ and the constraint $p \phi = \psi$ picks out an affine subspace of this space, the infimum above is actually attained. One can check that $Q(\psi)$ is a nonnegative quadratic form on $C^0(\partial \Gamma)$.

Up to a factor of $\frac{1}{2}$, $Q(\psi)$ is just the power dissipated by the circuit when the boundary potential is $\psi$, thanks to the principle of minimum power. The factor of $\frac{1}{2}$ simplifies the next proposition, which uses $Q$ to compute the boundary current as a function of the boundary potential.

Since $Q$ is a smooth real-valued function on $C^0(\partial \Gamma)$, its differential $d Q$ at any given point $\psi \in C^0(\partial \Gamma)$ defines an element of the dual space $C_0(\partial \Gamma)$, which we denote by $d Q_\psi$. In fact, this element is equal to the boundary current $J$ corresponding to the boundary potential $\psi$:

Proposition Suppose $\psi \in C^0(\partial \Gamma)$. Suppose $\phi$ is any potential minimizing the power $\langle d \phi, d \phi \rangle$ subject to the constraint $p \phi = \psi$. Let $V = d \phi$ be the corresponding voltage, $I = r^{-1} V$ the current, and $\partial I = \iota_* J$ where $J$ is the corresponding boundary current. Then

(15) $d Q_\psi = J.$

Proof Note first that while there may be several choices of $\phi$ minimizing the power subject to the constraint that $p \phi = \psi$, Proposition 8 says that the difference between any two choices vanishes on all components touching the boundary of $\Gamma$. Thus, these two choices give the same value for $J$. So, with no loss of generality we may assume $\phi$ is the unique choice that vanishes on all components not touching the boundary. By Proposition 7, there is a linear operator $f: C^0(\partial \Gamma) \to C^0(\Gamma)$ sending $\psi \in C^0(\partial \Gamma)$ to this choice of $\phi$, and then

$$Q(\psi) = \frac{1}{2} \langle d f \psi, d f \psi \rangle.$$

Given any $\psi' \in C^0(\partial \Gamma)$, we thus have

(16) $$\begin{array}{ccl} d Q_\psi (\psi') &=& \left. \frac{d}{d t} Q(\psi + t \psi') \right|_{t = 0} \\ &=& \frac{1}{2} \left. \frac{d}{d t} \langle d f(\psi + t \psi'), d f (\psi + t \psi') \rangle \right|_{t = 0} \\ &=& \langle d f \psi, d f \psi' \rangle \\ &=& \langle d^\dagger d f \psi , f \psi' \rangle \\ &=& \langle i r J , f \psi' \rangle \end{array}$$

where in the last step we use Equation (13). Since $i^\dagger = p$, we obtain

(17) $$\begin{array}{ccl} d Q_\psi (\psi') &=& \langle r J, p f \psi' \rangle \\ &=& \langle r J , \psi' \rangle \\ &=& J(\psi') \end{array}$$

where in the last step we use Equation (1). It follows that $d Q_\psi = J$.
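In matrix terms, $Q$ is half a Schur complement of the Laplacian, and the proposition can be checked numerically: the gradient of $Q$ reproduces the boundary current. A sketch using a two-resistor series circuit; all conventions and names are our own:

```python
import numpy as np

edges = [(0, 1), (1, 2)]   # two resistors in series
R = np.array([1.0, 2.0])
V, B_vs, I_vs = 3, [0, 2], [1]   # boundary and interior vertex indices

d = np.zeros((len(edges), V))
for j, (s, t) in enumerate(edges):
    d[j, t], d[j, s] = 1.0, -1.0
L = d.T @ np.diag(1.0 / R) @ d   # the Laplacian d† d

# Eliminate the interior variables: the Schur complement of L
L_BB = L[np.ix_(B_vs, B_vs)]
L_BI = L[np.ix_(B_vs, I_vs)]
L_II = L[np.ix_(I_vs, I_vs)]
schur = L_BB - L_BI @ np.linalg.solve(L_II, L_BI.T)

def Q(psi):
    # Q(psi) = (1/2) inf over phi with p phi = psi of <d phi, d phi>
    return 0.5 * psi @ schur @ psi

psi = np.array([1.0, 0.0])
J = schur @ psi   # the gradient dQ_psi, i.e. the boundary current
```

For a 1 ohm and a 2 ohm resistor in series, the effective resistance is 3 ohms, so a unit potential difference should drive a current of 1/3 in at one terminal and out at the other; `J` reproduces this.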

Categories of Circuits

In this section we define a category of circuits, and also a category of lumped circuits. Both of these are dagger-compact categories.

There is a category where objects are finite sets of points, and a morphism $f : S \to T$ is an equivalence class of circuits

$$\Gamma = \left(s,t : E \to V,\; V_{\pm},\; R: E \to (0,+\infty) \right)$$

equipped with bijections

$$i: S \to V_-, \qquad j: T \to V_+.$$

The equivalence relation is as follows: $(\Gamma, i, j)$ is equivalent to $(\Gamma', i', j')$ if there is an isomorphism of circuits $f : \Gamma \to \Gamma'$ such that

$$f i = i', \qquad f j = j'.$$

The composition of circuits is given by pushout of cospans…

This category is symmetric monoidal, and in fact a dagger-compact category. WHY???

Dirichlet forms

We have seen that a lumped circuit is completely specified by the vector space $C^0(\partial \Gamma)$ along with its distinguished basis and the quadratic form $Q$. Now we describe which quadratic forms can arise this way. They are known as 'Dirichlet forms', and they admit a number of equivalent characterizations. We start with the simplest.

Given a finite set $S$, let $\mathbb{R}^S$ be the vector space of functions $\psi: S \to \mathbb{R}$. A Dirichlet form on $S$ will be a certain sort of quadratic form on $\mathbb{R}^S$:

Definition Given a finite set $S$, a Dirichlet form on $S$ is a quadratic form $Q: \mathbb{R}^S \to \mathbb{R}$ given by the formula

$$Q(\psi) = \sum_{i,j} c_{i j} (\psi_i - \psi_j)^2$$

for some nonnegative real numbers $c_{i j}$.

Note that we may assume without loss of generality that $c_{i i} = 0$ and $c_{i j} = c_{j i}$; we do this henceforth. Any Dirichlet form is nonnegative: $Q(\psi) \ge 0$ for all $\psi \in \mathbb{R}^S$. However, not all nonnegative quadratic forms are Dirichlet forms. For example, if $S = \{1, 2\}$:

$$Q(\psi) = (\psi_1 + \psi_2)^2$$

is not a Dirichlet form.

In fact, the concept of Dirichlet form is vastly more general: such quadratic forms are studied not just on finite-dimensional vector spaces $\mathbb{R}^S$ but on $L^2$ of any measure space. When this measure space is just a finite set, the concept of Dirichlet form reduces to the definition above. For a thorough introduction to Dirichlet forms, see the text by Fukushima (F). For a fun tour of the underlying ideas, see the book by Doyle and Snell (DS).

We will not really need any other characterizations of Dirichlet forms, but they do help illuminate the concept:

Proposition Given a finite set $S$ and a quadratic form $Q : \mathbb{R}^S \to \mathbb{R}$, the following are equivalent:

1. $Q$ is a Dirichlet form.

2. $Q(\phi) \le Q(\psi)$ whenever $|\phi_i - \phi_j| \le |\psi_i - \psi_j|$ for all $i, j$.

3. $Q(\phi) = 0$ whenever $\phi_i$ is independent of $i$, and $Q$ obeys the Markov property: $Q(\psi) \le Q(\phi)$ when $\psi_i = \min(\phi_i, 1)$.

Proof See Fukushima (F).
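The second characterization in the proposition above gives a concrete way to see that $(\psi_1 + \psi_2)^2$ fails to be a Dirichlet form: its value depends on more than the pairwise differences $|\psi_i - \psi_j|$. A small numeric sketch:

```python
import numpy as np

def Q_bad(psi):
    # nonnegative, but not a Dirichlet form
    return (psi[0] + psi[1]) ** 2

def Q_good(psi):
    # a genuine Dirichlet form on S = {1, 2}, with c_12 = 1
    return (psi[0] - psi[1]) ** 2

phi = np.array([1.0, -1.0])   # |phi_1 - phi_2| = 2
psi = np.array([2.0, 0.0])    # |psi_1 - psi_2| = 2 as well

# Equal pairwise gaps force equal values for a Dirichlet form.
# Q_bad distinguishes the two vectors, so it violates characterization 2:
assert Q_bad(phi) < Q_bad(psi)       # 0 < 4
assert Q_good(phi) == Q_good(psi)    # 4 == 4
```

Since the two vectors have identical pairwise gaps, characterization 2 would force the inequality in both directions, i.e. equality; `Q_bad` fails this.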

A Category of Lumped Circuits

We begin with a naive attempt to construct a category where the morphisms are lumped circuits. This naive attempt doesn’t quite work, because it doesn’t include identity morphisms. However, it points in the right direction.

Given finite sets S S and T T , let S + T S+T denote their disjoint union. Let D ( S , T ) D(S,T) be the set of Dirichlet forms on ℝ S + T \mathbb{R}^{S + T} . There is a way to compose these Diriclet forms:

∘ : D ( T , U ) × D ( S , T ) → D ( S , U ) \circ : D(T,U) \times D(S,T) \to D(S,U)

defined as follows. Given Q \in D(S,T) and R \in D(T,U), let

(R \circ Q)(\alpha, \gamma) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\alpha, \beta) + R(\beta, \gamma) \right)

where \alpha \in \mathbb{R}^S and \gamma \in \mathbb{R}^U. Moreover, this composition is associative:

(P \circ Q) \circ R = P \circ (Q \circ R)
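The composition formula can be illustrated in the simplest case, where S, T and U are one-point sets and each Dirichlet form is a single resistor Q(a,b) = c(a-b)^2 with conductance c. The following sketch is our own (the names `resistor` and `compose` are not from the paper); it computes the infimum over the internal potential \beta by a fine grid search rather than in closed form, and recovers the familiar rule that resistors in series have effective conductance c_1 c_2 / (c_1 + c_2).

```python
def resistor(c):
    """Dirichlet form of a single resistor: Q(a, b) = c * (a - b)^2."""
    return lambda a, b: c * (a - b) ** 2

def compose(R, Q, lo=-10.0, hi=10.0, steps=200001):
    """(R o Q)(a, g) = inf_beta  Q(a, beta) + R(beta, g), by grid search
    over beta in [lo, hi]."""
    def RQ(a, g):
        step = (hi - lo) / (steps - 1)
        return min(Q(a, lo + k * step) + R(lo + k * step, g)
                   for k in range(steps))
    return RQ

Q, R = resistor(2.0), resistor(2.0)
RQ = compose(R, Q)

# Two conductance-2 resistors in series give conductance 2*2/(2+2) = 1,
# i.e. (R o Q)(a, g) is approximately 1 * (a - g)^2:
assert abs(RQ(1.0, 0.0) - 1.0) < 1e-3
assert abs(RQ(3.0, -1.0) - 16.0) < 1e-2
```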

However, there is typically no Dirichlet form 1_S \in D(S,S) playing the role of the identity for this composition. A ‘category without identity morphisms’ is called a semicategory, so we have:

Proposition There is a semicategory where:

- the objects are finite sets,

- a morphism from T to S is a Dirichlet form Q \in D(S,T),

- composition of morphisms is given by (R \circ Q)(\alpha, \gamma) = \inf_{\beta \in \mathbb{R}^T} \left( Q(\alpha, \beta) + R(\beta, \gamma) \right).

We would like to make this into a category. The easy way is to formally adjoin identity morphisms; this trick works for any semicategory. This amounts to introducing some circuits that contain wires with zero resistance. However, we obtain a better category if we include more morphisms: more circuits having wires with zero resistance.
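To see why no Dirichlet form can serve as the identity, note that the would-be identity on a one-point set is a wire of zero resistance, i.e. the limit c \to \infty of Q_c(a,b) = c(a-b)^2, which is no longer a quadratic form. This sketch (our own illustration, using the closed-form series rule c_1 c_2/(c_1 + c_2) for composing two resistors) shows that composing a conductance-1 resistor with a finite-conductance wire only recovers the original resistor in the limit.

```python
def series(c1, c2):
    """Effective conductance of two resistors composed in series."""
    return c1 * c2 / (c1 + c2)

# Compose a conductance-1 resistor with better and better approximate
# identities (wires of larger and larger conductance c):
approx = [series(1.0, c) for c in (1.0, 10.0, 100.0, 1e6)]

assert approx[0] == 0.5               # a finite wire visibly perturbs the circuit
assert abs(approx[-1] - 1.0) < 1e-5   # only c -> infinity gives back conductance 1
```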

References

S. Abramsky and B. Coecke, A categorical semantics of quantum protocols, in Proceedings of the 19th IEEE Conference on Logic in Computer Science (LiCS 2004). Also available at http://arxiv.org/abs/quant-ph/0402130.

P. Bamberg and S. Sternberg, A Course of Mathematics for Students of Physics 2, Chap. 12: The theory of electrical circuits, Cambridge University, Cambridge, 1982.

G. E. Ching, Topological concepts in networks; an application of homology theory to network analysis, Proc. 11th. Midwest Conference on Circuit Theory, University of Notre Dame, 1968, pp. 165-175.

B. Davies, A web of naked fancies?, Phys. Educ. 15 (1980), 57-61.

P. G. Doyle and J. L. Snell, Random Walks and Electric Networks, Mathematical Association of America, 1984. Also available at http://www.math.dartmouth.edu/~doyle/

M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.

P. W. Gross and P. R. Kotiuga, Electromagnetic Theory and Computation: A Topological Approach, Cambridge University Press, 2004.

I. B. Hart, Makers of Science, Oxford U. Press, London, 1923, p. 243.

P. Katis, N. Sabadini, R. F. C. Walters, On the algebra of systems with feedback and boundary, Rendiconti del Circolo Matematico di Palermo Serie II, Suppl. 63 (2000), 123–156.

J. Kigami, Analysis on Fractals, Cambridge U. Press. First 60 pages available at http://www-an.acs.i.kyoto-u.ac.jp/~kigami/AOF.pdf.

Z.-M. Ma and M. Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1991.

G. Ohm, Die Galvanische Kette, Mathematisch Bearbeitet, T. H. Riemann, Berlin, 1827. Also available at http://www.ohm-hochschule.de/bib/textarchiv/Ohm.Die_galvanische_Kette.pdf.

J. P. Roth, Existence and uniqueness of solutions to electrical network problems via homology sequences, Mathematical Aspects of Electrical Network Theory, SIAM-AMS Proceedings III, 1971, pp. 113-118.

C. Sabot, Existence and uniqueness of diffusions on finitely ramified self-similar fractals, Section 1: Dirichlet forms on finite sets and electrical networks, Annales Scientifiques de l’École Normale Supérieure, Sér. 4, 30 (1997), 605-673. Also available at http://www.numdam.org/numdam-bin/item?id=ASENS_1997_4_30_5_605_0.

C. Sabot, Electrical networks, symplectic reductions, and application to the renormalization map of self-similar lattices, Proc. Sympos. Pure Math. 72 (2004), 155-205. Also available as arXiv:math-ph/0304015.

P. Selinger, Dagger compact closed categories and completely positive maps, in Proceedings of the 3rd International Workshop on Quantum Programming Languages (QPL 2005), ENTCS 170 (2007), 139–163. Also available at http://www.mscs.dal.ca/~selinger/papers/dagger.pdf.

P. Slepian, Mathematical Foundations of Network Analysis, Springer, Berlin, 1968.

S. Smale, On the mathematical foundations of electrical network theory, J. Diff. Geom. 7 (1972), 193-210.

Y. Colin de Verdière, Réseaux électriques planaires I, Comment. Math. Helv. 69 (1994), 351-374. Also available at http://www-fourier.ujf-grenoble.fr/~ycolver/All-Articles/94a.pdf.