MODELING COMPUTATIONAL NETWORKS BY
TIME-VARYING SYSTEMS
Alle-Jan van der Veen and Patrick Dewilde
Delft University of Technology, Dept. of Electrical Engineering
Mekelweg 4, 2628 CD Delft, The Netherlands
ABSTRACT
Many computational schemes in linear algebra can be studied from the point of view of (discrete) time-varying linear systems theory. For example, the operation ‘multiplication of a vector by an upper triangular
matrix’ can be represented by a computational scheme (or model) that acts sequentially on the entries of the
vector. The number of intermediate quantities (‘states’) that are needed in the computations is a measure of
the complexity of the model. If the matrix is large but its complexity is low, then not only multiplication, but
also other operations such as inversion and factorization, can be carried out efficiently using the model rather
than the original matrix. In the present paper we discuss a number of techniques in time-varying system
theory that can be used to capture a given matrix into such a computational network.
Keywords: computational linear algebra models, model reduction, fast matrix algorithms.
1. INTRODUCTION
1.1. Computational linear algebra and time-varying modeling
In the intersection of linear algebra and system theory is the field of computational linear algebra. Its purpose is to find efficient algorithms for linear algebra problems (matrix multiplication, inversion, approximation). A useful model for matrix computations is provided by the state equations that are used in dynamical
system theory. Such a state model is often quite natural: in any algorithm which computes a matrix multiplication or inversion, the global operation is decomposed into a sequence of local operations that each act on a
limited number of matrix entries (ultimately two), assisted by intermediate quantities that connect the local
operations. These quantities can be called the states of the algorithm, and translate to the state of the dynamical system that is the computational model of the matrix operation. Although many matrix operations can
be captured this way by some linear dynamical system, our interest is in matrices that possess some kind of
structure which allows for efficient (“fast”) algorithms: algorithms that exploit this structure. Structure in
a matrix is inherited from the origin of the linear algebra problem, and is for our purposes typically due to
the modeling of some (physical) dynamical system. Many signal processing applications, inverse scattering
problems and least squares estimation problems give structured matrices that can indeed be modeled by a
low complexity network.
Besides sparse matrices (many zero entries), traditional structured matrices are Toeplitz and Hankel matrices, which translate to linear time-invariant (LTI) systems. Associated computational algorithms are well-known, e.g., for Toeplitz systems we have Schur recursions for LU- and Cholesky factorization [1], Levinson
recursions for factorization of the inverse [2], Gohberg/Semencul recursions for computing the inverse [3],
and Schur-based recursions for QR factorization [4]. The resulting algorithms have computing complexity
of order n² for matrices of size (n × n), as compared to n³ for algorithms that do not take the Toeplitz structure into account.⁰

⁰ Integration, the VLSI Journal, 16(3):267–291, December 1993.

Generalizations of the Toeplitz structure are obtained by considering matrices which
have a so-called displacement structure [5, 6, 7, 8]: matrices G for which there are (simple) matrices F1 , F2
such that
  G − F1∗ G F2                                                (1)
is of low rank, α say. Such a matrix occurs e.g., in stochastic adaptive prediction problems as the covariance
matrix of the received stochastic signal; the matrix is called α-stationary. An overview of inversion and
factorization algorithms for such matrices can be found in [9].
In this paper, we pursue a complementary notion of structure which we will call a state structure. The state
structure applies to upper triangular matrices and is seemingly unrelated to the Toeplitz structure mentioned
above. A first purpose of the computational schemes considered in this paper is to perform a desired linear
transformation T on a vector (‘input sequence’) u = [u1 u2 · · · un ], with an output vector or sequence y = uT as the result. The key idea is that we can associate with this matrix-vector multiplication a computational network that takes u and computes y, and that matrices with a sparse state structure have a computational network of low complexity so that using the network to compute y is more efficient than computing uT directly. To introduce this notion, consider an upper triangular matrix T
along with its inverse,
       [ 1   1/2   1/6   1/24 ]             [ 1   −1/2     0      0   ]
  T =  [      1    1/3   1/12 ]      T⁻¹ =  [       1    −1/3     0   ]
       [            1    1/4  ]             [              1    −1/4  ]
       [                  1   ]             [                     1   ]
The inverse of T is sparse, which is an indication of a sparse state structure. A computational network
that models multiplication by T is depicted in figure 1(a), and it is readily verified that this network does
indeed compute [y1 y2 y3 y4 ] = [u1 u2 u3 u4 ] T by trying vectors of the form [1 0 0 0] up to [0 0 0 1].
The dependence of yk on ui , i ≤ k, introduces intermediate quantities xk called states. At each point k the
processor in the stage at that point takes its input data uk from the input sequence u and computes a new
output data yk which is part of the output sequence y generated by the system. The computations in the
network are split into sections, which we will call stages, where the k-th stage consumes uk and produces
yk . To execute the computation, the processor will use some remainder of its past history, i.e., the state xk ,
which has been computed by the previous stages and which was temporarily stored in registers indicated by
the symbol z. The complexity of the computational network is equal to the number of states at each point.
A non-trivial computational network to compute y = uT which requires fewer states is shown in figure 1(b). The total number of multiplications required in this network that are different from 1 is 5, as compared to 6 in a direct computation using T. Although we have gained only one multiplication here, for a less moderate example, say an (n × n) upper triangular matrix with n = 10000 and d ≪ n states at each point, the number of multiplications in the network can be as low as 8dn, instead of ≈ 1/2 n² for a direct computation using T. Note that the number of states can vary from one point to the other, depending on the nature of T. In the
example above, the number of states entering the network at point 1 is zero, and the number of states leaving
the network at point 4 is also zero. If we changed the value of one of the entries of the 2 × 2 submatrix in the upper-right corner of T, then two states would have been required to connect stage 2 to stage 3.
Figure 1. Computational networks corresponding to T. (a) Direct (trivial) realization, (b) minimal realization.
The computations in the network can be summarized by the following recursion, for k = 1 to n:

               xk+1 = xk Ak + uk Bk
  y = uT  ⇔                                                   (2)
               yk   = xk Ck + uk Dk

or

                                              [ Ak   Ck ]
  [ xk+1   yk ] = [ xk   uk ] Tk ,      Tk =  [         ]
                                              [ Bk   Dk ]
in which xk is the state vector at time k (taken to have dk entries), Ak is a dk × dk+1 (possibly non-square) matrix, Bk is a 1 × dk+1 vector, Ck is a dk × 1 vector, and Dk is a scalar. More general computational networks
can have the number of inputs and outputs at each stage different from one, and possibly also varying from
stage to stage. In the example, we have a sequence of realization matrices
       [  ·    ·  ]        [ 1/3  1 ]        [ 1/4  1 ]        [ ·  1 ]
  T1 = [          ]   T2 = [        ]   T3 = [        ]   T4 = [      ]
       [ 1/2   1  ]        [ 1/3  1 ]        [ 1/4  1 ]        [ ·  1 ]
where the ‘·’ indicates entries that actually have dimension 0 because the corresponding states do not exist.
The recursion in equation (2) shows that it is a recursion for increasing values of k: the order of computations
in the network is strictly from left to right, and we cannot compute yk unless we know xk , i.e., unless we have
processed u1 · · · uk−1 . On the other hand, yk does not depend on uk+1 · · · un . This is a direct consequence of
the fact that T has been chosen upper triangular, so that such an ordering of computations is indeed possible.
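The recursion (2) is easy to trace mechanically. The sketch below (assuming NumPy) encodes the stage matrices of the minimal realization of the 4 × 4 example and checks the sequential computation against a direct matrix-vector product; the function name apply_T is illustrative only.

```python
import numpy as np

# Stage matrices T_k = {A_k, B_k, C_k, D_k} of the minimal realization of the
# 4x4 example T.  Border A's and C's have dimension 0 (no states there).
A = [np.zeros((0, 1)), np.array([[1/3]]), np.array([[1/4]]), np.zeros((1, 0))]
B = [np.array([[1/2]]), np.array([[1/3]]), np.array([[1/4]]), np.zeros((1, 0))]
C = [np.zeros((0, 1)), np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])]
D = [np.array([[1.0]])] * 4

def apply_T(u):
    """Compute y = u T sequentially with the state recursion (2)."""
    x = np.zeros((1, 0))                 # d_1 = 0: no initial state
    y = []
    for k in range(4):
        y.append((x @ C[k] + u[k] * D[k]).item())   # y_k = x_k C_k + u_k D_k
        x = x @ A[k] + u[k] * B[k]                  # x_{k+1} = x_k A_k + u_k B_k
    return np.array(y)

T = np.array([[1, 1/2, 1/6, 1/24],
              [0, 1,   1/3, 1/12],
              [0, 0,   1,   1/4 ],
              [0, 0,   0,   1   ]])
u = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(apply_T(u), u @ T)
```

Note that the state dimension changes from stage to stage simply by letting the stage matrices be non-square.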
1.2. Time-varying systems
A link with system theory is obtained when T is regarded as the transfer matrix of a non-stationary causal
linear system with input u and output y = uT. The i-th row of T then corresponds to the impulse response of the system when excited by an impulse at time instant i, that is, the output y due to an input vector u = [0 · · · 0 1 0 · · · 0], where ui = 1. The case where T has a Toeplitz structure then corresponds with a time-invariant system for which the impulse response due to an impulse at time i + 1 is just the same as the response
due to an impulse at time i, shifted over one position. The computational network is called a state realization
of T, and the number of states at each point of the computational network is called the system order of the
realization at that point in time. For time-invariant systems, the state realization can be chosen constant in
time. Since for time-varying systems the number of state variables need not be constant in time, but can
increase and shrink, it is seen that in this respect the time-varying realization theory is much richer, and that
the accuracy of an approximating computational network of T can be varied in time at will.
1.3. Sparse computational models
If the number of state variables is relatively small, then the computation of the output sequence is efficient
in comparison with a straight computation of y = uT. One example of a matrix with a small state space is the case where T is an upper triangular band-matrix: Tij = 0 for j − i ≥ p. In this case, the state dimension is equal to or smaller than p − 1, since only p − 1 of the previous input values need to be remembered at
any point in the multiplication. However, the state space model can be much more general, e.g., if a banded
matrix has an inverse, then this inverse is known to have a sparse state space (of the same complexity) too,
as we had in the example above. Moreover, this inversion can be easily carried out by local computations
on the realization of T: if T⁻¹ = S, then u = yS can be computed via

  xk+1 = xk Ak + uk Bk          xk+1 = xk (Ak − Ck Dk⁻¹ Bk ) + yk Dk⁻¹ Bk
                           ⇔
  yk   = xk Ck + uk Dk          uk   = −xk Ck Dk⁻¹ + yk Dk⁻¹

hence S has a computational model given by

        [ Ak − Ck Dk⁻¹ Bk     −Ck Dk⁻¹ ]
  Sk =  [                              ]                      (3)
        [ Dk⁻¹ Bk              Dk⁻¹    ]
Observe that the model for S T −1 is obtained in a local way from the model of T : Sk depends only on
Tk . The sum and product of matrices with sparse state structure have again a sparse state structure with
number of states at each point not larger than the sum of the number of states of its component systems, and
computational networks of these compositions (but not necessarily minimal ones) can be easily derived from
those of its components.
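The locality of the inversion can be sketched in a few lines (NumPy; helper names are illustrative): each stage of S = T⁻¹ is obtained from the corresponding stage of T by equation (3), and the resulting transfer matrix is compared with a direct inverse.

```python
import numpy as np

# Stage matrices of the example realization of T (as in section 1.1).
A = [np.zeros((0, 1)), np.array([[1/3]]), np.array([[1/4]]), np.zeros((1, 0))]
B = [np.array([[1/2]]), np.array([[1/3]]), np.array([[1/4]]), np.zeros((1, 0))]
C = [np.zeros((0, 1)), np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])]
D = [np.array([[1.0]])] * 4

def invert_stage(Ak, Bk, Ck, Dk):
    """Equation (3): the stage of S = T^{-1} from the stage of T alone."""
    Di = np.linalg.inv(Dk)
    return Ak - Ck @ Di @ Bk, Di @ Bk, -Ck @ Di, Di

def transfer(A, B, C, D):
    """Rebuild the upper triangular matrix realized by the given stages."""
    n = len(A)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, i] = D[i].item()
        F = B[i]                         # accumulates B_i A_{i+1} ... A_{j-1}
        for j in range(i + 1, n):
            T[i, j] = (F @ C[j]).item()
            F = F @ A[j]
    return T

AS, BS, CS, DS = zip(*(invert_stage(A[k], B[k], C[k], D[k]) for k in range(4)))
S, T = transfer(AS, BS, CS, DS), transfer(A, B, C, D)
assert np.allclose(S, np.linalg.inv(T))
```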
At this point, one might wonder for which class of matrices T there exists a sparse computational network (or
state space realization) that realizes the same multiplication operator. For an upper triangular (n × n) matrix
T, define matrices Hi (1 ≤ i ≤ n), which are submatrices of T, as
       [ Ti−1,i    Ti−1,i+1    · · ·    Ti−1,n ]
       [ Ti−2,i    Ti−2,i+1             Ti−2,n ]
  Hi = [    ..                   ..       ..   ]
       [ T1,i      T1,i+1      · · ·    T1,n   ]
(see figure 2). We call the Hi (time-varying) Hankel matrices, as they will have a Hankel structure (constant
along anti-diagonals) if T has a Toeplitz structure.1 In terms of the Hankel matrices, the criterion by which
¹ Warning: in the current context (arbitrary upper triangular matrices) the Hi do not have a Hankel structure and the predicate ‘Hankel matrix’ could lead to misinterpretations. Our terminology finds its motivation in system theory, where the Hi are related to an abstract operator HT which is commonly called the Hankel operator. For time-invariant systems, HT reduces to an operator with a matrix representation that has indeed a Hankel structure.
Figure 2. Hankel matrices are (mirrored) submatrices of T.
matrices with a sparse state structure can be detected is given by the following Kronecker or Ho-Kalman
[10] type theorem (proven in section 3).
Theorem 1. The number of states that are needed at stage k in a minimal computational network of an
upper triangular matrix T is equal to the rank of its k-th Hankel matrix Hk .
Let’s verify this statement for our example. The Hankel matrices are

                                             [ 1/3   1/12 ]         [ 1/4  ]
  H1 = [·]    H2 = [ 1/2  1/6  1/24 ]   H3 = [            ]    H4 = [ 1/12 ]
                                             [ 1/6   1/24 ]         [ 1/24 ]

Since rank H1 = 0, no states x1 are needed. One state is needed for x2 and one for x4 , because rank H2 = rank H4 = 1. Finally, also only one state is needed for x3 , because rank H3 = 1. In fact, this is (for this example) the only non-trivial rank condition: if one of the entries in H3 had been different, then two states would have been needed. In general, rank Hi ≤ min(i − 1, n − i + 1), and for a general upper triangular matrix T without state structure, a computational model will indeed require up to min(i − 1, n − i + 1) states for xi .
1.4. Example: Cholesky Factorization
Computational advantages of the time-varying system theory are not per se restricted to upper triangular operators. To illustrate this, we introduce the following example. Consider a (strictly) positive definite matrix
G, and suppose G has been normalized such that its main diagonal is the identity matrix. It is desired to
obtain a Cholesky factorization of G: a factorization G L∗ L, where L is an upper triangular matrix. For
Toeplitz matrices, this can be done using Schur recursions [1, 11]. The Schur algorithm can be generalized
in various ways to apply to triangular factorizations of general matrices [12], matrices with a displacement
structure [5, 6, 7, 8], and approximate factorizations on a staircase band [13]. In the context of this paper, it
can be shown that if the upper triangular part of G has a state structure, then a computational model of this
part can be used to determine a computational model of the factor L.
A transition to upper matrices is obtained by an analog of the Cayley transformation, which is used to map positive functions to contractive (scattering) functions. Define P(G) to be the upper triangular part of G, and G1 = 2P(G) − I; then S = (G1 + I)⁻¹ (G1 − I) is a well-defined and contractive upper triangular matrix: ‖S‖ < 1. It has a direct relation with G through P(G) = (I − S)⁻¹. This shows that the state structure of G carries over to S: if P(G) has a computational model with dk states at point k, then P(G)⁻¹ and hence also S have computational models with at each point the same number of states, and can be directly derived using equation (3).
A computational model for P(G) is obtained using the realization algorithm of section 3. Thus let {A, B, C, D} be a realization of P(G) (satisfying certain conditions which we omit at this point). Via the model of S, it is possible to derive a state model of the factor L satisfying G = L∗ L. It turns out that the model of L is given by

        [ Ak     Ck                      ]
  Lk =  [                                ]
        [ Bk#    (D − Ck∗ Mk Ck )^{1/2}  ]

where Bk# = (D − Ck∗ Mk Ck )^{−1/2} (Bk − Ck∗ Mk Ak ), and Mk is given by the recursion

  M1   = [·]
  Mk+1 = Ak∗ Mk Ak + (Bk − Ck∗ Mk Ak )∗ (D − Ck∗ Mk Ck )⁻¹ (Bk − Ck∗ Mk Ak )
2. OBJECTIVES OF COMPUTATIONAL MODELING
With the preceding section as background material, we are now in a position to identify the objectives of
our computational modeling. We will assume throughout that we are dealing with upper triangular matrices.
However, applications which involve other types of matrices are viable if they provide some transformation to the class of upper triangular matrices. In addition, we assume that the concept of a sparse state structure is
meaningful for the problem, in other words that a typical matrix in the application has a sequence of Hankel
matrices that has low rank (relative to the size of the matrix), or that an approximation of that matrix by
one whose Hankel matrices have low rank would indeed yield a useful approximation of the underlying
(physical) problem that is described by the original matrix.
For such a matrix T, the generic objective is to determine a minimal computational model {Tk } for it by
which multiplications of vectors by T are effectively carried out, but in a computationally efficient and numerically stable manner. This objective is divided into four subproblems: (1) realization of a given matrix
T by a computational model, (2) embedding of this realization in a larger model that consists entirely of unitary (lossless) stages, (3) factorization of the stages of the embedding into a cascade of elementary (degree-1)
lossless sections. It could very well be that the originally given matrix has a computational model of a very
high order. Then intermediate in the above sequence of steps is (4) approximation of a given realization of
T by one of lower complexity. These steps are motivated below.
Realization
The first step is, given T, to determine any minimal computational network Tk = {Ak Bk Ck Dk } that models T. This problem is known as the realization problem. If the Hankel matrices of T have low rank, then {Tk } is a computationally efficient realization of the operation ‘multiplication by T’.
Lossless embedding
From T, all other minimal realizations of T can be derived by state transformations. Not all of these have
the same numerical stability. This is because the computational network has introduced a recursive aspect to
the multiplication: states are used to extract information from the input vector u, and a single state xk gives
a contribution both to the current output yk and to the sequence xk+1 , xk+2 , etc. In particular, a perturbation
in xk (or uk ) also carries over to this sequence. Suppose that T is bounded in norm by some number, say ‖T‖ ≤ 1,² so that we can measure perturbation errors relative to 1. Then a realization of T is said to be error insensitive if ‖Tk ‖ ≤ 1, too. In that case, an error in [xk uk ] is not magnified by Tk , and the resulting error in [xk+1 yk ] is smaller than the original perturbation. Hence the question is: is it possible to obtain a realization for which ‖Tk ‖ ≤ 1 if T is such that ‖T‖ ≤ 1? The answer is yes, and an algorithm to obtain such a realization is given by the solution of the lossless embedding problem. This problem is the following: for a given matrix T with ‖T‖ ≤ 1, determine a computational model {Σ k } such that (1) each Σ k is a unitary matrix, and (2) T is a subsystem of the transfer matrix Σ that corresponds to {Σ k }. The latter requirement means that T is the transfer matrix from a subset of the inputs of Σ to a subset of its outputs: Σ can be partitioned conformably as

       [ Σ11   Σ12 ]
  Σ =  [           ] ,        T = Σ11 .
       [ Σ21   Σ22 ]
The fact that T is a subsystem of Σ implies that a certain submatrix of Σ k is a realization Tk of T, and hence from the unitarity of Σ k we have that ‖Tk ‖ ≤ 1. From the construction of the solution to the embedding problem, it will follow that we can ensure that this realization is minimal, too.
² ‖T‖ is the operator norm (matrix 2-norm) of T: ‖T‖ = sup‖u‖₂≤1 ‖uT‖₂ .
Figure 3. Cascade realization of a contractive 8 × 8 matrix T, with a maximum of 3 states at each point.
Cascade factorization
Assuming that we have obtained such a realization Σ k , it is possible to break down the operation ‘multiplication by Σ k ’ on vectors [xk uk ] into a minimal number of elementary operations, each in turn acting on two entries of this vector. Because Σ k is unitary, we can use elementary unitary operations (acting on scalars) of
the form

  [ a1  b1 ] [  c    s  ]
             [          ]  =  [ a2  b2 ] ,        c c∗ + s s∗ = 1,
             [ −s∗   c∗ ]
i.e., elementary rotations. The use of such elementary operations will ensure that Σ k is internally numerically
stable, too. In order to make the number of elementary rotations minimal, the realization Σ is transformed to an equivalent realization Σ ′, which realizes the same system Σ, is still unitary and which still contains a realization T′ for T. A factorization of each Σ ′k into elementary rotations is known as a cascade realization
of Σ. A possible minimal computational model for T that corresponds to such a cascade realization is drawn
in figure 3. In this figure, each circle indicates an elementary rotation. The precise form of the realization
depends on whether the state dimension is constant, shrinks or grows. The realization can be divided horizontally into elementary sections, where each section describes how a single state entry is mapped to an
entry of the ‘next state’ vector xk+1 . It has a number of interesting properties; one is that it is pipelineable,
which is interesting if the operation ‘multiplication by T’ is to be carried out on a collection of vectors u on
a parallel implementation of the computational network. The property is a consequence of the fact that the
signal flow in the network is strictly uni-directional: from top-left to bottom-right, so that computations on a
new vector u (a new uk and a new xk ) can commence in the top-left part of the network, while computations
on the previous u are still being carried out in the bottom-right part.
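A minimal sketch of one such elementary section, written Givens-style for the real case (so the conjugates drop out) and chosen here so that it rotates a given pair of entries into the form (r, 0); the function name and the orientation of the rotation are illustrative assumptions, not the paper's convention.

```python
import numpy as np

def rotation(a1, b1):
    """A 2x2 unitary section [[c, s], [-s, c]] with c^2 + s^2 = 1 (real case),
    chosen so that it rotates the pair (a1, b1) into (r, 0)."""
    r = np.hypot(a1, b1)
    c, s = (1.0, 0.0) if r == 0 else (a1 / r, b1 / r)
    return np.array([[c, s], [-s, c]]), r

Q, r = rotation(3.0, 4.0)
assert np.allclose(Q @ Q.T, np.eye(2))              # the section is unitary
assert np.allclose(Q @ np.array([3.0, 4.0]), [r, 0.0])
```

Because each circle in figure 3 is such a unitary 2 × 2 operation, errors introduced at one section are not amplified as they propagate through the network.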
Approximation
In the previous items, we have assumed that the matrix T has indeed a computational model of an order that
is low enough to favor a computational network over an ordinary matrix multiplication. However, if the rank
of the Hankel matrices of T (the system order) is not low, then it often makes sense to approximate T by a new upper triangular matrix Ta that has a lower complexity. While it is fairly well known in linear algebra how to
obtain a (low-rank) approximant to a matrix in a certain norm (e.g., by use of the singular value decomposition (SVD)), such approximations are not necessarily appropriate for our purposes, because the approximant
should be upper triangular again, and have a lower system order. Because the system order at each point is
given by the rank of the Hankel matrix at that point, a possible approximation scheme is to approximate
each Hankel operator by one that is of lower rank (this could be done using the SVD). However, because the
8
Hankel matrices have many entries in common, it is not clear at once that such an approximation scheme
is feasible: replacing one Hankel matrix by one of lower rank in a certain norm might make it impossible
for the next Hankel matrix to find an optimal approximant. The severity of this dilemma is mitigated by a
proper choice of the error criterion. In fact, it is remarkable that this dilemma has a nice solution, and that
this solution can be obtained in a non-iterative manner. The error criterion for which a solution is obtained is
called the Hankel norm and denoted by ‖ · ‖H : it is the maximum over the operator norm (the matrix 2-norm)
of each individual Hankel matrix approximation, and a generalization of the Hankel norm for time-invariant
systems. In terms of the Hankel norm, the following theorem holds true and generalizes the model reduction
techniques based on the Adamyan-Arov-Krein paper [14] to time-varying systems:
Theorem 2. ([15]) Let T be a strictly upper triangular matrix and let Γ = diag(γi ) be a diagonal Hermitian matrix which parametrizes the acceptable approximation tolerance (γi > 0). Let Hk be the Hankel matrix of Γ−1 T at stage k, and suppose that, for each k, none of the singular values of Hk are equal to 1. Then there exists a strictly upper triangular matrix Ta with system order at stage k equal to the number of singular values of Hk that are larger than 1, such that

  ‖ Γ−1 (T − Ta ) ‖H ≤ 1 .
In fact, there is an algorithm that determines a model for Ta directly from a model of T. Γ can be used to influence the local approximation error. For a uniform approximation, Γ = γ I, and hence ‖T − Ta ‖H ≤ γ : the approximant is γ-close to T in Hankel norm, which implies in particular that the approximation error in each row or column of T is less than γ. If one of the γi is made larger than γ, then the error at the i-th row of T can become larger also, which might result in an approximant Ta with fewer states. Hence Γ can be chosen to yield an approximant that is accurate at certain points but less tight at others, and whose complexity is minimal.
As a numerical example of the use of theorem 2, let the matrix to be approximated be

       [ 0   .800   .200   .050   .013   .003 ]
       [ 0    0     .600   .240   .096   .038 ]
  T =  [ 0    0      0     .500   .250   .125 ]
       [ 0    0      0      0     .400   .240 ]
       [ 0    0      0      0      0     .300 ]
       [ 0    0      0      0      0      0   ]

We have indicated the position of the Hankel matrix H4 (the mirrored 3 × 3 submatrix in the upper-right corner). Taking Γ = 0.1 I, the non-zero singular values of the Hankel operators of Γ−1 T are

          H1     H2     H3     H4     H5     H6
  σ1 :     –    8.26   6.85   6.31   5.53   4.06
  σ2 :     –     –     0.33   0.29   0.23    –
  σ3 :     –     –      –     0.01    –      –

Hence T has a state space realization which grows from zero states (i = 1) to a maximum of 3 states (i = 4), and then shrinks back again. The number of Hankel singular values of Γ−1 T that are larger than one is 1 at each stage (i = 2 · · · 6). This corresponds to the number of states of the approximant at each point. Using the
Figure 4. Computational scheme (a) of T and (b) of Ta .
techniques in [15], the approximant can be obtained as

        [ 0   .790   .183   .066   .030   .016 ]
        [ 0    0     .594   .215   .098   .052 ]
  Ta =  [ 0    0      0     .499   .227   .121 ]
        [ 0    0      0      0     .402   .214 ]
        [ 0    0      0      0      0     .287 ]
        [ 0    0      0      0      0      0   ]

with non-zero Hankel singular values (scaled by Γ)

          H1     H2     H3     H4     H5     H6
  σ1 :     –    8.15   6.71   6.16   5.36   3.82

whose number indeed corresponds to the number of Hankel singular values of Γ−1 T that are larger than 1. Also, the modeling error is

  ‖ Γ−1 (T − Ta ) ‖H = sup{0.334, 0.328, 0.338, 0.351, 0.347} = 0.351,

which is indeed smaller than 1. The corresponding computational schemes of T and Ta are depicted in figure 4.
In the remainder of the paper, we will discuss an outline of the algorithms that are involved in the first three
of the above items. A full treatment of item 1 was published in [16], item 2 in [17], and item 3 was part of
the subject of [18]. Theory on Hankel norm approximations is available [15, 19] but is omitted here for lack
of space.
3. REALIZATION OF A TIME-VARYING SYSTEM
The purpose of this section is to give a proof of the realization theorem for time-varying systems (specialized
to finite matrices): theorem 1 of section 1.3. A more general and detailed discussion can be found in [16].
Recall that we are given an upper triangular matrix T, and view it as a time-varying system transfer operator.
The objective is to determine a time-varying state realization for it. The approach is as in Ho-Kalman’s
theory for the time-invariant case [10]. Denote a certain time instant as ‘current time’, apply all possible
inputs in the ‘past’ with respect to this instant, and measure the corresponding outputs in ‘the future’, from
the current time instant on. For each time instant, we select in this way an upper-right part of T: these are its
Hankel matrices as defined in the introduction. Theorem 1 claimed that the rank of Hk is equal to the order
of a minimal realization at point k.
P ROOF of theorem 1. The complexity criterion can be derived straightforwardly, and the derivation will give
rise to a realization algorithm as well. Suppose that {Ak Bk Ck Dk } is a realization for T as in equation (2).
Then a typical Hankel matrix has the following structure:

       [ B5 C6            B5 A6 C7       B5 A6 A7 C8    · · · ]
       [ B4 A5 C6         B4 A5 A6 C7                         ]
  H6 = [ B3 A4 A5 C6                         ..               ]
       [    ..                                                ]
       [ B1 A2 · · · A5 C6                                    ]

       [ B5            ]
       [ B4 A5         ]
     = [ B3 A4 A5      ] · [ C6    A6 C7    A6 A7 C8    · · · ]   =:  𝒞6 𝒪6          (4)
       [    ..         ]
       [ B1 A2 · · · A5 ]
From the decomposition Hk = 𝒞k 𝒪k it is directly inferred that if Ak is of size dk × dk+1 , then rank Hk is at most equal to dk . We have to show that there exists a realization {Ak Bk Ck Dk } for which dk = rank Hk : if it does, then clearly this must be a minimal realization. To find such a minimal realization, take any minimal factorization Hk = 𝒞k 𝒪k into full rank factors 𝒞k and 𝒪k . We must show that there are matrices {Ak Bk Ck Dk } such that

        [ Bk−1       ]
  𝒞k =  [ Bk−2 Ak−1  ]        𝒪k = [ Ck    Ak Ck+1    Ak Ak+1 Ck+2    · · · ]
        [    ..      ]
To this end, we use the fact that Hk satisfies a shift-invariance property: for example, with H6← denoting H6 without its first column, we have

         [ B5            ]
         [ B4 A5         ]
  H6← =  [ B3 A4 A5      ] · A6 · [ C7    A7 C8    A7 A8 C9    · · · ]
         [    ..         ]
         [ B1 A2 · · · A5 ]

In general, Hk← = 𝒞k Ak 𝒪k+1 , and in much the same way, Hk↑ = 𝒞k−1 Ak−1 𝒪k , where Hk↑ is Hk without its first row. The shift-invariance properties carry over to 𝒞k and 𝒪k , e.g., 𝒪k← = Ak 𝒪k+1 , and we obtain that Ak = 𝒪k← 𝒪k+1∗ (𝒪k+1 𝒪k+1∗ )⁻¹ , where ‘∗ ’ denotes complex conjugate transposition. The inverse exists because 𝒪k+1 is of full rank. Ck follows as the first column of the chosen 𝒪k , while Bk is the first row of 𝒞k+1 . It remains to verify that 𝒞k and 𝒪k are indeed generated by this realization. This is straightforward by a recursive use of the shift-invariance properties.
The construction in the above proof leads to a realization algorithm (algorithm 1). In this algorithm, A(:, 1 : p) denotes the first p columns of A, and A(1 : p, :) the first p rows. The key part of the algorithm is to obtain a basis 𝒪k for the row space of each Hankel matrix Hk of T. The singular value decomposition (SVD) [20] is a robust tool for doing this. It is a decomposition of Hk into factors Uk , Σk , Vk , where Uk and Vk are unitary matrices whose columns contain the left and right singular vectors of Hk , and Σk is a diagonal matrix with
In:   T       (an upper triangular matrix)
Out:  {Tk }   (a minimal realization)

  𝒪n+1 = [·],  𝒞n+1 = [·]
  for k = n, · · · , 1
      Hk =: Uk Σk Vk∗
      dk = rank Σk
      𝒞k = Uk Σk (:, 1 : dk )
      𝒪k = Vk∗ (1 : dk , :)
      Ak = 𝒪k← 𝒪k+1∗
      Ck = 𝒪k (:, 1)
      Bk = 𝒞k+1 (1, :)
      Dk = T(k, k)
  end

Algorithm 1. The realization algorithm.
positive entries (the singular values of Hk ) on the diagonal. The integer dk is set equal to the number of nonzero singular values of Hk , and Vk∗ (1 : dk , :) contains the corresponding singular vectors. The rows of Vk∗ (1 : dk , :) span the row space of Hk . Note that it is natural that d1 = 0 and dn+1 = 0, so that the realization starts and ends with a zero number of states. The rest of the realization algorithm is straightforward in view of the shift-invariance property. It is in fact very reminiscent of the Principal Component identification method in system theory [21].
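Algorithm 1 translates almost line for line into NumPy. The sketch below follows the algorithm as reconstructed here: because the SVD delivers 𝒪k with orthonormal rows, the pseudo-inverse of 𝒪k+1 reduces to 𝒪k+1∗. It verifies on the 4 × 4 example that the computed realization has the minimal state dimensions and reproduces T.

```python
import numpy as np

def hankel(T, k):                   # mirrored upper-right submatrix (1-based k)
    return T[:k-1, k-1:][::-1, :]

def realize(T, tol=1e-10):
    """Sketch of algorithm 1: a minimal realization {A_k,B_k,C_k,D_k} of T."""
    n = T.shape[0]
    A, B, C, D = [None]*n, [None]*n, [None]*n, [None]*n
    O_next = np.zeros((0, 0))       # O_{n+1} = [.]
    Cf_next = np.zeros((1, 0))      # (first row of) C_{n+1} = [.]
    for k in range(n, 0, -1):
        U, s, Vt = np.linalg.svd(hankel(T, k), full_matrices=False)
        d = int(np.sum(s > tol))
        Cf = U[:, :d] * s[:d]       # C_k = U_k Sigma_k(:, 1:d_k)
        O = Vt[:d, :]               # O_k = V_k*(1:d_k, :)
        A[k-1] = O[:, 1:] @ O_next.conj().T     # A_k = O_k^{<-} O_{k+1}^*
        C[k-1] = O[:, :1]                       # first column of O_k
        B[k-1] = Cf_next[:1, :]                 # first row of C_{k+1}
        D[k-1] = np.array([[T[k-1, k-1]]])
        O_next, Cf_next = O, Cf
    return A, B, C, D

T = np.array([[1, 1/2, 1/6, 1/24],
              [0, 1,   1/3, 1/12],
              [0, 0,   1,   1/4 ],
              [0, 0,   0,   1   ]])
A, B, C, D = realize(T)
assert [a.shape[0] for a in A] == [0, 1, 1, 1]  # state dimensions d_1..d_4

# Rebuild T from the realization: T[i,j] = B_i A_{i+1} ... A_{j-1} C_j.
Tr = np.zeros((4, 4))
for i in range(4):
    Tr[i, i] = D[i].item()
    F = B[i]
    for j in range(i + 1, 4):
        Tr[i, j] = (F @ C[j]).item()
        F = F @ A[j]
assert np.allclose(Tr, T)
```

The sign ambiguity of the SVD factors is harmless: in every product Bi Ai+1 · · · Aj−1 Cj the signs cancel, since adjacent factors are always recombined into entries of a Hankel matrix.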
The above is only an algorithmic outline. Because Hk+1 has a large overlap with Hk , an efficient SVD updating algorithm can be devised that takes this structure into account. Note that, based on the singular values of Hk , a reduced order model can be obtained by taking a smaller basis for 𝒪k , a technique that is known in the time-invariant context as balanced model reduction. Although widely used for time-invariant systems, this is in fact a “heuristic” model reduction theory, as the modeling error norm is not known. A precise approximation theory results if the tolerance on the error is given in terms of the Hankel norm [15].
4. ORTHOGONAL EMBEDDING OF CONTRACTIVE TIME-VARYING SYSTEMS
This section discusses a constructive solution of the problem of the realization of a given (strictly) contractive
time-varying system as the partial transfer operator of a lossless system. This problem is also known as the
Darlington problem in classical network theory [22], while in control theory, a variant of it is known as the
Bounded Real Lemma [23]. The construction is done in a state space context and gives rise to a time-varying
Riccati-type equation. We are necessarily brief here; details can be found in [17].
The problem setting is the following. Let the transfer operator T of a contractive causal linear time-varying system with n1 inputs and n0 outputs be given, and let Tk = {Ak Bk Ck Dk } be a given time-varying state space realization of T (as obtained in the previous section). Then determine a unitary and causal multi-port Σ (corresponding to a lossless system) such that T = Σ11 , along with a state realization {Σ k }, where
       [ Σ11   Σ12 ]               [ AΣ,k   CΣ,k ]
  Σ =  [           ] ,      Σ k =  [             ]
       [ Σ21   Σ22 ]               [ BΣ,k   DΣ,k ]
Without loss of generality we can in addition require {Σ k } to be a unitary realization: Σ k Σ ∗k = I, Σ ∗k Σ k = I. Since T ∗ T + Σ∗21 Σ21 = I, this will be possible only if T is contractive: I − T ∗ T ≥ 0. While it is clear that contractivity is a necessary condition, we will require strict contractivity of T in the sequel, which is sufficient to construct a solution to the embedding problem. (The extension to the boundary case is possible but its derivation is non-trivial.)
Theorem 3. Let T be an upper triangular matrix, with state realization T_k = {A_k, B_k, C_k, D_k}. If T is strictly contractive and T is controllable (𝒞_k^∗𝒞_k > 0 for all k), then the embedding problem has a solution Σ with a lossless realization Σ_k = {A_{Σ,k}, B_{Σ,k}, C_{Σ,k}, D_{Σ,k}}, such that Σ_{11} = T. This realization has the following properties (where T has n_1 inputs, n_0 outputs, and d_k incoming states at instant k):
– A_Σ is state equivalent to A by an invertible state transformation R, i.e., A_{Σ,k} = R_k A_k R_{k+1}^{-1},
– the number of inputs added to T in Σ is equal to n_0,
– the number of added outputs is time-varying and given by d_k − d_{k+1} + n_1 ≥ 0.
PROOF (partly). The easy part of the proof is by construction; the harder existence proofs are omitted. We use the property that a system is unitary if its realization is unitary, and that T = Σ_{11} if T_k is a submatrix of Σ_k, up to a state transformation.
Step 1 of the construction is to find, for each time instant k, a state transformation R_k and matrices B_{2,k} and D_{21,k} such that the columns of Σ_{1,k},

    Σ_{1,k} = [ R_k         ] [ A_k      C_k      ] [ R_{k+1}^{-1}    ]
              [      I      ] [ B_k      D_k      ] [               I ]
              [           I ] [ B_{2,k}  D_{21,k} ]

are isometric, i.e., Σ_{1,k}^∗ Σ_{1,k} = I. Upon writing out these equations, putting M_k = R_k^∗ R_k and omitting the time subscripts k on the realization matrices, we obtain the set of equations

    A^∗MA + B^∗B + B_2^∗B_2       = M_{k+1}
    A^∗MC + B^∗D + B_2^∗D_{21}    = 0
    C^∗MC + D^∗D + D_{21}^∗D_{21} = I

which by substitution lead to

    M_{k+1} = A_k^∗M_kA_k + B_k^∗B_k
              + (A_k^∗M_kC_k + B_k^∗D_k)(I − D_k^∗D_k − C_k^∗M_kC_k)^{-1}(D_k^∗B_k + C_k^∗M_kA_k)     (5)

This equation can be regarded as a time-recursive Riccati-type equation with time-varying parameters. It can be shown (see [17]) that I − D_k^∗D_k − C_k^∗M_kC_k is strictly positive (hence invertible) if T is strictly contractive, and that M_{k+1} is strictly positive definite (hence R_{k+1} exists and is invertible) if T is controllable. B_{2,k} and D_{21,k} are determined from (5) in turn as

    D_{21,k} = (I − D_k^∗D_k − C_k^∗M_kC_k)^{1/2}
    B_{2,k}  = −(I − D_k^∗D_k − C_k^∗M_kC_k)^{-1/2}(D_k^∗B_k + C_k^∗M_kA_k)

Step 2. Find a complementary matrix Σ_{2,k} such that Σ_k = [ Σ_{1,k}  Σ_{2,k} ] is a square unitary matrix. This is always possible and reduces to a standard exercise in linear algebra. It can be shown that the system corresponding to Σ_k is indeed an embedding of T.  □
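One step of this recursion can be sketched numerically as follows (Python/NumPy; the matrices A, B, C, D and the Gramian M are small random stand-ins, scaled so that I − D^∗D − C^∗MC stays positive definite — they are not data from the paper). The sketch computes D_21, B_2 and M_{k+1} and verifies the isometry equations of Step 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 2                          # state dimension, number of inputs/outputs

# Small random realization {A, B, C, D} and current Gramian M = R*R (illustrative).
A = 0.3 * rng.standard_normal((d, d))
B = 0.3 * rng.standard_normal((n, d))
C = 0.3 * rng.standard_normal((d, n))
D = 0.3 * rng.standard_normal((n, n))
X = 0.3 * rng.standard_normal((d, d))
M = X.T @ X                          # symmetric positive semidefinite

def sym_sqrt(S):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

S = np.eye(n) - D.T @ D - C.T @ M @ C       # positive definite by the scaling above
D21 = sym_sqrt(S)
B2 = -np.linalg.inv(D21) @ (D.T @ B + C.T @ M @ A)
M_next = A.T @ M @ A + B.T @ B + B2.T @ B2  # equals the Riccati update (5)

# Isometry equations of Step 1 (the first one defines M_next):
assert np.allclose(A.T @ M @ C + B.T @ D + B2.T @ D21, np.zeros((d, n)))
assert np.allclose(C.T @ M @ C + D.T @ D + D21.T @ D21, np.eye(n))
assert np.all(np.linalg.eigvalsh(M_next) > -1e-12)   # M_next positive semidefinite
```

The update for M_next is written in its summed form here; eliminating B2 reproduces equation (5) term by term.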
The embedding algorithm can be implemented along the lines of the proof of the embedding theorem. However, as is well known, the Riccati recursion on M_k can be replaced by more efficient algorithms that recursively compute the square root of M_k, i.e., R_k, instead of M_k itself. These are the so-called square-root algorithms. The existence of such algorithms has been known for a long time; see e.g. Morf [24] for a list of
In:  {T_k}   (a controllable realization of T, ‖T‖ < 1)
Out: {Σ_k}   (a unitary realization of the embedding Σ)

R_1 = [·]
for k = 1, …, n
    T_{e,k} := [ R_k A_k   R_k C_k ]
               [ B_k       D_k     ]
               [ 0         I       ]
    T̂_{e,k} := Θ_k T_{e,k} = [ R_{k+1}   0        ]    (Θ_k J-unitary, chosen such that
                              [ 0         0        ]     the indicated blocks of T̂_{e,k}
                              [ B_{2,k}   D_{21,k} ]     are zero)
    Σ_{1,k} := [ R_k         ] [ A_k      C_k      ] [ R_{k+1}^{-1}    ]
               [      I      ] [ B_k      D_k      ] [               I ]
               [           I ] [ B_{2,k}  D_{21,k} ]
    Σ_k := [ Σ_{1,k}   Σ_{1,k}^⊥ ]
end

Algorithm 2. The embedding algorithm.
pre-1975 references. The square-root algorithm is given in Algorithm 2. The algorithm acts on data known at the k-th step: the state matrices A_k, B_k, C_k, D_k, and the state transformation R_k obtained at the previous step. This data is collected in a matrix T_{e,k}. The key to the algorithm is the construction of a J-unitary matrix Θ: Θ^∗JΘ = J, where

    Θ = [ Θ_{11}  Θ_{12}  Θ_{13} ]        J = [ I         ]
        [ Θ_{21}  Θ_{22}  Θ_{23} ]            [     I     ]
        [ Θ_{31}  Θ_{32}  Θ_{33} ]            [        −I ]

such that certain entries of T̂_{e,k} = Θ T_{e,k} are zero. We omit the necessary theory on this. It turns out that, because Θ_k is J-unitary, we have T_{e,k}^∗ J T_{e,k} = T̂_{e,k}^∗ J T̂_{e,k}; writing these equations out and comparing with (5), it is seen that the remaining non-zero entries of T̂_{e,k} are precisely the unknowns R_{k+1}, B_{2,k} and D_{21,k}. It is also a standard technique to factor Θ even further down into elementary (J-)unitary operations that each act on only two scalar entries of T_e, and zero one of them by applying an elementary J-unitary rotation of the form

    θ = 1/c [ 1    s ] ,        c^∗c + s^∗s = 1.
            [ s^∗  1 ]
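A minimal numerical sketch of such an elementary J-unitary (hyperbolic) rotation (Python/NumPy, real case; the sample values a, b are illustrative) checks the J-unitarity relation θ^T J θ = J and shows how the rotation zeros the second entry of a vector [a; b] with |b| < |a|:

```python
import numpy as np

def jrot(a, b):
    """Elementary J-unitary (hyperbolic) rotation zeroing b against a (|b| < |a|)."""
    s = -b / a
    c = np.sqrt(1.0 - s * s)          # real case: c*c + s*s = 1
    return (1.0 / c) * np.array([[1.0, s], [s, 1.0]])

J = np.diag([1.0, -1.0])
a, b = 2.0, 1.0
theta = jrot(a, b)

assert np.allclose(theta.T @ J @ theta, J)     # J-unitarity
assert np.allclose((theta @ [a, b])[1], 0.0)   # second entry zeroed
```

Note that, unlike an ordinary Givens rotation, the hyperbolic rotation requires |b| < |a|; this is guaranteed in the algorithm by the strict contractivity of T.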
With B_2 and D_{21} known, it is conjectured that it is not really necessary to apply the state transformation by R and to determine the orthogonal complement of Σ_1, if in the end only a cascade factorization of T is required, much as in [25].
5. CASCADE FACTORIZATION OF LOSSLESS MULTI-PORTS
In the previous section, it was discussed how a strictly contractive transfer operator T can be embedded into a lossless scattering operator Σ. We will now derive minimal structural factorizations, and corresponding lossless cascade networks, for arbitrary lossless multi-ports Σ with square unitary realizations {Σ_k}. The
network synthesis is a two-stage algorithm:
1. Using unitary state transformations, bring Σ into a form that allows a minimal factorization (i.e., a
minimal number of factors). We choose to make the A-matrix of Σ upper triangular. This leads to a
QR-iteration on the {Ak } and is the equivalent of the Schur decomposition (eigenvalue computations)
of A that would be required for time-invariant systems.
2. Using Givens rotations extended by I to the correct size, factor Σ into a product of such elementary
sections. From this factorization, the lossless cascade network follows directly.
While the factorization strategy is more or less clear-cut, given a state space matrix that allows a minimal
factorization, the optimal (or desired) cascade structure is not. We will present a solution based on Σ itself.
However, many other solutions exist, for example based on a factorization of a J-unitary transfer operator
related to Σ, yielding networks with equal structure but with different signal flow directions; this type of
network is favored in the time-invariant setting for selective filter synthesis and was first derived by Deprettere and Dewilde [26] (see also [27]). To avoid eigenvalue computations, cascade factorizations based on a
state transformation to Hessenberg form are also possible [28, 29]. In the time-varying setting, eigenvalue
computations are in a natural way replaced by recursions consisting of QR factorizations, so this motivation
seems no longer to be an issue.
5.1. Time-varying Schur decomposition
Let A_k be the A-matrix of Σ at time k. The first step in the factorization algorithm is to find square unitary state transformations Q_k such that

    Q_k^∗ A_k Q_{k+1} = R_k                                          (6)

has R_k upper triangular. If A_k is not square, say of size d_k × d_{k+1}, then R_k will be of the same size and also be rectangular. In that case, 'upper triangular' is understood as usual in QR-factorization, i.e., the lower-left d × d corner (d = min(d_k, d_{k+1})) of R_k consists of zeros (figure 5). In the time-invariant case, expression (6) would read Q^∗AQ = R, and the solution is then precisely the Schur decomposition of A. In that context, the main diagonal of R consists of the eigenvalues of A, which are the (inverses of the) poles of the system. In the present context, relation (6) is effectively the (unshifted) QR-iteration algorithm that is sometimes used to compute eigenvalues if all A_k are the same [20]:

    Q_1^∗ A_1 =: R_1 Q_2^∗
    Q_2^∗ A_2 =: R_2 Q_3^∗
    Q_3^∗ A_3 =: R_3 Q_4^∗
    ···

Each step in the computation amounts to a multiplication by the previously computed Q_k, followed by a QR-factorization of the result, yielding Q_{k+1} and R_k. Since we are in the context of finite upper triangular matrices whose state realization starts with 0 states at instant k = 1, we can take as initial transformation Q_1 = [·].
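The recursion (6) can be sketched as follows for square A_k (Python/SciPy; the random A_k are illustrative, and scipy.linalg.rq — which factors a matrix as RQ with R upper triangular — is one way to realize each step; in the square illustration we start from Q_1 = I rather than an empty matrix):

```python
import numpy as np
from scipy.linalg import rq

rng = np.random.default_rng(1)
d = 4
A_seq = [rng.standard_normal((d, d)) for _ in range(3)]   # A_1, A_2, A_3

Q = np.eye(d)                    # Q_1 = I in this square illustration
for A in A_seq:
    # Factor Q_k* A_k = R_k Q_{k+1}*: an RQ factorization of Q_k* A_k.
    R, Qh = rq(Q.conj().T @ A)   # Q_k* A_k = R @ Qh, with R upper triangular
    Q_next = Qh.conj().T
    # Check relation (6): Q_k* A_k Q_{k+1} = R_k is upper triangular.
    assert np.allclose(np.tril(Q.conj().T @ A @ Q_next, -1), 0)
    assert np.allclose(Q_next.conj().T @ Q_next, np.eye(d))
    Q = Q_next
```

When all A_k are equal, the successive similarity transforms Q_k^∗ A Q_k reproduce the classical unshifted QR iteration.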
5.2. Elementary Givens Rotations
We say that Σ̂ is an elementary orthogonal rotation if Σ̂ is a 2 × 2 unitary matrix,

    Σ̂ = [ c^∗    s ]                                                 (7)
         [ −s^∗   c ]

with c^∗c + s^∗s = 1. An important property of elementary rotations is that they can be used to zero a selected entry of a given operator: for given a and b, there exists an elementary orthogonal rotation Σ̂ such that

    Σ̂^∗ [ a ] = [ a′ ]
        [ b ]   [ 0  ]

i.e., such that s^∗a + c^∗b = 0 and a′ = (a^∗a + b^∗b)^{1/2}. In this case, Σ̂ is called a Givens rotation, and we write Σ̂ = givens(a; b) in algorithms.

Figure 5. Schur forms of Σ. (a) Constant state dimension, (b) shrinking state dimension, (c) growing state dimension.

Givens rotations will be used in the next section to factor a given state
realization into pairs of elementary rotations, or elementary sections. The basic operation, the computation of one such pair, is merely the application of two elementary Givens rotations: let T be a 3 × 3 matrix

    T = [ a    c_1     c_2    ]
        [ b_1  d_{11}  d_{12} ]
        [ b_2  d_{21}  d_{22} ]

such that it satisfies the orthogonality conditions [ a^∗  b_1^∗  b_2^∗ ] T = [ 1  0  0 ]; then there exist elementary rotations Σ_1, Σ_2 such that Σ_2^∗ Σ_1^∗ T = T′, with

    Σ_1 = [ c_1^∗   s_1      ]        Σ_2 = [ c_2^∗       s_2 ]
          [ −s_1^∗  c_1      ]              [          1      ]
          [               1  ]              [ −s_2^∗      c_2 ]

    T′ = [ 1   0        0       ]
         [ 0   d′_{11}  d′_{12} ]
         [ 0   d′_{21}  d′_{22} ]
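A sketch of givens(a; b) and of this generic step (Python/NumPy, real case; the 3 × 3 matrix T is built with an orthonormal first column purely for illustration):

```python
import numpy as np

def givens(a, b):
    """Real Givens rotation S (2x2, as in (7)) with S.T @ [a, b] = [r, 0], r >= 0."""
    r = np.hypot(a, b)
    c, s = a / r, -b / r
    return np.array([[c, s], [-s, c]])

def embed(rot, rows, m):
    """Extend a 2x2 rotation to an m x m rotation acting on the given row/column pair."""
    G = np.eye(m)
    G[np.ix_(rows, rows)] = rot
    return G

rng = np.random.default_rng(2)
T = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # unitary: first column has unit norm

S1 = embed(givens(T[0, 0], T[1, 0]), [0, 1], 3)    # zero b1 against a
Tp = S1.T @ T
S2 = embed(givens(Tp[0, 0], Tp[2, 0]), [0, 2], 3)  # zero b2 against the updated a
Tp = S2.T @ Tp

# T' now has first row and column equal to [1, 0, 0] (up to rounding).
assert np.allclose(Tp[:, 0], [1, 0, 0])
assert np.allclose(Tp[0, :], [1, 0, 0])
```

The zeros in the first row of T′ come for free: once the first column has collapsed to a unit vector, unitarity forces the first row to collapse as well.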
5.3. Factorization
Let there be given a lossless state realization {Σ_k} of a lossless two-port Σ. For each time instant k, we will construct a cascade factorization of Σ_k by repeated use of the above generic factorization step. Assume that a preprocessing state transformation based on the Schur decomposition has been carried out, i.e., that each Σ_k has its A_k upper triangular. For the sake of exposition, we specialize to the case where A_k is a square d × d matrix and Σ has a constant number of inputs and outputs, two of each, but the method is easily generalized. Thus
    Σ_k = [ A_k  C_k ] = [ a    ·    ·    ·    c_1     c_2    ]
          [ B_k  D_k ]   [      •    ·    ·    ·       ·      ]
                         [           •    ·    ·       ·      ]        (8)
                         [                •    ·       ·      ]
                         [ b_1  ·_3  ·_5  ·    d_{11}  d_{12} ]
                         [ b_2  ·_4  ·_6  ·    d_{21}  d_{22} ]

(blank entries are zero; the labels ·_3, ·_4, … indicate the order in which the entries of B_k will be cancelled). For i = 1, …, d, j = 1, 2, let Σ̂_{i,j} be an elementary (Givens) rotation matrix, and denote by Σ_{i,j} the extension of Σ̂_{i,j} to an elementary rotation of the same size as Σ_k, with Σ_{i,j} = I except for the four entries (i,i), (i,d+j), (d+j,i), (d+j,d+j), which together form the given Σ̂_{i,j}. Then Σ_k admits a (minimal) factorization

    Σ_k = Σ_{1,1} Σ_{1,2} · Σ_{2,1} Σ_{2,2} ··· Σ_{d,1} Σ_{d,2} · Σ′                  (9)
In:  {Σ_k}      (in Schur form; A_k: d_k × d_{k+1}, n_1 inputs, n_0 outputs)
Out: {Σ_{i,j}}  (elementary rotations: factors of Σ_k)

– if d_k > d_{k+1} ('shrink'), move the first d_k − d_{k+1} rows of [A_k C_k] to [B_k D_k];
– if d_k < d_{k+1} ('grow'), move the first d_{k+1} − d_k rows of [B_k D_k] to [A_k C_k].
for i = 1, …, d_{k+1}
    for j = 1, …, n_1
        Σ̂_{i,j} := givens(A_k(i,i); B_k(j,i))
        Σ_k := Σ_{i,j}^∗ Σ_k
    end
end
Σ′ := D_{Σ,k}          (also factor the 'residue')
for i = 1, …, n_0
    for j = 1, …, n_1
        Σ̂′_{i,j} := givens(Σ′(i,i); Σ′(j,i))
        Σ′ := Σ′_{i,j}^∗ Σ′
    end
end

Algorithm 3. The factorization algorithm.
into extended elementary rotations, where the 'residue' Σ′ is a state realization matrix of the same form as Σ_k, but with A = I, B = 0, C = 0, and D unitary. The factorization is based on the cancellation, in turn, of the entries of B_k of Σ_k in equation (8). To start, apply the generic factorization of section 5.2 to the equally-named entries a, b_1, b_2 etc. in equation (8), yielding Givens rotations Σ̂_{1,1} and Σ̂_{1,2}, which are extended by I to Σ_{1,1} and Σ_{1,2}. The resulting state space matrix Σ_{1,2}^∗ Σ_{1,1}^∗ Σ_k has (1,1)-entry a′ = 1, and of necessity zeros on the remainder of the first column and row. The factorization can now be continued in the same way, in the order indicated by the labeling of the entries of B_k in equation (8), by focusing on the second through last columns and rows of this intermediate operator. The result is the factorization of Σ_k in (9). Algorithm 3 summarizes the procedure for the general case of non-square A_k-matrices. In the case that the state dimension shrinks, i.e., d_k > d_{k+1}, the first d_k − d_{k+1} states are treated as inputs rather than states, but the actual factorization algorithm remains the same. If the state dimension grows (d_k < d_{k+1}), then the states that are added can be treated as extra outputs in the factorization algorithm.
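The cancellation loop can be sketched compactly (Python/NumPy, real case, constant state dimension d_k = d; the random test matrix and the QR-based pre-triangularization, standing in for the Schur preprocessing step, are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 2                                    # states; n1 = n0 = 2

# A random orthogonal Sigma_k; a preliminary state transformation makes
# its A-block upper triangular (the Schur preprocessing step).
S = np.linalg.qr(rng.standard_normal((d + n, d + n)))[0]
Q, _ = np.linalg.qr(S[:d, :d])
S = np.vstack([Q.T @ S[:d, :], S[d:, :]])      # still orthogonal, A-block triangular

def extended_givens(a, b, i, j, m):
    """Identity except on rows/columns (i, j), where it zeros b against a."""
    r = np.hypot(a, b)
    if r < 1e-15:
        return np.eye(m)
    c, s = a / r, -b / r
    G = np.eye(m)
    G[i, i], G[i, j], G[j, i], G[j, j] = c, s, -s, c
    return G

factors = []
W = S.copy()
for i in range(d):                 # cancel the entries of the B-block, column by column
    for j in range(n):
        G = extended_givens(W[i, i], W[d + j, i], i, d + j, d + n)
        W = G.T @ W
        factors.append(G)

# W is now the 'residue': A = I, B = 0, C = 0, D unitary ...
assert np.allclose(W[:d, :d], np.eye(d))
assert np.allclose(W[:d, d:], 0) and np.allclose(W[d:, :d], 0)

# ... and the ordered product of the factors times the residue rebuilds Sigma_k, as in (9).
P = np.eye(d + n)
for G in factors:
    P = P @ G
assert np.allclose(P @ W, S)
```

The residue's unitary D-block could be factored further into Givens rotations, as in the last loop of Algorithm 3.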
With the above factorization, it is seen that the actual operations that are carried out are pairs of rotations. The network corresponding to this factorization scheme is as depicted in figure 6, where each circle indicates an elementary section as in equation (7). This picture is obtained by considering the sequence of operations that are applied to a vector [x_{1,k} x_{2,k} ··· x_{d_k,k} ; u_k z_k] when it is multiplied by Σ_k in factored form. Each state variable interacts with each input quantity u_k, z_k, after which it has become a 'next state' variable. The residue Σ′ appears as the single rotation at the right. In the picture, we put z_k = 0 to obtain T as the transfer u_k → y_k. The secondary output of Σ is discarded.
Figure 6. Lossless cascade realizations of a contractive system T, stage k. (a) Constant state dimension, (b) shrinking state dimension, (c) growing state dimension. Outputs marked by '∗' are ignored.
6. CONCLUDING REMARKS
In the preceding sections, we have presented algorithms to compute, for a given upper triangular matrix, a
computational model of lowest possible complexity. We have also derived synthesis algorithms to realize
this model as a lossless cascade network in which the basic processors are elementary (Givens) rotations.
This provides a numerically stable implementation in which a minimal number of parameters (the rotation
angles) are used. The number of operations needed to implement a matrix-vector multiplication is 2d_k elementary rotations for a stage with d_k states, or about 8d_k multiplications per stage. The total number of multiplications that are required is about 8dn, where d ≪ n is some averaged number of states, as compared to n^2/2 for a direct matrix-vector multiplication. Of course, it is possible to select other structures than the
lossless cascade structure to realize a given computational model. Finally, if the number of states at any point
is larger than desired, then it is possible to find optimal approximating matrices: for a given error tolerance,
measured in the Hankel norm, it is known how many states the approximating computational network will
require at least.
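For concreteness, a back-of-the-envelope comparison of these operation counts (the values n = 1000 and d = 10 are purely illustrative):

```python
# Illustrative operation counts: cascade realization vs. direct multiplication.
n, d = 1000, 10          # matrix size and averaged state dimension (assumed values)

cascade = 8 * d * n      # about 8d multiplications per stage, n stages
direct = n * n // 2      # about n^2/2 multiplications for a triangular matrix

print(cascade, direct)   # the cascade wins whenever d << n
```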
The realization and approximation theory presented in this paper are consequences of time-varying systems
theory. This theory can be applied in many ways to reduce matrix computations if the given matrix exhibits
what was called a state structure, and can be used to determine new types of matrix approximations that enforce such a state structure onto matrices. The computationally expensive part of the presented scheme is the retrieval of the structure of the given matrix, and it is even more expensive to compute a reduced-order model. This is an instance of a more general property of computations with structured matrices: algorithms which exploit the structure can be efficient only after the structure has been captured in some way, which either requires advance knowledge of this structure (for example, the fact that a matrix is Toeplitz or has zeros at specific entries), or will be computationally expensive, because all entries of the matrix must be operated upon at least once. In the case of matrices with state structure, the derivation of an (approximating)
model will make sense only if one is interested in a sparse representation of the given matrix, that is, if the
resulting model is heavily used in other computational procedures. The focus of the paper on matrix-vector
multiplications must in this respect be regarded only as an example of how one can exploit knowledge of
such sparse models.
Additional results obtained during the review process include the inversion of a general (not necessarily
upper triangular) matrix using time-varying state space techniques [30]. More details on the discussed topics
are available in [31].
7. ACKNOWLEDGEMENT
This research was supported in part by the commission of the EC under the ESPRIT BRA program 6632
(NANA2).
8. REFERENCES
[1] I. Schur, "Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, I," J. Reine Angew.
Math., 147:205–232, 1917. Eng. transl. in Operator Theory: Adv. Appl., vol. 18, pp. 31–59, Birkhäuser
Verlag, 1986.
[2] N. Levinson, “The Wiener RMS Error Criterion in Filter Design and Prediction,” J. Math. Phys.,
25:261–278, 1947.
[3] I. Gohberg and A. Semencul, “On the Inversion of Finite Toeplitz Matrices and their Continuous
Analogs,” Mat. Issled., 2:201–233, 1972.
[4] J. Chun, T. Kailath, and H. Lev-Ari, “Fast Parallel Algorithms for QR and Triangular Factorizations,”
SIAM J. Sci. Stat. Comp., 8(6):899–913, 1987.
[5] T. Kailath, S.Y. Kung, and M. Morf, “Displacement Ranks of Matrices and Linear Equations,” J. Math.
Anal. Appl., 68(2):395–407, 1979.
[6] H. Lev-Ari and T. Kailath, “Lattice Filter Parametrization and Modeling of Non-Stationary Processes,”
IEEE Trans. Informat. Th., 30(1):2–16, January 1984.
[7] H. Lev-Ari and T. Kailath, “Triangular Factorizations of Structured Hermitian Matrices,” In Operator
Theory: Advances and Applications, volume 18, pp. 301–324. Birkhäuser Verlag, 1986.
[8] H. Lev-Ari and T. Kailath, “Lossless Arrays and Fast Algorithms for Structured Matrices,” In Ed. F.
Deprettere and A.J. van der Veen, editors, Algorithms and Parallel VLSI Architectures, volume A, pp.
97–112. Elsevier, 1991.
[9] J. Chun, “Fast Array Algorithms for Structured Matrices,” PhD thesis, Stanford Univ., Stanford, CA,
1989.
[10] B.L. Ho and R.E. Kalman, “Effective Construction of Linear, State-Variable Models from Input/Output
Functions,” Regelungstechnik, 14:545–548, 1966.
[11] T. Kailath, “A Theorem of I. Schur and its impact on Modern Signal Processing,” In Operator Theory:
Advances and Applications, volume 18, pp. 9–30. Birkhäuser Verlag, Basel, 1986.
[12] H. Ahmed, J. Delosme, and M. Morf, “Highly Concurrent Computing Structures for Matrix Arithmetic
and Signal Processing,” Computer, pp. 65–82, January 1982.
[13] P. Dewilde and E. Deprettere, “The Generalized Schur Algorithm: Approximation and Hierarchy,” In
Operator Theory: Advances and Applications, volume 29, pp. 97–116. Birkhäuser Verlag, 1988.
[14] V.M. Adamjan, D.Z. Arov, and M.G. Krein, “Analytic Properties of Schmidt Pairs for a Hankel Operator and the Generalized Schur-Takagi Problem,” Math. USSR Sbornik, 15(1):31–73, 1971 (transl. of
Iz. Akad. Nauk Armjan. SSR Ser. Mat. 6 (1971)).
[15] P.M. Dewilde and A.J. van der Veen, “On the Hankel-Norm Approximation of Upper-Triangular Operators and Matrices,” Integral Equations and Operator Theory, 17(1):1–45, 1993.
[16] A.J. van der Veen and P.M. Dewilde, “Time-Varying System Theory for Computational Networks,”
In P. Quinton and Y. Robert, editors, Algorithms and Parallel VLSI Architectures, II, pp. 103–127. Elsevier, 1991.
[17] A.J. van der Veen and P.M. Dewilde, “Orthogonal Embedding Theory for Contractive Time-Varying
Systems,” In Proc. IEEE ISCAS, pp. 693–696, 1992.
[18] P.M. Dewilde, “A Course on the Algebraic Schur and Nevanlinna-Pick Interpolation Problems,” In
Ed. F. Deprettere and A.J. van der Veen, editors, Algorithms and Parallel VLSI Architectures, volume A, pp. 13–69. Elsevier, 1991.
[19] A.J. van der Veen and P.M. Dewilde, “AAK Model Reduction for Time-Varying Systems,” In J. Vandewalle e.a., editor, Proc. Eusipco’92 (Brussels), pp. 901–904. Elsevier Science Publishers, August
1992.
[20] G. Golub and C.F. Van Loan, “Matrix Computations,” The Johns Hopkins University Press, 1984.
[21] S.Y. Kung, “A New Identification and Model Reduction Algorithm via Singular Value Decomposition,” In Twelfth Asilomar Conf. on Circuits, Systems and Comp., pp. 705–714, Asilomar, CA, November 1978.
[22] S. Darlington, “Synthesis of Reactance 4-Poles which Produce Prescribed Insertion Loss Characteristics,” J. Math. Phys., 18:257–355, 1939.
[23] B.D.O. Anderson and S. Vongpanitlerd, “Network Analysis and Synthesis,” Prentice Hall, 1973.
[24] M. Morf and T. Kailath, “Square-Root Algorithms for Least-Squares Estimation,” IEEE Trans. Automat. Control, 20(4):487–497, 1975.
[25] H. Lev-Ari and T. Kailath, “State-Space Approach to Factorization of Lossless Transfer Functions and
Structured Matrices,” Lin. Alg. Appl., 162:273–295, February 1992.
[26] E. Deprettere and P. Dewilde, “Orthogonal Cascade Realization of Real Multiport Digital Filters,”
Circuit Theory and Appl., 8:245–272, 1980.
[27] P.M. Van Dooren and P.M. Dewilde, “Minimal Cascade Factorization of Real and Complex Rational
Transfer Matrices,” IEEE Trans. Circuits Syst., 28(5):390–400, 1981.
[28] S.K. Rao and T. Kailath, “Orthogonal Digital Filters for VLSI Implementation,” IEEE Trans. Circuits
Syst., 31(11):933–945, November 1984.
[29] U.B. Desai, “A State-Space Approach to Orthogonal Digital Filters,” IEEE Trans. Circuits Syst.,
38(2):160–169, February 1991.
[30] A.J. van der Veen and P.M. Dewilde, “Connections of Time-Varying Systems and Computational Linear Algebra,” In H. Dedieu, editor, Circuit Theory and Design: Proc. 11-th ECCTD, pp. 167–172,
Davos, Switzerland, August 1993. Elsevier.
[31] A.J. van der Veen, “Time-Varying System Theory and Computational Modeling: Realization, Approximation, and Factorization,” PhD thesis, Delft University of Technology, Delft, The Netherlands, June
1993.