Polynomial accessibility condition for the multi-input multi-output nonlinear control system

The paper presents a computation-oriented necessary and sufficient accessibility condition for a set of nonlinear higher-order input-output differential equations. The condition is formulated in terms of the greatest common left divisor of two polynomial matrices associated with the system of input-output equations. The basic difference from the linear case is that the elements of the polynomial matrices belong to a non-commutative polynomial ring. The condition provides a basis for finding an accessible representation of the set of input-output equations, which is a suitable starting point for the construction of an observable and accessible state-space realization. Moreover, the condition allows us to check the transfer equivalence of two nonlinear systems.


INTRODUCTION
Controllability is one of the fundamental concepts of mathematical control theory. In the nonlinear case the famous Kalman rank condition branches into many different controllability notions; among them, the accessibility property stands out as a conceptually tractable and intuitively clear concept [23]. Accessibility, a special notion of controllability, refers to the case where every non-constant function of the state is eventually influenced by the control variable of the system and hence cannot satisfy any autonomous differential equation [6].
Our study is motivated by three sources. The first of them is the fundamental problem of accessibility itself: there exist input-output (i/o) equations of interest that lack state-space realizations, and therefore the techniques that rely on a state-space characterization of accessibility are unsuitable for such systems. Second, the accessibility property plays a crucial role in the minimal realization problem; and third, accessibility is related to the concept of transfer equivalence of two systems.
The goal of this paper is to suggest an accessibility definition as well as a computation-oriented necessary and sufficient accessibility condition for nonlinear systems described by a set of higher-order i/o differential equations, not necessarily transformable into the state-space form. Our accessibility definition is based on the concept of an autonomous variable, introduced in [21]; since this definition is not directly linked to the state, it is flexible enough to be applied to various system classes. Formerly, this concept was called irreducibility of i/o equations, but it is more natural to call it accessibility since, in the case of realizable i/o equations, our definition agrees with that given for the state equations: accessible i/o equations have an accessible realization. This is demonstrated by Example 2 below.
The results of this paper are based on the conference paper [13]. Compared with [13], in the present paper the irreducible system representation (accessible subsystem) is related to the recently introduced concept of the transfer matrix of a nonlinear system [8,9], whereas in [13] it was based on the notion of an irreducible differential form associated with the control system [6]. Moreover, the notion of transfer equivalence of systems is now defined via the equality of transfer matrices, exactly as in the linear case, and the reduction problem is addressed directly. Furthermore, the proof of the main theorem is improved (only a sketch of the proof was given in [13]), and the role of the non-uniqueness of the greatest common left divisor in the solution is explained. Finally, a comparison with alternative algebraic accessibility criteria is added. Note that the single-input single-output (SISO) case was studied in [27]. For discrete-time nonlinear systems the accessibility property was examined in [16], though that paper focused on finding the minimal realization of the set of i/o equations, and accessibility was studied only from the viewpoint of irreducibility of the i/o description of the system. The irreducibility problem for the SISO case has also been treated in [12] as a generalization of [27] to systems defined on homogeneous time-scales.
The paper is organized as follows. Section 2 describes the differential field and the polynomial matrix representation associated with nonlinear systems. Using the polynomial matrix description, a necessary and sufficient accessibility condition is given in Section 3. Section 4 is devoted to system reduction and the concept of transfer equivalence. In Section 5 the obtained results are compared with those of [4] and illustrated by two examples, one realizable in the state-space form and the other not. Section 6 draws the conclusions and drafts some future goals of study.

POLYNOMIAL MATRIX DESCRIPTION
Consider a multi-input multi-output (MIMO) nonlinear system described by the set of higher-order i/o differential equations relating the inputs u_k, k = 1, ..., m, the outputs y_i, i = 1, ..., p, and a finite number of their time derivatives:

    y_i^(n_i) = φ_i(y_j, ẏ_j, ..., y_j^(n_ij), u_k, u̇_k, ..., u_k^(r_ik)),  i = 1, ..., p.   (1)

In (1) the φ_i are supposed to be real analytic functions. The notations u := [u_1, ..., u_m]^T, y := [y_1, ..., y_p]^T, n := n_1 + ... + n_p, and r := max{r_ik} are used below for system (1). Moreover, we assume that the indices in (1) satisfy the relations (2). The conditions (2) mean that Eqs (1) are assumed to be in a form which is an extension of the echelon canonical matrix fraction description, introduced in [22] for linear systems. This form is preferred since it allows the explicit definition of the time-derivative operator in the differential field associated with the control system. This aspect is important in the Mathematica implementation (see below). However, the main results of the paper may also be proved (using a somewhat different mathematical setup) for the implicit system description. Moreover, if the well-defined i/o equations are not in the form (1), one may apply the i/o equivalence transformations from [11,24,25] to bring the system equations into the form (1).

Below we give a brief exposition of the linear algebraic approach, following [6]. Let R denote the ring of analytic functions in a finite number of (independent) variables from the set

    C = { y_i^(γ) : i = 1, ..., p, 0 ≤ γ < n_i } ∪ { u_k^(β) : k = 1, ..., m, β ≥ 0 }.

Define the time derivative operator d/dt : R → R, associated with system (1). For that purpose we first define d/dt for the elements of C and then extend it to an arbitrary a ∈ R by the chain rule. Sometimes the notation ȧ := d/dt(a) is also used. Note that whenever y_i^(n_i) occurs in an expression, it has to be replaced by φ_i(·), determined by the i/o equations (1). The pair (R, d/dt) is a differential ring (see [10]). The ring R is an integral domain, i.e. it does not contain any zero divisors; that is, for all a, b ∈ R, ab = 0 implies a = 0 or b = 0.
Let S := R \ {0}, and consider the set of left fractions K := S^(-1)R. Elements of K are meromorphic functions of the form b^(-1)a, where a ∈ R, b ∈ S. Since R is an integral domain, the set K proves to be its field of fractions. The derivative operator d/dt can now be extended to d/dt : K → K by the quotient rule. Thus the pair (K, d/dt) is a differential field.
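For concreteness (our own added remark, not from the original text), the extension of d/dt to K is the usual quotient rule, and the result is again a left fraction:

```latex
\frac{d}{dt}\bigl(b^{-1}a\bigr)
  = \frac{\dot a\,b - a\,\dot b}{b^{2}},
  \qquad a \in R,\ b \in S .
```

Since R contains no zero divisors, b ≠ 0 implies b² ≠ 0, so b² ∈ S and the right-hand side again lies in K.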
Over the field K one can define a vector space E := span_K{dϕ | ϕ ∈ K}, spanned by the ordinary differentials of the elements of K. For the definition of the operator d : K → E see [6]. The paper [7] demonstrates when and how the operator d, as used in this paper, differs from Kähler differentials. The derivative ω̇ of ω = Σ_i α_i dϕ_i ∈ E, where α_i, ϕ_i ∈ K, is defined by ω̇ = Σ_i (α̇_i dϕ_i + α_i dϕ̇_i). Note that the operators d and d/dt commute, i.e. for a ∈ K one has d(da/dt) = (d/dt)(da). We say that ω ∈ E is an exact one-form if ω = da for some a ∈ K. A one-form ω for which dω = 0 is said to be closed. Every exact one-form is closed, but the converse holds only locally (see [6]). A subspace is said to be closed or integrable if it has a basis which consists only of closed one-forms.
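A small illustration (ours, not from the original text) of the exact/closed distinction in E:

```latex
\omega_1 = \dot y\,\mathrm{d}y + y\,\mathrm{d}\dot y = \mathrm{d}(y\dot y)
\quad\text{is exact, hence closed},
\qquad
\mathrm{d}\omega_2 = \mathrm{d}\bigl(y\,\mathrm{d}\dot y\bigr)
  = \mathrm{d}y \wedge \mathrm{d}\dot y \neq 0 ,
```

so ω₂ = y dẏ is not closed and therefore not exact.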

Non-commutative ring of polynomials
The left differential polynomials are the elements of the form

    p = p_m s^m + p_{m-1} s^{m-1} + ... + p_1 s + p_0,   (5)

where s is a formal variable and p_i ∈ K for i = 0, ..., m. The polynomial p is non-zero iff at least one of the functions p_i is non-zero. If p_m ≢ 0, then the integer m is called the degree of p and denoted by deg(p). We set additionally deg(0) = −∞. The set of left differential polynomials is denoted by K[s; d/dt]. The addition of polynomials is defined in the standard way. However, for a ∈ K ⊂ K[s; d/dt] the multiplication is defined by the commutation rule

    s · a = a s + ȧ.

It is easy to see that s² · a = a s² + 2ȧ s + ä for a ∈ K, and in general, for n ≥ 0 we obtain

    s^n · a = Σ_{k=0}^{n} (n choose k) a^(k) s^(n−k).

To find the left divisor, one can use the left Euclidean division algorithm (see [5]). The main idea behind this algorithm is that for given polynomials p_1 and p_2 of the form (5), with deg p_1 > deg p_2, there exist a unique left quotient polynomial γ_1 and a unique left remainder polynomial p_3 such that

    p_1 = γ_1 · p_2 + p_3,  deg p_3 < deg p_2.
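The commutation rule and its iterated (Leibniz) form can be tried out numerically. The sketch below is our own illustration, not part of the paper: it represents a skew polynomial Σ_i p_i s^i as a Python list of sympy coefficients (functions of t) and multiplies via s^n · a = Σ_k C(n,k) a^(k) s^(n−k); the name `skew_mul` is ours.

```python
import sympy as sp
from math import comb

t = sp.symbols('t')

def skew_mul(p, q):
    """Multiply two skew polynomials from K[s; d/dt].

    A polynomial is a list [c0, c1, ..., cn] of sympy expressions in t,
    standing for c0 + c1*s + ... + cn*s**n (coefficients written to the
    left of the powers of s).  The commutation rule s*a = a*s + da/dt
    iterates to the Leibniz-type formula
        s**n * a = sum_k C(n, k) * a^(k) * s**(n-k).
    """
    res = [sp.Integer(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):           # term pi * s**i of p
        for j, qj in enumerate(q):       # term qj * s**j of q
            # pi * s**i * qj * s**j = pi * sum_k C(i,k) * qj^(k) * s**(i-k+j)
            for k in range(i + 1):
                res[i - k + j] += sp.expand(pi * comb(i, k) * sp.diff(qj, t, k))
    return [sp.simplify(c) for c in res]

# s**2 * a for a symbolic a(t): coefficients [a'', 2*a', a] of s**0, s**1, s**2
a = sp.Function('a')(t)
print(skew_mul([sp.Integer(0), sp.Integer(0), sp.Integer(1)], [a]))
```

For instance, with a = t² the product s² · a evaluates to t² s² + 4t s + 2, matching a s² + 2ȧ s + ä above.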

Polynomial matrices
We now consider a class of matrices whose elements are polynomials p ∈ K[s; d/dt]. We write K[s; d/dt]^{v×q} for the set of v × q matrices with entries in K[s; d/dt]. The purpose of this subsection is to show that, as in the linear case when the polynomials have real coefficients, a polynomial matrix with entries in K[s; d/dt] can be transformed by a sequence of elementary column operations into the lower left triangular form. This result allows us to obtain the accessibility criterion later.

Definition 2. The following three elementary column operations on the polynomial matrix P are defined:
1. interchange of columns i and j;
2. multiplication of column i by a nonzero scalar in K;
3. replacement of column i by itself plus any other column j multiplied by an arbitrary polynomial.
Any sequence of elementary column operations on P is equivalent to postmultiplication (right multiplication) of P by an appropriate unimodular matrix U_R, i.e. a polynomial matrix whose inverse is again a polynomial matrix.

Definition 4. Two polynomial matrices P and P̄ are called column equivalent iff one of them can be obtained from the other by a sequence of elementary column operations.

The matrix P̄ is thus column equivalent to P if and only if P̄ = P U_R, where U_R is a unimodular matrix.

Theorem 5. Any q × v (q ≤ v) polynomial matrix P is column equivalent to a lower left triangular matrix, i.e. one can always find a sequence of elementary column operations which reduces P to the lower left triangular form

    [ g_11  0     ...  0     0 ... 0 ]
    [ g_21  g_22  ...  0     0 ... 0 ]   (7)
    [  ...   ...       ...           ]
    [ g_q1  g_q2  ...  g_qq  0 ... 0 ]

with g_ij ∈ K[s; d/dt]. Furthermore, in the above form the polynomials g_k1, ..., g_k,k−1 are of lower degree than g_kk for all k = 1, ..., q if deg g_kk > 0, and are all zero if g_kk is a nonzero scalar in K.
Proof. If the first row of P is not identically zero, one may choose a polynomial of the least degree among its elements and, by a permutation of the columns, make it the new (1,1) entry p̄_11. We then apply the left Euclidean division algorithm to every other nonzero entry in the first row. That is, we divide every other nonzero element p̄_1i of the first row by p̄_11, obtaining the quotients q̄_1i and remainders r̄_1i according to the relationship p̄_1i = p̄_11 q̄_1i + r̄_1i, where either r̄_1i = 0 or deg r̄_1i < deg p̄_11. We then subtract from each nonzero ith column the first column multiplied by q̄_1i. If not all of the remainders r̄_1i are zero, we choose one of the least degree and make it the new (1,1) entry by another permutation of the columns. Repeating this process as many times as necessary reduces, step by step, the degree of the polynomial in the (1,1) entry. Since the degree of the (1,1) entry is finite, the repeated process ends after a finite number of steps, namely when all of the remaining elements of the first row are identically zero.
Next consider the second row of this matrix and, ignoring the first column for the moment, apply the above procedure to the elements beginning with the second column and second row.In this way we zero all the elements to the right of the (2, 2) entry.
If the (2,1) element is of equal or higher degree than the (2,2) element, the division algorithm can be employed to reduce the (2,1) element to the remainder term associated with the division of the (2,1) entry by the (2,2) entry, or to zero if both elements are scalars. Continuing in this manner, with the elements beginning with the third column and third row next, we eventually reduce P to the appropriate form.

The proof of Theorem 7 is analogous to the discrete-time case (see [16], Theorem 10) and is therefore omitted. The gcld is, in general, not unique, since any two gclds G¹_L and G²_L are, by definition, related as G¹_L = G²_L W_1 and G²_L = G¹_L W_2, where W_1 and W_2 are polynomial matrices; if G¹_L is nonsingular, then W_1 and W_2 are unimodular. The concept of a gcld now enables us to extend to the matrix case the notion of a pair of relatively left prime polynomials.

Definition 8. A pair {P, Q} of polynomial matrices which have the same number of rows is said to be relatively left prime iff their gclds are unimodular matrices.
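The first-row step of the column reduction in the proof of Theorem 5 is easy to prototype. The sketch below is our own illustration in the commutative analogue K = Q(x), since sympy has no skew polynomial division; in the skew case `sp.div` would be replaced by the left Euclidean division of the previous subsection. The name `clear_first_row` is ours.

```python
import sympy as sp

x = sp.symbols('x')

def clear_first_row(M):
    """Zero out all first-row entries except the (1,1) one by elementary
    column operations: pick a least-degree pivot, divide the other
    first-row entries by it, subtract quotient * pivot column, repeat."""
    M = sp.Matrix(M)
    while True:
        nz = [j for j in range(M.cols) if sp.expand(M[0, j]) != 0]
        if len(nz) <= 1:
            if nz and nz[0] != 0:          # move the lone survivor to column 0
                M.col_swap(0, nz[0])
            return M.applyfunc(sp.expand)
        piv = min(nz, key=lambda j: sp.degree(M[0, j], x))
        if piv != 0:
            M.col_swap(0, piv)             # least-degree entry -> (1,1) position
        for j in range(1, M.cols):
            if sp.expand(M[0, j]) == 0:
                continue
            q, _ = sp.div(sp.expand(M[0, j]), sp.expand(M[0, 0]), x)
            for i in range(M.rows):        # column j := column j - q * column 0
                M[i, j] = sp.expand(M[i, j] - q * M[i, 0])

M = clear_first_row([[x**2 + 1, x, 1], [x, 1, 0]])
print(M)   # first row reduced to (g, 0, 0)
```

Each outer pass strictly decreases the degree of the (1,1) entry (or zeroes the rest of the row), which is exactly the termination argument of the proof.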
Relative left primeness of a pair of polynomial matrices means that no non-unimodular matrix can be factored out from the left of both members of the pair.

Polynomials as operators
A left differential polynomial a ∈ K [s; d/dt] may be interpreted as an operator a(s) : E → E .Define for j = 1, . . ., p, k = 1, . . ., m, and dy j , du k ∈ E It is natural to extend (8) for a = ∑ k i=0 a i s i as a(s)(αdζ ) := ∑ k i=0 a i (s i • α)dζ with a i , α ∈ K and dζ ∈ {dy 1 , . . ., dy p , du 1 , . . ., du m }.Using (8), every one-form where a j,α , b k,β ∈ K and 0, may be expressed in terms of the left differential polynomials as where

Polynomial description of linearized equations
By applying the operator d to (1) we obtain, for i = 1, ..., p,

    dy_i^(n_i) = Σ_{j,α} (∂φ_i/∂y_j^(α)) dy_j^(α) + Σ_{k,β} (∂φ_i/∂u_k^(β)) du_k^(β).   (9)

Since dy_j^(α) = s^α dy_j and du_k^(β) = s^β du_k, (9) can be rewritten as

    P(s) dy = Q(s) du,   (10)

where P ∈ K[s; d/dt]^{p×p} and Q ∈ K[s; d/dt]^{p×m} are polynomial matrices, whose elements p_ij, q_ik ∈ K[s; d/dt] are

    p_ij(s) = δ_ij s^(n_i) − Σ_α (∂φ_i/∂y_j^(α)) s^α,   q_ik(s) = Σ_β (∂φ_i/∂u_k^(β)) s^β,

δ_ij being the Kronecker delta. Equation (10) describes the (globally) linearized system, associated with Eqs (1).
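As a sanity check (our own worked special case, with φ a generic analytic function), consider a SISO equation ÿ = φ(y, ẏ, u). Applying d and using dẏ = s dy, dÿ = s² dy gives

```latex
s^{2}\,\mathrm{d}y
  = \frac{\partial \varphi}{\partial y}\,\mathrm{d}y
  + \frac{\partial \varphi}{\partial \dot y}\,s\,\mathrm{d}y
  + \frac{\partial \varphi}{\partial u}\,\mathrm{d}u ,
\qquad\text{i.e.}\qquad
P(s) = s^{2} - \frac{\partial \varphi}{\partial \dot y}\,s
            - \frac{\partial \varphi}{\partial y},
\quad
Q(s) = \frac{\partial \varphi}{\partial u},
```

so that P(s) dy = Q(s) du with coefficients in K.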

ACCESSIBILITY OF THE i/o EQUATIONS

Definition 9 ([6]). A non-constant (possibly vector) function ϕ_r (with components) in K is said to be an autonomous variable for system (1) if there exist an integer µ ≥ 1 and a non-zero meromorphic (again possibly vector) function F such that

    F(ϕ_r, ϕ̇_r, ..., ϕ_r^(µ)) = 0.   (12)

Note that ϕ_r denotes a variable as well as a function of y, u, and their time derivatives. While ϕ_r is a function of y^(i) and u^(j), 0 ≤ i ≤ n − µ, 0 ≤ j ≤ n − µ, it is also governed by the autonomous differential equation (12). For any initial condition, the solution ϕ_r is uniquely determined by this autonomous differential equation and is consequently independent of the external input u. In this sense ϕ_r is an autonomous variable which represents the lack of controllability of the nonlinear system [21].
The notion of autonomous variable can be used to define the accessibility of the nonlinear system (1) as follows.

Definition 10. System (1) is said to be accessible if it does not admit any non-constant autonomous variable. Otherwise the system is called non-accessible.
Theorem 11. The nonlinear system (1) is accessible iff the polynomial matrices P and Q in (10) are relatively left prime.
Proof. Sufficiency. The proof is by contradiction. Suppose that P and Q are relatively left prime but, contrary to our claim, system (1) is not accessible. Then, according to Definition 10, there exist (at least one) function ϕ_r ∈ K, an integer µ ≥ 1, and F = (F_1, ..., F_ν)^T such that (12) holds. Since ϕ_r is a function of y^(i) and u^(j) with i, j ≤ n − µ, we may write

    dϕ_r = Σ_{j,α} (∂ϕ_r/∂y_j^(α)) dy_j^(α) + Σ_{k,β} (∂ϕ_r/∂u_k^(β)) du_k^(β)   (13)

and

    dF = Σ_{i=0}^{µ} (∂F/∂ϕ_r^(i)) dϕ_r^(i) = 0.   (14)

Since s^α dy_j = dy_j^(α), s^β du_k = du_k^(β), and s^i dϕ_r = dϕ_r^(i), (13) and (14) can be rewritten in terms of left differential polynomials:

    dϕ_r = P̂(s)dy + Q̂(s)du,   dF = Ĝ(s)dϕ_r = 0,

where Ĝ(s) = Σ_{i=0}^{µ} (∂F/∂ϕ_r^(i)) s^i. Then the equation dF = 0 can be rewritten as Ĝ(s)[P̂(s)dy + Q̂(s)du] = 0. The remaining part of the proof relies on the fact that the p + ν differential forms, defined by the p rows of P(s)dy − Q(s)du and the ν rows of dF, are dependent. In order to simplify the proof, we assume that F contains a single element, i.e. F = (F_1) and ν = 1. The proof of the general case is analogous. Let dϕ_i := Σ_j p_ij(s)dy_j − Σ_k q_ik(s)du_k, i = 1, ..., p, where p_ij and q_ik are the elements of the ith rows of the matrices P and Q, respectively. Then there exist α_i ∈ K, i = 1, ..., p, at least one of them non-zero, such that Ĝ(s)dϕ_r = Σ_{i=1}^{p} α_i dϕ_i. Without loss of generality one can assume that α_1 ≠ 0. Then we have

    P(s)dy − Q(s)du = G_L(s) · [P̄(s)dy − Q̄(s)du]

for suitable polynomial matrices G_L, P̄, Q̄. Since deg Ĝ = µ > 0, the gcld of the matrices P and Q, i.e. the matrix G_L, is not unimodular. Hence P and Q are not relatively left prime, a contradiction.
Necessity. The proof of the necessity part is also by contradiction. Assume that system (1) is accessible but the matrices P and Q are not relatively left prime. The latter means that P and Q have a gcld G_L of the form (7), which is not unimodular, such that Eq. (10) can be written as

    G_L(s)ω = 0,   (16)

where

    ω = [ω_1, ..., ω_p]^T := P̄(s)dy − Q̄(s)du   (17)

is a column vector of irreducible differential one-forms. We proceed to show that the components of the one-form ω are exact or can be made exact by multiplying ω from the left by a unimodular matrix U. The elements of ω may be classified as follows: they are either differentials of the original irreducible equations (integrable by definition) or elements of H_∞, defined by (4). According to [1], the subspace H_∞ is closed. This means that its basis vectors are exact or can be made exact by multiplying ω from the left by a certain unimodular matrix U. This just corresponds to replacing the gcld G_L by G_L U^(-1) =: Ḡ_L; note that the gcld is unique up to multiplication by a unimodular matrix from the right. From (16) we now obtain

    Ḡ_L(s) dϕ_r = 0,

where dϕ_r = [dϕ_r1, ..., dϕ_rp]^T := Uω. The vector of one-forms Ḡ_L(s)dϕ_r = P(s)dy − Q(s)du corresponds to the original linearized equations (10) and is thus integrable. This means that the coefficients of the polynomials in Ḡ_L either depend only on the components of ϕ_r or are real numbers. Since Ḡ_L is non-unimodular, there exist at least one component of F, for instance F_i, and at least one component of ϕ_r, for instance ϕ_rj, such that F_i(..., ϕ_rj^(µ), ...) = 0 with µ ≥ 1. According to Definition 9, the system admits the autonomous variable ϕ_rj, and hence (1) is not accessible, a contradiction.
REDUCTION AND TRANSFER EQUIVALENCE

Since K[s; d/dt] is an integral domain satisfying the (left) Ore condition, it can be embedded into its quotient field K(s; d/dt) of left fractions. One can then consider a class of matrices whose elements are left fractions f ∈ K(s; d/dt) (see [8]). Let K(s; d/dt)^{p×m} be the set of p × m matrices with entries in K(s; d/dt). Multiplying the globally linearized system equations (10) from the left by P^(-1) allows us to rewrite (10) as

    dy = P^(-1)(s)Q(s)du,

whenever the i/o equations (1) are independent, i.e. whenever the Dieudonné determinant of the matrix P is nonzero [14]. The inverse matrix P^(-1) ∈ K(s; d/dt)^{p×p} can be computed by the Gauss-Jordan elimination method, adapted for non-commutative polynomials in [20]. If the polynomial matrices P and Q are not relatively left prime, i.e. P = G_L · P̄ and Q = G_L · Q̄ (where G_L is the non-unimodular gcld of P and Q), we get dy = P̄^(-1)(s)Q̄(s)du. Therefore, system (1) can be characterized by a matrix from K(s; d/dt)^{p×m}, namely the transfer matrix F(s) := P^(-1)(s)Q(s).

Definition 14. Two systems of the form (1) are said to be transfer equivalent if their transfer matrices coincide.

From the above definition we can conclude that the irreducible set of equations is transfer equivalent to the original system description (1).
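The cancellation behind transfer equivalence can be spelled out in one line (recall that in the non-commutative setting (AB)^(-1) = B^(-1)A^(-1)):

```latex
P^{-1}(s)\,Q(s)
  = \bigl(G_L(s)\,\bar P(s)\bigr)^{-1} G_L(s)\,\bar Q(s)
  = \bar P^{-1}(s)\,G_L^{-1}(s)\,G_L(s)\,\bar Q(s)
  = \bar P^{-1}(s)\,\bar Q(s),
```

so the reduced pair {P̄, Q̄} yields the same transfer matrix as {P, Q}.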

DISCUSSION, EXAMPLES, AND MATHEMATICA IMPLEMENTATION
In this section we first compare the notion of accessibility introduced in this paper with the notion of controllability as defined in [4] for the set of i/o equations. The comparison obviously holds only for the subclass of systems studied in [4], that is, for systems defined by polynomial equations in the variables y, ẏ, ÿ, etc. over the field of rational functions of the variables u, u̇, ü, etc. For this class of systems the two notions practically coincide. Indeed, controllability is defined in [4] as R being differentially algebraically closed in the differential field K defined by the system equations (1). This just means that system (1) does not admit a variable ϕ_r ∈ K that satisfies an autonomous differential algebraic equation F(ϕ_r, ϕ̇_r, ..., ϕ_r^(µ)) = 0 for some µ ≥ 1. Note that in our paper F may be a meromorphic function. Moreover, the paper [4] does not provide a method for finding the accessible (irreducible) system description, though a method for checking accessibility is given in [17].
The algorithms for checking accessibility and reducing a non-accessible (reducible) system have been implemented in the computer algebra system Mathematica as a part of the package NLControl, devoted to modelling, analysis, and synthesis problems of nonlinear control systems. The functions are also available on the NLControl website www.nlcontrol.ioc.ee. The main benefit of the website is that the user does not need Mathematica to be installed on a local computer; only an internet connection and a browser are necessary to run the functions. Note the difference in terminology: on the NLControl website the accessibility of the i/o system can be checked using the function Irreducibility. The irreducible equations, which also represent the accessible subsystem, can be found by the function Reduction.
Example 2. Consider the system described by the i/o differential equations (20). System (20) satisfies conditions (2). The linearized equations of (20), as in (10), are determined by the polynomial matrices P and Q. According to Theorem 7, one can find a gcld G_L of P and Q by reducing the composite matrix [P | Q]. Following the algorithm in the proof of Theorem 5 yields G_L. Since G_L is not a unimodular matrix (the maximum degree of the polynomials on its main diagonal is higher than 0), system (20) is not accessible. It also means that the system can be reduced. That is, one can find the polynomial matrices P̄ and Q̄ that define the reduced linearized system equations P̄(s)dy = Q̄(s)du by solving P = G_L P̄ and Q = G_L Q̄ for P̄ and Q̄, respectively. In the reduction process we have assumed that y_2 ≠ 0 (i.e. y_2 ∈ S). This guarantees that the coefficients of the polynomials are from the field K. Now, by (17), the one-forms ω_1 and ω_2 may be found such that G_L(s)[ω_1, ω_2]^T = 0. Note that the one-form ω_2 is exact, ω_2 = dϕ_r2, but ω_1 is only closed, and one has to multiply it by the integrating factor y_2 to obtain the exact one-form dϕ_r1 = y_2 ω_1. Altogether, this means multiplying ω from the left by the unimodular matrix U = diag(y_2, 1), which can be considered as the choice of a different gcld Ḡ_L = G_L U^(-1). The reduced i/o equations (22) can be found by integrating dϕ_r1 and dϕ_r2. Note that system (20) can be rewritten as F_1(·) = ϕ_r1 = 0 and F_2(·) = 0, where F_2 is an expression in ϕ_r1, ϕ_r2, and their time derivatives, but the reduced Eqs (22) are not in the form (1). It is easy to notice that Eq.
(22) can be simplified further and transformed into the form (1), using the linear i/o equivalence transformations [11]. Namely, choosing ϕ̄_r1 := ϕ_r1 − ϕ̇_r2 = y_2 − u_2 y_2 + ẏ_1 and ϕ̄_r2 := ϕ_r1 − ϕ_r2 = u_1 y_2 − ẏ_2 yields the i/o equivalent system description (23). Replacing (22) by (23) may just be interpreted as a different choice of the gcld: the matrices G_L and Ḡ_L are column equivalent, being related by a unimodular matrix. Now the one-forms corresponding to the irreducible equations are simpler and are both exact. Their integration yields the irreducible i/o equations (23). According to Definition 14, systems (20) and (23) are transfer equivalent.

As recalled in the introduction, reduction of the i/o equations is an integral part of finding the minimal realization. The state-space description (24) is said to be a realization of the set of i/o equations (1) if equations (1) and (24) have the same solution sets {u(t), y(t)}. System (1) is called realizable if the state-space form (24) exists for it. The realizability property can be checked using the sequence of subspaces (4): system (1) has an observable realization in the form (24) iff the subspace H_{r+2}, r being the highest order of the input derivative in (1), is integrable. Integrating the basis vectors of H_{r+2} yields the state coordinates [3]. Moreover, note also that the realization of the i/o equations (1) is accessible iff Eqs (1) are irreducible [6].
Observe that the realization of the reducible Eqs (20) is, as expected, not accessible. Indeed, define the state variables for system (20) accordingly. According to [6], the system is accessible iff H_∞ = {0}. For the state equations above we obtain H_∞ = span_K{dx_3, dx_4, dx_2 + dx_5} ≠ {0}.
For Eq. (23) one may choose the state variables as x_1 = y_1 and x_2 = y_2; the state equations then take the form ẋ_1 = (u_2 − 1)x_2, ẋ_2 = u_1 x_2. For the latter system H_∞ = {0}, thus the accessibility condition is fulfilled.
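The accessibility of this reduced system can be cross-checked against the classical Lie-algebraic rank condition (our own check, not from the paper). The state equations ẋ_1 = (u_2 − 1)x_2, ẋ_2 = u_1 x_2 are control-affine, ẋ = f(x) + g_1(x)u_1 + g_2(x)u_2, and here the two input vector fields alone already span the state space wherever x_2 ≠ 0, so no Lie brackets are even needed:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# control-affine decomposition of x1' = (u2 - 1)*x2, x2' = u1*x2
f  = sp.Matrix([-x2, 0])     # drift vector field
g1 = sp.Matrix([0, x2])      # input vector field for u1
g2 = sp.Matrix([x2, 0])      # input vector field for u2

D = sp.Matrix.hstack(g1, g2)
print(D.rank())  # generic rank 2 (valid where x2 != 0): full state dimension
```

The generic rank equals the state dimension, which agrees with H_∞ = {0} above.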
The i/o equation in Example 3 below lacks the state-space form.
Example 3. Consider the model of the "ball and beam" [6,19], described by the SISO equation (25), where the angle u is the input of the system, y is the position of the ball, considered as the output of the system, and J, R, m, and g are constant parameters.
Compute H_1 = span_K{dy, dẏ, du, du̇}, H_2 = span_K{dy, dẏ, du}, and H_{r+2} = H_3 = span_K{dy, A dẏ − 2mR²yu̇ du}, where A = J + mR². The subspace H_3 is not integrable and thus the i/o equation (25) is not realizable. However, the accessibility of (25) may be checked directly by Theorem 11. The globally linearized equations (10) for system (25) are represented by the matrices P and Q. Since P and Q both contain a single element, the transformation of the composite matrix [P | Q] into the triangular form reduces to the computation of the gcld γ of two polynomials. For u̇ ∈ S we assume that u̇ ≠ 0. Since deg γ = 0, the matrix G_L = (γ) is unimodular and thus, by Theorem 11, system (25) is accessible.
Example 4. In the paper it was silently assumed that F(0, ..., 0) = 0 in (12), i.e. that ϕ_r ≡ 0 is a solution. This assumption is not natural in the nonlinear case, though it is widely used in the literature (for instance in [6]). However, in some (rare) cases such an assumption may yield a situation where an irreducible one-form for the system can be found, while the reduced i/o equations do not exist [26]. Such an example is given by the i/o equation

    ÿ u − ẏ u̇ = 0.   (26)
In [26] it was shown that the minimal realization of (26) is not accessible. The irreducible form of (26) is given by ϕ_r = ẏ/u, but taking ϕ_r = 0 results in the degenerate i/o equation ẏ = 0. For this reason system (26) is considered irreducible in [26] (and its realization minimal), and it serves as a counterexample to the claim that the minimal realization is accessible. This conclusion is in disagreement with the classical realization theory. Replacing the assumption F(0, ..., 0) = 0 by F(c, 0, ..., 0) = 0, as done in [2], allows us to avoid such a mismatch. However, understanding c as a variable leads to several irreducible equations (one for each fixed c). Establishing a theory without the restriction F(0, ..., 0) = 0 brings along unexpected results: a time-invariant reducible i/o equation of the form (1) may admit only a time-varying irreducible equation; thus the extension is neither trivial nor direct (see more in [15]).

CONCLUSIONS
The paper presented an accessibility definition and a computation-oriented necessary and sufficient accessibility condition for nonlinear multi-input multi-output differential equations, formulated in terms of the gcld of the polynomial matrices associated with the globally linearized system equations.
Note that the polynomials are defined over a differential field and, unlike in the linear case, belong to a non-commutative polynomial ring. On the basis of the above condition a constructive algorithm, using left Euclidean division, is suggested to examine the system accessibility. The proposed condition and algorithm are consistent with those for the linear system and for the SISO nonlinear system in [27]. Note that though the accessibility property can be checked within the polynomial approach, this is not so with the system reduction. The algorithm described in this paper results in the vector of one-forms ω = [ω_1, ..., ω_p]^T, related to the irreducible system description. To find the irreducible equations themselves, one has to integrate ω. Though the set of one-forms ω_1, ..., ω_p is, in principle, proved to be integrable, in general one has to multiply ω from the left by a unimodular polynomial matrix to make the components of the one-forms exact. This amounts to finding another gcld from the class of all column-equivalent gclds.
Recall that the reduction algorithm, based largely on the proof of Theorem 7, does not necessarily result in reduced equations of the form (1). Though there exists an algorithm to transform the reduced equations into such a form [11], it would be desirable to develop a reduction algorithm that yields the reduced equations directly in the form (1).

Definition 6. If three polynomial matrices satisfy the relation P = C_L Q, then C_L is called a left divisor of P and P is called a right multiple of C_L. The greatest common left divisor (gcld) of two polynomial matrices V and W is a common left divisor which is a right multiple of every common left divisor of V and W.

Theorem 7. Consider the pair {P, Q} of polynomial matrices which have the same number of rows. If the composite matrix [P | Q] is reduced to the lower left triangular form [G_L | 0] as in the proof of Theorem 5, then G_L is a gcld of P and Q.
