Static state feedback linearizability: relationship between two methods

The paper establishes the explicit relationship between two sets of necessary and sufficient conditions for static state feedback linearizability of a discrete-time nonlinear control system. A detailed algorithm is presented for finding the state coordinate transformation. Finally, the methods are compared from the point of view of computational complexity. Two examples illustrate the theoretical results.


INTRODUCTION
Feedback linearization has proved to be a tremendously useful tool in nonlinear control. We restrict ourselves to discrete-time nonlinear control systems described by state equations and to static state feedback linearization. The number of publications on static state feedback linearization of continuous-time nonlinear systems is huge, but the situation is different in the discrete-time case [1-6]. Most results, except [7], focus on smooth feedback. Note, however, that it is not our purpose to address approximate solutions of the problem (see [8] and the references therein).
The main aim of the paper is to find the explicit relationship between the two best-known sets of necessary and sufficient linearizability conditions. The first and earliest set of conditions is stated in terms of a sequence of nested distributions of vector fields associated with the control system [3]. The second set of conditions is given via a decreasing sequence of codistributions of differential one-forms [5]. Detailed algorithms are given for the computation of the sequences of distributions. Moreover, the connection between the two methods for finding the state coordinate transformation, corresponding to the above linearizability conditions, is clarified. Finally, the alternative conditions and methods are compared from the point of view of computational complexity, and experience from the respective implementations in the symbolic computation system Mathematica is discussed. Two examples are given to illustrate the theoretical results.
The paper is organized as follows. Section 2 states the static state feedback linearization problem. Proposition 1, proved in Section 3, specifies the vector fields for which the backward shift is a well-defined operator. A detailed algorithm for computation of the sequence of distributions (which relies on the backward shift) is then given, and finally the linearizability conditions in terms of vector fields are recalled. Section 4 recalls the linearizability conditions formulated in terms of codistributions, and Section 5 finds the explicit relationship between the two sets of conditions. Section 6 relates the methods for finding the new state coordinates, corresponding to the two alternative linearizability conditions. Section 7 contains the discussion and the examples, and Section 8 concludes the paper.

STATIC STATE FEEDBACK LINEARIZABILITY
Consider a discrete-time nonlinear control system Σ of the form (1), where f : M × U → M^+, and the variables x = (x_1, ..., x_n) and u = (u_1, ..., u_m) are the local coordinates of the state space M and the input space U, respectively; f is assumed to be an analytic function of its arguments, and M^+ is the forward-shifted state space with the coordinates x^+ = (x_1^+, ..., x_n^+). In the discrete-time case a local study around an arbitrary initial state is useless, since even in one step the state can move far away from the initial state, no matter how small the control is. One possibility is to work around an equilibrium point of the system. Another, more general possibility is to work around a reference trajectory that satisfies the system equations (1). In this paper we adopt the first option.

Definition 1. A point (x_0, u_0) ∈ M × U is called an equilibrium point of system (1) if f(x_0, u_0) = x_0.
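As a sanity check of Definition 1, the equilibrium condition f(x_0, u_0) = x_0 can be verified symbolically. The scalar map below is a hypothetical illustration, not one of the systems treated in the paper; a minimal sketch in SymPy:

```python
import sympy as sp

x, u = sp.symbols('x u')
# hypothetical scalar system x+ = f(x, u); not taken from the paper
f = x + u * sp.sin(x)

# candidate equilibrium: f(pi, 2) = pi + 2*sin(pi) = pi
x0, u0 = sp.pi, 2
is_equilibrium = sp.simplify(f.subs({x: x0, u: u0}) - x0) == 0
print(is_equilibrium)  # True
```

The same check works verbatim for vector-valued f with `sp.Matrix`.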
In the study of discrete-time nonlinear control systems the following assumption is usually made. It guarantees that the forward shift operator, defined by equations (1), is injective. Note that this assumption is not restrictive, as it is always satisfied for accessible systems [9].
Definition 3 [3,4]. System (1) is said to be static state feedback linearizable around its equilibrium point (x_0, u_0) if there exist
1. a state coordinate transformation S : M → M, defined on a neighbourhood X of x_0, with S(x_0) = 0,
2. a regular static state feedback of the form u = α(x, v), satisfying the condition α(x_0, 0) = u_0 and defined on a neighbourhood X × O of (x_0, 0),
such that, in the new coordinates z = S(x), the compensated system reads z^+ = Az + Bv, where the pair (A, B) is in Brunovsky canonical form.

LINEARIZABILITY CONDITIONS IN TERMS OF VECTOR FIELDS
In [3] the local linearizability conditions for system (1) around (x_0, u_0) are formulated in terms of a sequence of distributions D_k ⊂ T(M × U), k = 0, ..., n, associated with system (1). Here T(M × U) is the tangent bundle of the space M × U, whose fibres are the tangent vector spaces defined at each point (x, u) ∈ M × U. Define the distribution K as the kernel of the tangent map Tf of the state transition map f: Tf(K) = 0. Under Assumption 1, dim K = m. Moreover, the distribution K, being the kernel of a tangent map, is always involutive. Therefore one can choose a commuting basis {K_1, ..., K_m} of K such that [K_i, K_j] = 0 for i, j = 1, ..., m. In a neighbourhood of (x_0, u_0) the sequence of distributions D_k, k = 0, ..., n, is introduced by (4), where, provided the distribution D_k + K is involutive and D_k ∩ K is constant-dimensional, ∆_{k+1} can be found from (5) by applying the backward shift [3]. Note that ∆_{k+1} is defined by (5) as the span of the vector fields {Tf(D_k)}^−; however, these vector fields do not necessarily exist in general, since the backward shift operator is a well-defined operation on TM^+ iff the basis vectors of D_k respect the distribution K, i.e., iff condition (6) holds for every basis element Ξ of D_k and every vector field K_i.
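The distribution K = ker Tf can be computed concretely as the nullspace of the Jacobi matrix of f with respect to (x, u). The following sketch uses a hypothetical two-state, one-input map (an assumption for illustration, not the paper's example):

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
# hypothetical system: x1+ = x2, x2+ = x1 + u  (n = 2 states, m = 1 input)
f = sp.Matrix([x2, x1 + u])
Tf = f.jacobian([x1, x2, u])   # tangent map of the state transition map
K = Tf.nullspace()             # basis of the distribution K = ker Tf
print(len(K))  # dim K = m = 1
```

For this map the kernel is one-dimensional, matching dim K = m under Assumption 1.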
Consider an arbitrary vector field Ξ defined on the manifold M × U, as in (7). Multiplying (7) by the Jacobi matrix of the submersion f gives the vector field (8). Although this vector field belongs to TM^+, its components {Tf(Ξ)}_I are still expressed as functions of the coordinates (x, u). The vector field Tf(Ξ) ∈ TM^+ is defined on the manifold M^+ iff one can, using equations (1), write its components {Tf(Ξ)}_I in terms of the coordinates x^+ exclusively. The latter is possible iff the components of the vector field (8) can be expressed as the composite functions (9).

Proposition 1. The vector field (8) can be expressed in the form (9) iff (6) holds.
Proof. The submersion f defines a fibre bundle on the manifold M × U. The base manifold of this fibre bundle is M^+ with coordinates x^+. The fibres are the integral surfaces of the distribution K. By (3) one can choose the fibre coordinates to be the canonical parameters χ_i = g_i(x, u) of the vector fields K_j, as in (10). Therefore, the base coordinates x^+ and the fibre coordinates χ^+ are defined by the following coordinate transformation, denoted by F: x^+ = f(x, u), χ^+ = g(x, u). The corresponding Jacobi matrix is given by (11). Multiplying the vector field Ξ (see (7)) by TF yields a sum whose second term belongs to the distribution K. Its components can be expressed in terms of the base coordinates x^+ exclusively iff they do not depend on the fibre coordinates χ^+_i. According to (10), this is equivalent to the vanishing of the Lie derivatives in (12) for i = 1, ..., m, I = 1, ..., n. Due to (8), the components of Tf(Ξ) are in fact the scalar products (13). Replacing {Tf(Ξ)}_I in (12) by the right-hand side of (13) and using the differentiation rule of the scalar product, we obtain condition (6), which completes the proof. Example 1 in Section 7 illustrates both situations, i.e. the case when Tf(Ξ) cannot be shifted back and the case when it can; see formulas (37) and (38), respectively.
Below, detailed algorithms are given for the computation of the sequences D_k and ∆_k, respectively. Both algorithms require the state equations of the control system to be given. Algorithm 1 produces the sequence D_k, or reports that the computation of the sequence cannot be completed because either D_{k−1} + K is not involutive or D_{k−1} ∩ K is not constant-dimensional around the equilibrium point. In the latter case the control system is not static state feedback linearizable. Algorithm 2 computes the sequence ∆_k.

Algorithm 1: Computation of the sequence D k
Initialization. Define the distribution K as the kernel of the tangent map Tf of the state transition map f in (1), Tf(K) = 0, and D_0 as in (4). Set D_0 := span{Ξ_0}, such that the vector fields Ξ_0 respect K, set k := 1 and go to step k.
Step k. Check whether D_{k−1} + K is involutive and D_{k−1} ∩ K is constant-dimensional around (x_0, u_0). If not, then stop (the system is not static state feedback linearizable); otherwise continue.
Note that some vector fields added at the previous step (or, for step 1, at initialization) into D_{k−1} may not respect the distribution K. If this is so, find the same number of independent linear combinations of Ξ_0, ..., Ξ_{k−2} and [Tf(Ξ_{k−2})]^−, denoted by Ξ_{k−1}, that respect the distribution K, and go to the next step. Otherwise, the system is not static state feedback linearizable.
Note that N is the minimal integer such that dim ∆_N = n.

Algorithm 2: Computation of the sequence ∆_k

Algorithm 1 can be modified accordingly to compute the sequence ∆_k. In that case one starts with ∆_1 = span{Ξ_1} instead of D_0 and proceeds analogously at the kth step. The linearizability conditions for system (1) are formulated in the following theorem.

LINEARIZABILITY CONDITION IN TERMS OF DIFFERENTIAL ONE-FORMS
In [5] the linearizability conditions for system (1) are given in terms of the decreasing sequence of codistributions H_k ⊂ T*M, defined locally around the equilibrium point by (15). There exists an integer N* such that H_{N*+1} = H_∞, where H_∞ is the maximal codistribution invariant with respect to the forward shift. To compute the one-forms ω^+ ∘ Tf in (15), consider first the forward shift of an arbitrary one-form (16), obtained by replacing all the state coordinates x_I in (16) by their forward shifts x_I^+. The one-form ω^+ ∘ Tf is then obtained by replacing x^+ in the coefficients ω_J(x^+) by f(x, u). The linearizability conditions for system (1) are formulated in the following theorem.
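The construction of ω^+ ∘ Tf can be carried out mechanically: substitute x ↦ f(x, u) in the coefficients of ω and multiply the resulting row vector by the Jacobi matrix of f. A sketch with a hypothetical two-state map (chosen only for illustration, not taken from the paper):

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
# hypothetical state transition map: x1+ = x2, x2+ = x1*u
f = sp.Matrix([x2, x1 * u])
# one-form omega = x2 dx1, stored as a row vector of coefficients
omega = sp.Matrix([[x2, 0]])
# forward shift of the coefficients: replace x_I by f_I(x, u) (simultaneously)
omega_plus = omega.subs([(x1, f[0]), (x2, f[1])], simultaneous=True)
# compose with Tf: the result is a one-form on M x U
shifted = sp.expand(omega_plus * f.jacobian([x1, x2, u]))
print(shifted)  # Matrix([[0, u*x1, 0]]) -> omega+ o Tf = u*x1 dx2
```

Note the simultaneous substitution: sequential substitution could feed the new x_2 back into the already-substituted coefficient.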
In [5] the complete integrability has been checked via the Frobenius Theorem.
However, Theorem 4 below suggests an alternative test which does not require the computation of the exterior derivatives of the basis one-forms of H_k, nor of their wedge products with the basis one-forms of H_k. The proof of Theorem 4 is based on the following lemma.

Lemma 1. If the codistributions H_k, H_{k−1}, ..., H_1 are integrable, then, equivalently, one can define the state coordinates X adapted to H_k, H_{k−1}, ..., H_1.
Proof. The integrals of H_λ can also be defined as the coordinates of the subspace B_λ. This means that if an arbitrary 1-form ω ∈ H_λ is defined in the subspace B_λ, then its components can be expressed only in terms of the variables X^k_{r_k}, X^{k−1}_{i_{k−1}}, ..., X^λ_{i_λ}, the integrals of H_λ. Finally, according to the definition of the codistributions H_k, for an arbitrary 1-form ω ∈ H_k its forward shift ω^+ ∘ Tf belongs to H_{k−1}, which is assumed to be integrable. That is, the components of ω^+ ∘ Tf can be expressed only in terms of the integrals of H_{k−1}.

Theorem 4. If the codistribution H_k is not integrable, then there does not exist a complete cobasis for H_k whose basis 1-forms have components expressible without using the backward shifts of the state coordinates.
Proof. Suppose H_k is not integrable, but H_{k−1}, ..., H_1 are; then the number of independent integrals X^k_{r_k} of H_k equals dim H̄_k, where H̄_k ⊂ H_k is the maximal integrable subspace of H_k. Then one can define at least dim H_k − dim H̄_k independent 1-forms ω^k_{λ_k}, where dim H̄_k + 1 ≤ λ_k ≤ dim H_k, whose components cannot be expressed in terms of the integrals X^k_{r_k}. Consequently, we need some additional variables I^k_{μ_k} to express the components of ω^k_{λ_k}, since the other adapted coordinates X^{k−1}_{i_{k−1}}, ..., X^1_{i_1} cannot be used for this purpose, as will be shown below.
First we prove that the components of the forward shifts (I^k_{μ_k})^+ can be expressed in terms of the integrals of H_{k−1}, and alternatively also in terms of X^k_{r_k}, X^{k−1}_{i_{k−1}}. The latter yields that the forward shifts (I^k_{μ_k})^+ must also be integrals of H_{k−1}, and that the variables I^k_{μ_k} are the backward shifts of some integrals of H_{k−1}. Consequently, the components of the forward shifts can be expressed in terms of those integrals; at the same time, this agrees with the definition of the sequence of codistributions H_k for the 1-forms ω^k_{λ_k}. Let us prove now that the variables I^k_{μ_k} cannot be functions of the other variables. The proof is by contradiction. Suppose that one can express I^k_{μ_k} in terms of the adapted coordinates; that is, one can alternatively express the components of ω^k_{λ_k} ∈ H_k as functions of the variables X^k_{r_k}, X^{k−1}_{i_{k−1}}, ... Consequently, the components of (ω^k_{λ_k})^+ could be expressed in terms of the forward shifts of these variables. However, according to the definition of the codistributions, these forward shifts are not integrals of H_{k−1}, which leads to a contradiction.
To conclude, if H_k is not integrable, then the variables I^k_{μ_k} are the backward shifts of the coordinates X^{k−1}_{i_{k−1}}, but they cannot be expressed in terms of the coordinates X. Therefore, one cannot express the components of all the basis 1-forms of H_k in terms of the adapted coordinates X as in (17); one additionally needs the backward shifts of the state coordinates.
The converse statement of Theorem 4 also holds. Its proof is based on the application of the Frobenius Theorem and is straightforward.

RELATIONSHIP BETWEEN THE LINEARIZABILITY CONDITIONS
In order to find the explicit relationship between the two sets of linearizability conditions, formulated in Theorems 1 and 2, respectively, we first reformulate Theorem 1 in terms of the distributions ∆_k. To do so, we need the following lemma.
Proof. We first prove that dim ∆_{k+1} = dim D_k − dim(D_k ∩ K). Due to (2), D_k + K = span{Ξ_α, α = 1, ..., ρ_k; K_i, i = 1, ..., m}; here Ξ_α ∈ D_k, α = 1, ..., ρ_k, are linearly independent vector fields such that (21) holds. As K = ker Tf, one can rewrite (5) as (20), yielding the dimension claim. Because the number of vector fields Ξ_α is finite, all the possible Lie brackets of the vector fields Ξ_α and K_i must also belong to D_k + K. By (3), (22) holds. According to Proposition 1, the vector fields Tf(Ξ_α) exist iff the Ξ_α respect the distribution K, i.e., iff (23) holds. Finally, consider the third involutivity condition for D_k + K. We have to prove that all the possible Lie brackets of the vector fields Tf(Ξ_α) belong to the distribution ∆^+_{k+1}, defined by (20), since then and only then is the distribution ∆^+_{k+1} (and hence also ∆_{k+1}) involutive. Applying the tangent map Tf to (23) and taking into account Tf(K) ≡ 0 yields (24) for all α and β. According to the property of Lie brackets (see, e.g., [4], p. 50), (25) holds and, consequently, because of (24), (26) follows, proving the involutivity of ∆_{k+1}. Conversely, assume that ∆_{k+1}, as the backward shift of ∆^+_{k+1} defined by (20), is involutive. Then (26) holds and, due to (25), (23) must also hold. Since the conditions (21) and (22) are also satisfied, the involutivity of D_k + K follows.
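The involutivity checks in this proof reduce to computing Lie brackets [X, Y] = (∂Y/∂x)X − (∂X/∂x)Y in coordinates. A small sketch with two hypothetical vector fields whose bracket leaves their span:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
v = [x1, x2, x3]

def lie_bracket(X, Y):
    """[X, Y] = (dY/dx) X - (dX/dx) Y for column vector fields on coordinates v."""
    return sp.simplify(Y.jacobian(v) * X - X.jacobian(v) * Y)

# two hypothetical vector fields
X = sp.Matrix([1, 0, x2])
Y = sp.Matrix([0, 1, 0])
print(lie_bracket(X, Y))  # Matrix([[0], [0], [-1]]): not in span{X, Y}
```

Since the bracket is not in span{X, Y}, the distribution span{X, Y} is not involutive.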
We are now ready to reformulate Theorem 1.
Proof. The proof is by induction. First we prove that the codistribution H_2 is the maximal annihilator of the distribution ∆_1. Due to definitions (4) and (15), obviously (27) holds. The left-hand side of (27) may be interpreted as the product of three matrices. The matrix H_2^+ is a dim H_2 × n matrix whose rows are the basis elements of H_2^+. The second, Tf, is the n × (n + m) Jacobi matrix of the state transition map f. The third matrix, D_0, is the (n + m) × m matrix whose columns are the basis vectors of D_0. Due to the associativity of matrix multiplication, one can rewrite (27) as ⟨H_2^+, Tf(D_0)⟩ ≡ ⟨H_2^+, ∆_1^+⟩ ≡ 0, and since the value of a constant does not change under the backward shift, the latter yields ⟨H_2, ∆_1⟩ ≡ 0. Therefore H_2 annihilates ∆_1 on the manifold M.
To show that H_2 is also the maximal annihilator of ∆_1, suppose to the contrary that there exists a 1-form ω ∈ T*M not belonging to H_2 but still annihilating ∆_1: ⟨ω, ∆_1⟩ = 0 and ω ∉ H_2. The relationship ⟨ω, ∆_1⟩ = 0 must remain valid after applying the forward shift and the associativity of matrix multiplication, which yields (28). According to (28), ω^+ ∘ Tf must belong to H_1, as the maximal annihilator of D_0. From the above, we have ω^+ ∘ Tf ∈ H_1 and ω ∉ H_2, which leads to a contradiction. Therefore, H_2 is the maximal annihilator of ∆_1 on the manifold M, or, equivalently, due to (4), the maximal annihilator of D_1 on the manifold M × U.
Next suppose that H_k is the maximal annihilator of D_{k−1} on the manifold M × U. Then ⟨H_k, D_{k−1}⟩ = 0. According to definition (15) and, again, due to the associativity of matrix multiplication and the invariance of a scalar product with respect to the (backward) shift, we may write ⟨H_{k+1}, ∆_k⟩ ≡ 0. In order to show that H_{k+1} is also the maximal annihilator of ∆_k on M, suppose to the contrary the existence of a one-form θ ∈ T*M such that θ ∉ H_{k+1} and ⟨θ, ∆_k⟩ = 0. As before, one can show that this leads to a contradiction. Consequently, H_{k+1} is the maximal annihilator of ∆_k on M. Moreover, because dim ∆_N = n and H_{N*+1} is the first zero codistribution in the sequence, also N* = N.

Corollary 1.
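The annihilator relation in this proof is straightforward to compute in coordinates: the maximal annihilator of a distribution spanned by the columns of a matrix D is the left nullspace of D. A hypothetical three-dimensional illustration (the fields are ours, chosen only to show the mechanics):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# hypothetical distribution: columns of D span it
D = sp.Matrix([[1, 0],
               [0, 1],
               [x2, 0]])
# maximal annihilator: 1-forms omega (row vectors) with omega * D = 0,
# i.e. the left nullspace of D
H = D.T.nullspace()
print([w.T for w in H])  # one basis 1-form: -x2 dx1 + dx3
```

The dimension count dim H = n − dim D mirrors the duality between the sequences ∆_k and H_{k+1}.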

For all k = 1, ..., N − 1, involutivity and constant dimensionality of the distribution ∆_k imply integrability and constant dimensionality of the codistribution H_{k+1}, and vice versa.
Proof.Follows directly from Theorem 6.
The result of Corollary 1 enables one to combine the two methods. This is done in Section 6, where the state coordinate transformation is completed by using the subspaces H_k even if one checks the linearizability condition via the sequence of distributions ∆_k.

METHOD FOR FINDING THE NEW STATE COORDINATES
The new state coordinates z_I, necessary for the static state feedback linearization of system (1), can be found by integrating the basis vectors of the codistributions H_k, k = 1, ..., N*, constructed in a specific way by the detailed algorithm below; see also [5]. Note that the basis vectors of the codistributions H_{k+1}, found either by (15) or by the method in [5], are not necessarily exact, and the state coordinates cannot be found from an arbitrary exact basis. When the linearizability conditions are satisfied, H_{k+1} is integrable, and one can always find an exact basis for H_{k+1}, which can then be used for finding the new coordinates.
The shifts of z^{N−k+1}_{i_{N−k+1}} are defined as the next subset of the new state coordinates. Check whether all new state coordinates have been found; if yes, then stop. Otherwise set k := k + 1 and go to the next step.
Alternatively, one may compute, according to Algorithm 2, the sequence of distributions ∆_k = span{Ξ_1, ..., Ξ_k}, k = 1, ..., N. Again, since the basis vectors of H_{k+1} are not necessarily exact, the vector fields Ξ_1, ..., Ξ_k do not necessarily commute. Therefore, one has to replace them by commuting linear combinations Ξ̃_1, ..., Ξ̃_k such that ∆_k = span{Ξ̃_1, ..., Ξ̃_k}. Next, define the canonical parameters of the vector fields Ξ̃_l that satisfy the condition ⟨dX^k_{i_k}, Ξ̃^l_{i_l}⟩ = δ_{kl} δ_{i_k i_l}. By Theorem 6, the canonical parameters X^k of the distribution ∆_k provide an integrable basis for the codistribution H_{k+1}, k = 1, ..., N − 1. However, this set of state coordinates X^k is in general not yet suitable for linearization. Therefore, one may continue with Algorithm 3 to find the new state coordinates.

DISCUSSION AND EXAMPLES
We have implemented both methods for checking feedback linearizability, as well as the method for finding the new state coordinates, in the package NLControl, built within the computer algebra system Mathematica. Since the functions from the package NLControl cannot be used outside the Mathematica environment, we have developed a webMathematica-based application that makes the functions from NLControl available on the web, so that no software except an internet browser is necessary. The developed webpage is available at http://webmathematica.cc.ioc.ee/webmathematica/NLControl/ .
As the first step of the computations, the backward shift operator, defined by the system equations, has to be found, as described in [5]. For that one has to solve a system of n nonlinear equations with respect to n variables. The solution of such a system may have a much more complex expression than the state transition map of the original control system, and it may also happen that Mathematica is not able to find the solution. In a few cases the solution cannot, in principle, be expressed in terms of elementary functions. Even in the latter case it is still possible, though extremely rare, that the system is feedback linearizable.
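The inversion step described above can be sketched as follows; the two-state system here is a hypothetical example in which the n equations are easily solvable (we use SymPy merely to illustrate the computation performed by NLControl):

```python
import sympy as sp

x1, x2, u, xp1, xp2 = sp.symbols('x1 x2 u xp1 xp2')
# hypothetical system: x1+ = x2, x2+ = x1 + u*x2
# backward shift: solve the n equations for (x1, x2) in terms of (x1+, x2+)
sol = sp.solve([sp.Eq(xp1, x2), sp.Eq(xp2, x1 + u * x2)], [x1, x2], dict=True)
print(sol)  # x2 = x1+, x1 = x2+ - u*x1+
```

For systems that are not polynomial or rational in the state, this solve step is exactly where the computation may fail.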
In principle, if the system is linearizable, then the linearization procedure can be carried out without knowing the backward shift operator. In practice, however, this may be complicated: when computing the backward shift of a one-form (or, alternatively, of a vector field), variables at negative time instants may occur, and they have to be eliminated by certain replacement rules defined by the control system. The backward shift operator gives explicit rules for replacing these variables; otherwise one has to try to eliminate them just by using the relations imposed by the original control system equations. The latter method is much less reliable and works successfully only for small systems. For instance, to find the backward shift operator for the system (29) one has to express x_2 from x_1^+ = u x_2 sin x_2, which is beyond the abilities of Mathematica 7.0. However, if the method from [5] (see the proof of Lemma 4.3) is applied to find H_2 = span{−dx_1 + (x_2 sin x_2)^− dx_2} for system (29), then we actually need only the backward shift of the product x_2 sin x_2. The system is feedback linearizable by the state transformation z = (x_1/x_2, x_2 sin x_2). But it is not possible to express the static state feedback in the form u = α(x, v), because v = u sin u and thus u cannot be expressed in terms of elementary functions. However, replacing sin x_2 by e^{x_2}, for example, in (29) yields a system for which the backward shift operator can be found easily, since Mathematica is able to solve the corresponding equation.

To compute the sequence of codistributions of differential one-forms, three alternative methods are available. The first uses the method from [5], the second is based on the formula H_{k+1} = (H_k ∩ H_k^+)^−, for k ≥ 1, and the third computes the sequence of distributions ∆_k and then finds the H_{k+1} as their maximal annihilators.
In the majority of cases the fastest method is the first one, based on the method from [5]. Even if our goal is only to check the feedback linearizability property of the system, in which case, using the third method, we may compute only the distributions ∆_k and check their involutivity, the first method still works faster in most cases. Of course, there exist exceptions, often characterized by the occurrence of exponential and logarithmic functions, for which the distributions ∆_k give the result faster. Unfortunately, it is not possible to predict the fastest linearization method merely by visual inspection of the function f.
The only difficulty known to us that may occur when using the first method (and it occurs extremely rarely) is that, in case the expressions found at the previous steps have not been simplified enough, Mathematica may be unable to solve the system of equations and the computation fails.
The second method turns out to be slower than the first, but in most cases it is faster than the third. If the basis vectors of H_k are not simplified enough, some basis vectors can be lost from the intersection H_k ∩ H_k^+. On the other hand, adding too many simplification commands to the program makes it slow.
If we only need to check the feedback linearizability property, an additional method is possible: instead of the subspaces H_k, one may use the subspaces I_k, introduced in [10] and defined by I_1 = span_K{dx(0)}. Between the sequences H_k and I_k the relations I_∞ = H_∞ and I_k = δ^{k−1} H_k hold, where δ is the forward-shift operator. Moreover, the subspace I_k is completely integrable iff H_k is completely integrable. This method is especially useful when it is complicated to find the backward shift operator, or when it cannot be found at all; then the I_k provide the only practical way of checking feedback linearizability. On the other hand, if the function f is complex and the backward shift operator is defined by a simple expression, then the computation of the I_k usually takes more time than that of the H_k and ∆_k. In case the subspaces are not integrable, the expression of I_k is usually more complex than that of the corresponding H_k.
Note that the algorithm for finding the state coordinates actually requires the integration (solution) of differential equations and is thus constructive only if the latter subproblem is solvable. Though over the years the capabilities of Mathematica to integrate one-forms for medium-size, medium-complexity problems have improved, its facilities for this task are still not good enough. To improve the capabilities of Mathematica in this respect, we have implemented an additional function IntegrateOneForms. This function replaces the integration of a set of one-forms by the solution of a sequence of linear homogeneous PDEs and is based on the algorithm described in [11]; see also [12]. Despite this extension, the solution of the set of partial differential equations is still often unsuccessful, especially for high-complexity examples.
The final change of variables, which represents the system in the new state coordinates, again requires the inverse state transformation to be found and is thus a potential point of failure. The latter transformation is sometimes quite a time-demanding operation and may take more time than all the previous computations together.
Note that none of the methods discussed in this paper is adapted for approximate calculations; therefore, all decimal fractions are transformed into rational numbers.
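This rationalization step can be illustrated as follows; `nsimplify` is SymPy's counterpart of the conversion described above (the use of SymPy rather than Mathematica is our assumption, for illustration only):

```python
import sympy as sp

# the symbolic routines require exact coefficients, so decimal
# fractions are converted to rational numbers beforehand
r = sp.nsimplify(0.25)
print(r)  # 1/4
```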
Finally, note that the results on state feedback linearization with Mathematica for continuous-time nonlinear systems have been reported in [13] and with Maple in [14].
To illustrate the results of this paper, we provide two examples.One of them is an academic example, the other is the model of a truck trailer.
Example 1. Consider the system (30). One may define the new state coordinates z from the sequence of codistributions H_1 = span{dx_1, dx_2, dx_3}, H_2 = span{dx_3}, computed for equations (30). According to the results of Section 6, H_2 defines the first new state coordinate z^2_1 = x_3. The next new state coordinate is its forward shift. Finally, to complete the span of the 1-forms dz^2_1 and dz^1_2 to the cobasis H_1 of the state space, we add the 1-form dz^1_1 = dx_1. Consequently, the last new state coordinate is z^1_1 = x_1. The state equations in the new coordinates yield the linear equations after applying the corresponding static state feedback.

Alternatively, one may find the new state coordinates from the sequence of distributions ∆_k. Compute the tangent map Tf of the state transition map and its kernel K = ker Tf = span{K_1, K_2}. Next, compute, according to (5), the distribution D_0. The distribution D_0 + K is involutive, as both K and D_0 are involutive. Next, observe that the vector field ∂/∂u_1 commutes with the vector fields K_1 and K_2, but the vector field ∂/∂u_2 does not. According to Proposition 1, one cannot, due to (36), express the components of the vector field Tf(∂/∂u_2) in terms of x^+ exclusively; indeed, see (30) and (37). One then chooses another basis for D_0 that respects K; see (38).

Example 2.
Consider the model of a truck trailer [15], given by (44), where x_1 is the angle between the truck and the trailer, x_2 is the angle between the trailer and the road, x_3 and x_4 are the vertical and horizontal positions of the rear end of the trailer, respectively, L is the length of the trailer, l is the length of the truck, t is the sampling interval, and v is the velocity of the truck. The input u is the steering angle. We are going to check, using Theorems 5 and 2, respectively, whether system (44) is static state feedback linearizable. Compute first, for (44), the distribution ∆_1, spanned by the vector field (45). In order to simplify the computations, we multiply the vector field (45) by the constant l/(tv), obtaining ∆_1 = span{∂/∂x_1}. Obviously, the distribution ∆_1 is involutive. Applying the tangent map Tf to the distribution ∆_1 = span{∂/∂x_1}, we obtain ∆_2^+. The distribution ∆_2^+ (and therefore also ∆_2) is not involutive, and therefore, according to Theorem 5, system (44) is not static state feedback linearizable.
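The involutivity test applied to ∆_2 can be automated by a rank check: a distribution is involutive iff adjoining every pairwise Lie bracket of its spanning fields does not raise the rank of the spanning matrix. A sketch with hypothetical vector fields (not the truck-trailer fields themselves):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
v = [x1, x2, x3]

def involutive(fields):
    """Rank test: a distribution is involutive iff adding all pairwise
    Lie brackets of its spanning fields does not raise the rank."""
    M = sp.Matrix.hstack(*fields)
    r = M.rank()
    for i in range(len(fields)):
        for j in range(i + 1, len(fields)):
            X, Y = fields[i], fields[j]
            bracket = Y.jacobian(v) * X - X.jacobian(v) * Y
            if sp.Matrix.hstack(M, bracket).rank() > r:
                return False
    return True

# hypothetical fields: the second pair fails the test
print(involutive([sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0])]))   # True
print(involutive([sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, x1])]))  # False
```

The rank comparison must be made pointwise around the equilibrium in general; the generic symbolic rank used here suffices for illustration.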
Alternatively, one may compute the codistributions H_k. Again, it is easy to check, for example via the Frobenius Theorem, that the codistribution H_3 is not integrable, and therefore, according to Theorem 2, system (44) is not static state feedback linearizable.

CONCLUSION
In this paper the problem of static state feedback linearizability of a discrete-time nonlinear control system has been addressed. The paper focuses on establishing the explicit relationship between the two sets of necessary and sufficient linearizability conditions. The first set of conditions is formulated in terms of involutivity of an increasing sequence of certain distributions of vector fields. The second set is formulated in terms of integrability of a decreasing sequence of codistributions of differential one-forms. We have demonstrated that the distributions used in the first set of conditions are the maximal annihilators of the corresponding codistributions used in the second set. Moreover, the two methods have been compared from the point of view of computational complexity. Note that the new state coordinates are the canonical parameters of the commuting basis vector fields (with specific properties) of the distribution ∆_N and also the integrals of the corresponding codistribution H_1. The method based on one-forms is simpler than that based on vector fields, since in the latter case the new state coordinates are computed in two steps. At the first step one finds the canonical parameters of an arbitrary commuting basis of vector fields of the distribution ∆_N, which actually corresponds to finding the integrals of the corresponding codistribution H_1. At the second step one modifies the integrals of H_1 to obtain integrals with the specific properties.