We often come across a vector-valued function that needs linearization. Depending on its form, doing this by hand can be tedious and error-prone. In this post, let us use a computer algebra system, Maxima, to get a linear approximation of a vector-valued function.
The principle here is that we use the Taylor series to obtain a linear approximation of the function around a point. Read this post from Khan Academy, which explains the concept quite well.
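In symbols, the first-order (linear) Taylor approximation that both listings below implement is, for a vector-valued function f with Jacobian J_f and linearization point x0:

```latex
f(\mathbf{x}) \;\approx\; f(\mathbf{x}_0) \;+\; J_f(\mathbf{x}_0)\,(\mathbf{x} - \mathbf{x}_0)
```

The higher-order terms are dropped, so the approximation is only good near the chosen point.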
We could use SymPy, a Python library for symbolic algebra. It is a lot more intuitive than Maxima (at least for me). Here is the code:
from sympy import *
init_printing( use_unicode=True )

# Define the symbols
x1, x2, u = symbols( 'x1 x2 u' )
a, b, c = symbols( 'a b c' )

# Define the function we are going to linearize
f = Matrix( [x1**2 + sin(x2) - 1, -x2**3 + u] )
f_at = f.subs( [(x1, a), (x2, b)] )

J = f.jacobian( [x1, x2] )
J_at = J.subs( [(x1, a), (x2, b)] )

delta = Matrix( [x1, x2] ) - Matrix( [a, b] )

# Taylor series linearization
f_approx = (f_at + J_at * delta).expand()

# Getting the coefficient matrix
symbols_list = (x1, x2, u)
coef = Matrix( [ [rowi_of_f.coeff(s) for s in symbols_list]
                 for rowi_of_f in f_approx ] )

# A * (x1, x2, u) + b
A = coef
b = f_approx - coef * Matrix( [x1, x2, u] )
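As a quick sanity check (this snippet is my addition, not part of the original listing), we can substitute a concrete linearization point, a = 1 and b = 0, and confirm that the SymPy pipeline gives the same linear model as the Maxima session later in the post:

```python
from sympy import symbols, sin, Matrix, simplify

x1, x2, u, a, b = symbols('x1 x2 u a b')
f = Matrix([x1**2 + sin(x2) - 1, -x2**3 + u])

# Same steps as the listing above
f_at = f.subs([(x1, a), (x2, b)])
J_at = f.jacobian([x1, x2]).subs([(x1, a), (x2, b)])
delta = Matrix([x1, x2]) - Matrix([a, b])
f_approx = (f_at + J_at * delta).expand()

# Linearize around (a, b) = (1, 0): should reduce to [2*(x1 - 1) + x2, u]
g = f_approx.subs([(a, 1), (b, 0)])
print(g)
```

Since f is already linear in u, it is enough to expand in x1 and x2 and carry u along symbolically.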
Now let us get a linear approximation of the same example function using Maxima. Here is the code:
/* The input function */
f( x1, x2, u ) := [ x1^2 + sin(x2) - 1, -x2^3 + u ];

/* Jacobian of this function */
J( x1, x2, u ) := jacobian( f(x1, x2, u), [x1, x2, u] );

/* Get the Jacobian at the point of linearization */
J_at(a,b,c) := at( J(x1,x2,u), [x1=a, x2=b, u=c] );

/* Taylor series expansion */
g(x1,x2,u, a,b,c) := f(a,b,c) + J_at(a,b,c) . ( [x1, x2, u] - [a,b,c] );

/* What we need: */
g( x1,x2,u, 1,0,0 );
(%o8)  [ x2 + 2*(x1 - 1) ]
       [ u               ]

/* Can also try another linearization point: */
g( x1, x2, u, 10, 2, 0 );
(%o11) [ cos(2)*(x2 - 2) + 20*(x1 - 10) + sin(2) + 99 ]
       [ -12*(x2 - 2) + u - 8                         ]

/* g --> A . [x1, x2, u] + b */
q: list_matrix_entries( g( x1, x2, u, a,b,c ) );
A: coefmatrix( q, [x1, x2, u] );
b: expand( q - A . [x1,x2,u] );
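If you want to double-check the Maxima result (%o11) without redoing the algebra by hand, here is a small SymPy cross-check (my addition; the expected matrix is transcribed from the Maxima output above):

```python
from sympy import symbols, sin, cos, Matrix, simplify

x1, x2, u = symbols('x1 x2 u')
f = Matrix([x1**2 + sin(x2) - 1, -x2**3 + u])

# Jacobian in all three variables, as in the Maxima session
J = f.jacobian([x1, x2, u])

# Linearize around (x1, x2, u) = (10, 2, 0)
point = {x1: 10, x2: 2, u: 0}
g = f.subs(point) + J.subs(point) * (Matrix([x1, x2, u]) - Matrix([10, 2, 0]))

# The Maxima result (%o11), transcribed by hand
expected = Matrix([cos(2)*(x2 - 2) + 20*(x1 - 10) + sin(2) + 99,
                   -12*(x2 - 2) + u - 8])

print(simplify(g - expected))  # a zero matrix if the two agree
```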
The following gives the same result as above, and it is also less error-prone, since you don't need to construct the Jacobian yourself.
(%i1) f( x1, x2, u ) := matrix( [ x1^2 + sin(x2) - 1 ], [ -x2^3 + u ] );
(%i2) SS( x1, x2, u ) := taylor( f(x1,x2,u), [x1,x2,u], [a,b,c], 1 );
(%o2)/T/ [ (sin(b) + a^2 - 1) + (2*a*(x1 - a) + cos(b)*(x2 - b)) + . . .,
           (c - b^3) + (-3*b^2*(x2 - b) - c + u) + . . . ]

q: list_matrix_entries( SS( x1,x2,u ) );
A: coefmatrix( q, [x1,x2,u] );
b: expand( q - A . [x1,x2,u] );
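A side note on the coefficient extraction: in SymPy, linear_eq_to_matrix can play the role of Maxima's coefmatrix, replacing the manual coeff loop from the earlier listing. Mind the sign convention: linear_eq_to_matrix returns A and rhs with A·x = rhs, so the affine offset b in this post's convention (g = A·x + b) is -rhs. A sketch, using the linearization around (1, 0, 0) taken from the Maxima output (%o8) above:

```python
from sympy import symbols, Matrix, linear_eq_to_matrix

x1, x2, u = symbols('x1 x2 u')

# Linearization of the example function around (a, b, c) = (1, 0, 0)
g = Matrix([2*(x1 - 1) + x2, u])

# linear_eq_to_matrix treats each expression as "expr = 0" and returns
# A, rhs such that A . [x1, x2, u] = rhs
A, rhs = linear_eq_to_matrix(list(g.expand()), [x1, x2, u])
print(A)     # coefficient matrix
print(-rhs)  # the offset b, in the g = A . x + b convention
```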