What’s the point of studying differential equations? Can’t we do away with them? As a computer science or ECE systems student, I almost never see them applied. Are they like analog systems, studied mostly for legacy reasons? I used to side with these arguments until very recently. But now I have totally changed sides.
In this blog post, I will try to give a snapshot of where differential equations are used in real life. I will approach them from a control systems point of view, as I aim to write a short series on practical control systems. An understanding of differential equations is central to the theoretical understanding of control systems and to hacking them to your heart’s content. In particular, it teaches you how to model a system as a differential equation (or a difference equation for truly discrete systems).
Differential equations are a compact way to model the behavior of a system in terms of its state variables. What do I mean by state variables? Consider a simple pendulum. Say you perturb the pendulum; this results in simple harmonic motion. We can model its motion in terms of the $(x, y)$ coordinates of the pendulum bob. In this case, $x$ and $y$ are the state variables.
You will, however, quickly notice that it is rather hard on your brain to think of the motion in terms of $(x, y)$. Instead, one can model the motion in terms of the angle of the rope with the vertical, $\theta$. In this case, $\theta$ is the state variable.
Equations of Motion
You have probably heard of Newton’s equations of motion from classical mechanics. Although they can describe simple motions, as one probably saw in high school, they get terribly complicated for even moderately larger mechanical systems. I have put up some examples where Newton’s equations can be derived but are terribly time-consuming, error-prone, intractable, and cause brain-pain ;). This renders Newton’s equations nearly useless from a practical control systems standpoint.
Luckily, there is a reformulation of Newton’s equations by Joseph-Louis Lagrange. Yes, this is the same famous Lagrange. It is known as Lagrangian mechanics, but essentially it is just a reformulation of classical mechanics that makes analyzing systems like the ones above possible without brain-pain.
Here, I will give a quick intro to it and one example. For the fascinated physics person, I will refer to an interesting YouTube series on Lagrange equations. The equations I am going to describe are often referred to as the Euler-Lagrange equations.
$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) - \frac{\partial L}{\partial q} = 0$$

where $L = T - V$ (Kinetic Energy – Potential Energy) and $q$ is the state variable. If there are multiple state variables, one writes an Euler-Lagrange equation for each state variable, giving rise to multiple equations, each of which needs to be satisfied simultaneously.
Example: Simple Pendulum
Let’s consider the simple pendulum with $\theta$ as the state variable, and derive the differential equation that describes the basic motion of the simple pendulum. See the handwritten notes below, or see the PDF here. Try to derive it yourself; it’s not hard. Note that $g$ (gravitational acceleration) and $l$ (length of the pendulum) are constants.
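As a quick sketch of the derivation (assuming a massless rod of length $l$, a bob of mass $m$, and $\theta$ measured from the vertical):

$$T = \tfrac{1}{2} m l^2 \dot{\theta}^2, \qquad V = -mgl\cos\theta, \qquad L = T - V = \tfrac{1}{2} m l^2 \dot{\theta}^2 + mgl\cos\theta$$

Plugging $L$ into the Euler-Lagrange equation with $q = \theta$:

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{\theta}}\right) - \frac{\partial L}{\partial \theta} = m l^2 \ddot{\theta} + mgl\sin\theta = 0 \quad\Longrightarrow\quad \ddot{\theta} + \frac{g}{l}\sin\theta = 0$$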
Now, this differential equation fully describes the motion of the pendulum. It is compact. Of course, to get $\theta(t)$, one can solve the differential equation, if need be.
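The pendulum equation has no simple closed-form solution for large angles, but it is easy to integrate numerically. Here is a minimal sketch in Python (the values of $g$, $l$, and the initial angle are arbitrary example choices):

```python
# Numerically integrate the pendulum equation theta'' = -(g/l) sin(theta)
# by rewriting it as two first-order equations in [theta, theta_dot].
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0  # gravitational acceleration (m/s^2) and pendulum length (m)

def pendulum(t, state):
    theta, omega = state  # state variables: angle and angular velocity
    return [omega, -(g / l) * np.sin(theta)]

# Start at a small angle (0.1 rad) with zero velocity and simulate 10 seconds.
sol = solve_ivp(pendulum, (0.0, 10.0), [0.1, 0.0],
                t_eval=np.linspace(0.0, 10.0, 500))
theta = sol.y[0]  # theta(t) sampled at the requested time points
```

For small initial angles the result closely matches the familiar small-angle solution $\theta(t) \approx \theta_0 \cos(\sqrt{g/l}\, t)$, which is a handy sanity check.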
More examples of Euler-Lagrange equations.
In conclusion, using the Euler-Lagrange equations one can derive the differential equation that describes a system. Such differential equations can be derived for many real systems: the motion of a drone, car, boat, airplane, inverted pendulum, motorbike, etc. Virtually any system that moves in a controlled fashion has a differential equation describing it, and that equation can be derived via the Euler-Lagrange equations. I believe one can safely forget about Newton’s equations, as the Euler-Lagrange equations are a convenient reformulation.
I would recommend the YouTube series by Michel Van Biezen if you wish to see the Euler-Lagrange equations applied to progressively more complicated systems.
Differential Equations to Control Systems
The next logical question is how to design a control system using these differential equations. This is already a rather large topic; here, I shall give a brief motivation of how differential equations and control systems are related, with more details in future blog posts.
The simplest and most widely used control algorithm is Linear Feedback control. For this, one starts with the system described in the standard form:
$$\dot{x} = Ax + Bu$$

Here $x$ is a vector of state variables and $\dot{x}$ is its time derivative. Ignore the input term $Bu$ for now; we shall get to it in future blog posts in this series. After one derives the Euler-Lagrange equations, typically one linearizes them at a particular operating point using a Taylor series expansion to bring them into this standard form of Linear Feedback control. Finally, the stability and controllability of the system depend on the eigenvalues of $A$ (for those aware of linear algebra, every eigenvalue of $A$ must have a negative real part for the system to be stable). There is a lot of very interesting theory here, which I hope to cover in the simplest possible way in future blog posts.
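As a concrete sketch of this eigenvalue check, consider the inverted pendulum (the pendulum balanced upright). Linearizing $\ddot{\theta} = (g/l)\sin\theta$ about $\theta = 0$ gives $\dot{x} = Ax$ with $x = [\theta, \dot{\theta}]$; the values of $g$ and $l$ below are example choices:

```python
# Stability check via the eigenvalues of A for the linearized
# inverted pendulum: theta'' ≈ (g/l) theta near the upright position.
import numpy as np

g, l = 9.81, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])  # x = [theta, theta_dot], x_dot = A x

eigvals = np.linalg.eigvals(A)
print(eigvals)  # one eigenvalue has a positive real part -> unstable
```

The eigenvalues come out as $\pm\sqrt{g/l}$; the positive one tells us the upright equilibrium is unstable, which matches physical intuition.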
Next, if the system is heading towards instability, one can choose the feedback appropriately (the standard choice in linear feedback control is $u = -Kx$ for some gain matrix $K$) to make the whole closed-loop system stable.
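Continuing the inverted-pendulum example, a feedback law $u = -Kx$ turns the dynamics into $\dot{x} = (A - BK)x$, and we can pick gains that push all eigenvalues into the left half-plane. The gains below are hand-picked example values, not a principled design:

```python
# Sketch: stabilizing the linearized inverted pendulum with u = -K x,
# so the closed-loop dynamics become x_dot = (A - B K) x.
import numpy as np

g, l = 9.81, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])  # unstable open-loop dynamics (upright pendulum)
B = np.array([[0.0],
              [1.0]])         # the input directly drives the angular acceleration

K = np.array([[20.0, 5.0]])   # example feedback gains (hand-picked, not optimal)
A_cl = A - B @ K              # closed-loop system matrix

eig_cl = np.linalg.eigvals(A_cl)
print(eig_cl)  # both eigenvalues now have negative real parts -> stable
```

With these gains the closed-loop characteristic polynomial is $\lambda^2 + 5\lambda + (20 - g/l)$, whose roots both have negative real parts, so the upright position is stabilized.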
That is really it…!
Hopefully, this post puts things in perspective and sets a base for my future posts on control systems.