Module 3 - Introduction to Linear State-Space Control Theory
#Physics #Engineering #Robotics #Control_Theory #Linear_Algebra #Math
Table of Contents:
A) Introduction to Linear Systems and State-Space Representation (🎦 video 3.1)
Video: https://youtu.be/kQNUpNh6nBc
A.1) Goals of this Module: Linear Systems Overview
This module focuses on:
- Systematic design choices.
- Modeling systems with [[linear systems]] for compact and general representations.
Key Objectives:
- Develop a model that is:
- Rich: Captures robotic system behavior.
- Manageable: Simple enough for practical use.
- Use linear systems for general and effective modeling.
- Linear Systems: Represent dynamic systems compactly and effectively.
- Transition from specific systems (e.g., point masses) to state-space representations.
A.2) Point Mass Dynamics (A Simple Robot)
A.2.1) Initial Representation: using derivatives to model our System
- Robot Position: $p$
- Control Input: $u$
- Acceleration: $\ddot{p} = u$ (we command the acceleration directly)
A.2.2) State-Space Transformation
- Define state variables: $x_1 = p$ (position), $x_2 = \dot{p}$ (velocity).
- Dynamics of state variables:
$$ \dot{x}_1 = x_2, \quad \dot{x}_2 = u $$
Now we can rewrite our model in State-Space Form:
- State vector (position and velocity):
$$ x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} p \\ \dot{p} \end{bmatrix} $$
- Dynamics in vector form:
$$ \dot{x} = \begin{bmatrix} x_2 \\ u \end{bmatrix} $$
- Dynamics in matrix form:
$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u $$
Try to rewrite the vector form as this matrix-vector product yourself.
Remember that [[Matrix-vector multiplication]] is a [[linear combination]] of the matrix's columns, and therefore we can expand it row by row to recover the original equations.
Read more: https://mbernste.github.io/posts/matrix_vector_mult/
- We can rename our output in general as $y$ (e.g., the measured position):
$$ y = x_1 = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$
Summary:
A.2.2.1) State-Space Generalization
Matrices:
- A: System dynamics.
- B: Input matrix.
- C: Output matrix.
General State-Space Equations:
- State Dynamics: $\dot{x} = Ax + Bu$
- Output: $y = Cx$
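The general equations above can be sanity-checked numerically. Here is a minimal sketch (my own illustration, not from the lecture) that builds the point-mass $A$, $B$, $C$ matrices and forward-Euler-integrates $\dot{x} = Ax + Bu$ under constant acceleration, which should recover the familiar $p = \tfrac{1}{2}t^2$, $v = t$:

```python
import numpy as np

# Point mass on a line: state x = [position, velocity], input u = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def simulate(x0, u_func, dt=1e-3, T=2.0):
    """Forward-Euler integration of x_dot = A x + B u; returns the final state."""
    steps = int(round(T / dt))
    x = np.array(x0, dtype=float).reshape(2, 1)
    for k in range(steps):
        u = np.array([[u_func(k * dt)]])
        x = x + dt * (A @ x + B @ u)
    return x

# Constant acceleration u = 1 from rest: p(T) ≈ T^2/2, v(T) ≈ T.
xT = simulate([0.0, 0.0], lambda t: 1.0, T=2.0)
print(xT.ravel())   # ≈ [2.0, 2.0]
print(C @ xT)       # output y = measured position
```

The small mismatch from the exact values comes only from the Euler discretization, not from the model.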
A.3) Example: 2D Point Mass (model with State-Space Equations)
System Description
- Position: $x$ and $y$.
- Inputs: accelerations $u_1$ and $u_2$.
Extended State Variables
- $x_1 = x$ (position in $x$-direction).
- $x_2 = \dot{x}$ (velocity in $x$-direction).
- $x_3 = y$ (position in $y$-direction).
- $x_4 = \dot{y}$ (velocity in $y$-direction).
- $u_1$ (controlled acceleration in $x$-direction).
- $u_2$ (controlled acceleration in $y$-direction).
- $y_1$ (position in $x$-direction, observed output). //same as x_1
- $y_2$ (position in $y$-direction, observed output). //same as x_3
Write the matrices:
A.3.1) State-Space Matrices
- State vector:
$$ x = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T $$
- Dynamics:
$$ \dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} u $$
- Output:
$$ y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x $$
Again, these fit our [[General State-Space Equations]]: $\dot{x} = Ax + Bu$, $y = Cx$.
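As a quick check of the 2D point-mass model (a sketch of my own, assuming the matrices above), the code below builds $A$, $B$, $C$, verifies their dimensions, and evaluates $\dot{x} = Ax + Bu$ for an arbitrary state/input pair:

```python
import numpy as np

# 2D point mass: n = 4 states, m = 2 inputs, p = 2 outputs.
A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, 0],
              [1, 0],
              [0, 0],
              [0, 1]], dtype=float)
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

n, m, p = 4, 2, 2
assert A.shape == (n, n)   # state dynamics: n x n
assert B.shape == (n, m)   # input mapping: n x m
assert C.shape == (p, n)   # output mapping: p x n

# x_dot = A x + B u for an arbitrary state and input:
x = np.array([1.0, 2.0, 3.0, 4.0])
u = np.array([5.0, 6.0])
print(A @ x + B @ u)   # x1_dot = x2, x2_dot = u1, x3_dot = x4, x4_dot = u2
```

Reading the printed vector against the definitions confirms that the two axes are fully decoupled double integrators.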
A.4) Conclusion: LTI Systems and Dimensions
[[Linear Time-Invariant (LTI) Systems]]: Foundation for analyzing robotic control systems.
General Dimensions
- State Vector ($x$): $x \in \mathbb{R}^n$.
- Input Vector ($u$): $u \in \mathbb{R}^m$.
- Output Vector ($y$): $y \in \mathbb{R}^p$.
Matrix Dimensions
- $A$: $n \times n$ (State dynamics).
- $B$: $n \times m$ (Input mapping).
- $C$: $p \times n$ (Output mapping).
Dimensional Validations
State-space representation:
- Compact and general.
- Allows for systematic analysis.
The A, B, and C matrices encapsulate system specifics.
Next lecture: Origins and derivations of these systems.
B) State-Space Models (Linear vs Non-Linear Systems) (🎦 video 3.2)
Video: https://youtu.be/W6AUOyj5bFA
B.1) Recap: Linear Time-Invariant (LTI) Systems in State-Space Form
This lecture explores the generality of the LTI model:
Key components:
- State ($x$): Describes the current system behavior.
- Input ($u$): Controls or influences the system.
- Output ($y$): Observables or measurements of the system.
System Matrices:
- $A$: Encodes the system's inherent dynamics (e.g., physics).
- $B$: Describes how inputs influence the system (e.g., actuators).
- $C$: Describes how outputs are measured (e.g., sensors).
Note: the system dynamics ($A$) are typically dictated by physics, while $B$ and $C$ reflect design choices about actuators and sensors.
Question: How do we select the input $u$ so the system behaves the way we want? That is the control design problem.
B.2) Revisiting the Car Model in State Space Form
B.2.1) Example1: Designing a Cruise Controller (velocity measurement)
Remember from [[Module 1 - Introduction to Controls and Dynamical Models]]
Dynamic Equation:
$$ \dot{v} = \frac{c}{m} u - \gamma v $$
With output $y = v$ (we measure the velocity), this is already a scalar LTI system:
$$ A = -\gamma, \quad B = \frac{c}{m}, \quad C = 1 $$
If instead we also track the position $p$ (with $\dot{p} = v$) and measure position, define the state:
$$ x = \begin{bmatrix} p \\ v \end{bmatrix} $$
$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & -\gamma \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{c}{m} \end{bmatrix} u $$
$$ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$
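A quick simulation makes the cruise-control model concrete. The constants below ($c$, $m$, $\gamma$) are illustrative values of my own, not from the lecture; under constant throttle the velocity should settle at the steady state $v^* = \frac{c}{m}\frac{u}{\gamma}$:

```python
import numpy as np

# Cruise-control model: state x = [p, v]; illustrative constants (not from the lecture).
c, m, gamma = 1.0, 1.0, 0.5
A = np.array([[0.0, 1.0],
              [0.0, -gamma]])
B = np.array([[0.0],
              [c / m]])
C = np.array([[1.0, 0.0]])

# Forward-Euler simulation with constant throttle u.
dt, T, u = 1e-3, 30.0, 1.0
x = np.zeros((2, 1))
for _ in range(int(T / dt)):
    x = x + dt * (A @ x + B * u)

# Velocity settles at the steady state v* = (c/m) * u / gamma.
print(x[1, 0])   # ≈ 2.0
```

Setting $\dot{v} = 0$ in the dynamic equation gives the same $v^*$ analytically, so the simulation and the model agree.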
B.3) Example: Pendulum
- Dynamics (nonlinear, with input gain $c$):
$$ \ddot{\theta} = -\frac{g}{L} \sin(\theta) + c\,u $$
- Around a small angle, $\sin(\theta) \approx \theta$, so define the deviation state:
$$ \delta x = \begin{bmatrix} \delta\theta \\ \delta\dot{\theta} \end{bmatrix} $$
- Dynamics (around a small $\theta$):
$$ \delta\dot{x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{L} & 0 \end{bmatrix} \delta x + \begin{bmatrix} 0 \\ c \end{bmatrix} u $$
- Output (Pendulum angle):
$$ y = \begin{bmatrix} 1 & 0 \end{bmatrix} \delta x $$
B.4) Example: Two Robots on a Line (Swarm robotics)
And we can control the velocities of these robots:
$$ \dot{x}_1 = u_1, \quad \dot{x}_2 = u_2 $$
- State Variables:
  - $x_1$: Position of Robot 1.
  - $x_2$: Position of Robot 2.
- Dynamics:
$$ \dot{x} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} u $$
- Output (we measure the positions of each robot):
$$ y = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x = x $$
B.4.1) The Rendezvous Problem
- Objective: Robots meet at the same position.
- Control Law (each robot aims at the other):
$$ u_1 = x_2 - x_1, \quad u_2 = x_1 - x_2 $$
- Closed-Loop Dynamics:
$$ \dot{x} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} x $$
- Outcome: Robots move toward each other and meet.
Note: we will discuss this problem further when we introduce [[Stability of Linear Systems]] and the [[Consensus Equation]].
B.5) Unicycle Robot (non-linear system)
- Nonlinear Dynamics:
$$ \dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega $$
B.5.1) Challenges - small angle is not a silver bullet
Small Angle Approximation (dumb idea, but let's try it): assume $\theta \approx 0$, so $\cos\theta \approx 1$ and $\sin\theta \approx \theta$.
- Nonlinear terms remain (e.g., $\dot{y} \approx v\,\theta$, a product of an input and a state).
- The simplification does not lead to a linear system.
B.6) Key Takeaways: LTI and Non Linear Models
- LTI Models:
  - General representation of dynamic systems.
  - Compactly encode system dynamics with $A$, $B$, $C$ matrices.
- Simplifications:
  - Approximations (e.g., small angles) may not always yield valid LTI models.
- Next Steps:
  - Develop systematic approaches for deriving LTI models from [[nonlinear systems]].
C) Producing Linear Models from Nonlinear Systems (🎦 video 3.3)
Video: https://youtu.be/Fg7Vb3haACk
C.1) Introduction: Linearizations
Linearization: The process of creating linear models from nonlinear systems.
Analogy: Classifying systems as "linear" and "nonlinear" is like classifying objects as "bananas" and "non-bananas":
- In other words, most systems are nonlinear, but many behave like linear systems around specific operating points.
Goal:
- Generate linear models from nonlinear state-space models
- Create local descriptions of these nonlinear systems around operating points.
Let's see how to do this.
C.2) Process of Linearization
Here is a [[non-linear model]]:
$$ \dot{x} = f(x, u), \quad y = h(x) $$
where:
- $f(x, u)$ is a nonlinear function,
- $h(x)$ is also a nonlinear function.
C.2.1) Find a "local description" around a operating point
- Define Operating Points: Choose points where the system operates (e.g., pendulum straight down or up).
- The actual state is the operating point ($x_o$) plus a small deviation ($\delta x$): $$ x = x_o + \delta x $$
- The control input is the nominal operating input ($u_o$) plus a small deviation ($\delta u$): $$ u = u_o + \delta u $$
- Linearize the Model:
  - The new equations of motion become (assuming the operating point is an equilibrium, so $\dot{x}_o = 0$): $$ \delta \dot{x} = \dot{x} - \dot{x}_o = \dot{x} - 0 = f(x_o + \delta x, u_o + \delta u) $$
  - Taylor expand $f(x_o + \delta x, u_o + \delta u)$ around $(x_o, u_o)$:
$$ f(x_o + \delta x, u_o + \delta u) \approx f(x_o, u_o) + \frac{\partial f}{\partial x}\Big|_{(x_o, u_o)} \delta x + \frac{\partial f}{\partial u}\Big|_{(x_o, u_o)} \delta u + \text{h.o.t.} $$
Remember from: [[Module 1 - Introduction to Controls and Dynamical Models]]
Read more about the [[Taylor Series]] here
So, with the following assumptions, we can simplify:
- the operating point is an equilibrium, so $f(x_o, u_o) = 0$,
- the deviations $\delta x$, $\delta u$ are small, so higher-order terms are negligible.
Then, our Linear Approximation becomes:
$$ \delta \dot{x} \approx \frac{\partial f}{\partial x}\Big|_{(x_o, u_o)} \delta x + \frac{\partial f}{\partial u}\Big|_{(x_o, u_o)} \delta u $$
- Where the [[Jacobians]] are:
$$ A = \frac{\partial f}{\partial x}\Big|_{(x_o, u_o)}, \quad B = \frac{\partial f}{\partial u}\Big|_{(x_o, u_o)} $$
- Similarly for output:
$$ \delta y \approx C\,\delta x $$
- Where the [[Jacobian]] is:
$$ C = \frac{\partial h}{\partial x}\Big|_{x_o} $$
- Result:
  - Linearized system: $$ \delta \dot{x} = A \delta x + B \delta u, \quad \delta y = C \delta x $$
Summary:
Now we can compute the [[Jacobians]] for any concrete $f$ and $h$.
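The Jacobian recipe above can also be checked numerically with finite differences. This is a sketch of my own (central differences, not the lecture's symbolic derivation), applied to the inverted-pendulum dynamics used later, with assumed constants $g = 9.81$, $L = 1$:

```python
import numpy as np

# Inverted pendulum: x = [theta, theta_dot], x_dot = f(x, u).
g, L = 9.81, 1.0

def f(x, u):
    return np.array([x[1], (g / L) * np.sin(x[0]) + u * np.cos(x[0])])

def jacobians(f, x_o, u_o, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du at an operating point."""
    n = len(x_o)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_o + dx, u_o) - f(x_o - dx, u_o)) / (2 * eps)
    B = (f(x_o, u_o + eps) - f(x_o, u_o - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

# Operating point: pendulum upright and at rest, zero input.
x_o, u_o = np.zeros(2), 0.0
A, B = jacobians(f, x_o, u_o)
print(A)   # ≈ [[0, 1], [g/L, 0]]
print(B)   # ≈ [[0], [1]]
```

The numerical Jacobians match the analytic ones derived in the next example, which is a useful cross-check whenever the symbolic differentiation gets messy.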
C.3) Example 1: Inverted Pendulum
C.3.1) System Description (Inverted Pendulum)
- Dynamics:
$$ \ddot{\theta} = \frac{g}{L} \sin(\theta) + u \cos(\theta) $$
- Output (the angle of the pendulum): $y = \theta$.
Therefore, with the state $x = \begin{bmatrix} \theta & \dot{\theta} \end{bmatrix}^T$, the [[non-linear system]] can be written like this:
$$ \dot{x} = f(x, u) = \begin{bmatrix} x_2 \\ \frac{g}{L}\sin(x_1) + u\cos(x_1) \end{bmatrix}, \quad y = h(x) = x_1 $$
C.3.2) Linearization (Inverted Pendulum) - good model!
Pick an operating point, for example, when the pendulum is straight and balanced,
- Operating Point: $\theta_o = 0$ (upright), $\dot{\theta}_o = 0$, $u_o = 0$, so $f(x_o, u_o) = 0$.
- Linearized Matrices ([[Jacobians]]):
Let's compute $A$:
$$ A = \frac{\partial f}{\partial x}\Big|_{(x_o, u_o)} = \begin{bmatrix} 0 & 1 \\ \frac{g}{L}\cos(x_1) - u\sin(x_1) & 0 \end{bmatrix}\Bigg|_{(x_o, u_o)} = \begin{bmatrix} 0 & 1 \\ \frac{g}{L} & 0 \end{bmatrix} $$
Let's compute $B$:
$$ B = \frac{\partial f}{\partial u}\Big|_{(x_o, u_o)} = \begin{bmatrix} 0 \\ \cos(x_1) \end{bmatrix}\Bigg|_{x_o} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} $$
Let's compute $C$:
$$ C = \frac{\partial h}{\partial x}\Big|_{x_o} = \begin{bmatrix} 1 & 0 \end{bmatrix} $$
After computing the [[Jacobians]], we get a Linearized system around the Operating Point:
$$ \delta\dot{x} = \begin{bmatrix} 0 & 1 \\ \frac{g}{L} & 0 \end{bmatrix} \delta x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \delta u, \quad \delta y = \begin{bmatrix} 1 & 0 \end{bmatrix} \delta x $$
We can expand this model like this:
$$ \delta\dot{x}_1 = \delta x_2, \quad \delta\dot{x}_2 = \frac{g}{L}\,\delta x_1 + \delta u $$
finally,
$$ \delta\ddot{\theta} = \frac{g}{L}\,\delta\theta + \delta u $$
Interpretation:
- $\delta\dot{x}_1 = \delta x_2$: the angular velocity (as expected from the state representation).
- $\delta\ddot{\theta} = \frac{g}{L}\delta\theta + \delta u$: this is the linearized second-order equation of motion. This term shows that the pendulum's angular acceleration ($\delta\ddot{\theta}$) depends on:
  - the angular displacement ($\delta\theta$) scaled by $\frac{g}{L}$,
  - the input control ($\delta u$).
C.4) Example 2: Unicycle Robot (non-linear system)
C.4.1) System Description (Unicycle Robot)
- Dynamics:
$$ \dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega $$
- State Variables: $x$, $y$ (position) and $\theta$ (heading).
In other words, the state is $\begin{bmatrix} x & y & \theta \end{bmatrix}^T$.
- Inputs to the Robot: $v$ (linear velocity) and $\omega$ (angular velocity).
- Output that we can measure (thanks to [[odometry]]): the pose $\begin{bmatrix} x & y & \theta \end{bmatrix}^T$.
C.4.2) Linearization (Unicycle Robot) - it still doesn't give us a good model
- Operating Point:
  - straight at the origin, looking in the $x$ direction: $x_o = 0$, $y_o = 0$, $\theta_o = 0$,
  - without moving: $v_o = 0$, $\omega_o = 0$.
- We then obtain the Linearized Matrices:
$$ A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} \cos\theta_o & 0 \\ \sin\theta_o & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} $$
And if we use these matrices for our linearized system around the operating point, we get:
$$ \delta\dot{x} = \delta v, \quad \delta\dot{y} = 0, \quad \delta\dot{\theta} = \delta\omega $$
Which is really weird, since the small change of position in the $y$ direction ($\delta\dot{y} = 0$) does not respond to any input.
Understanding the Result:
- $\delta\dot{y} = 0$:
  - This implies that small changes in the control inputs (linear velocity $\delta v$ or angular velocity $\delta\omega$) have no direct effect on $\delta y$.
  - Why? The model is linearized at a point where the robot is not moving and the heading angle is zero. In this scenario:
    - Motion in the $y$-direction arises from the coupling between angular motion ($\theta$) and linear velocity ($v$).
    - Since $v_o = 0$, changes in $\theta$ do not result in $y$-direction motion in the linearized approximation.
- Physical Interpretation:
  - At the operating point:
    - Small changes in the linear velocity $\delta v$ only affect motion along the $x$-axis.
    - Small changes in the angular velocity $\delta\omega$ result in changes in orientation $\theta$, but without coupling to $y$ in the linearized model.
    - This means that in this linearized framework, the unicycle cannot directly move in the $y$-direction.
- Limitations of the Linearization:
  - The linearized model loses important nonlinear dynamics, such as how angular velocity over time contributes to motion in the $y$-direction via a curved trajectory.
  - Reality: if the robot rotates (nonlinear behavior), its orientation changes, leading to movement in the $y$-direction. However, this is not captured in the linearized dynamics around the rest state.
Conclusion:
This linearized model is valid at the specific operating point, but it does not mean the robot is fundamentally incapable of moving in the $y$-direction.
- Linearization does not fully capture nonlinear dynamics.
- Example Issue:
  - The linearized model suggests no control over the $y$ direction at $v_o = 0$.
  - The nonlinear system can achieve motion in $y$ by turning and driving.
C.5) Key Takeaways: Linearization Process
- Linearization Process:
  - Approximate nonlinear systems around operating points.
  - Resulting models are simpler but only valid locally.
- Challenges:
  - Linearization may fail to capture essential nonlinear behaviors (e.g., unicycle model).
- Practical Use:
  - Linear models provide insights and simplify analysis.
  - Lessons from linear models often apply to nonlinear systems with modifications.
- Next Steps:
  - Explore systematic methods for deriving useful linear models.
D) Behavior and Solutions of Homogeneous LTI Systems (doing the Math) (🎦 video 3.4)
Video: https://youtu.be/gzNy54XDur8
D.1) Introduction: Studying LTI System Behavior
In this lecture, we focus on understanding how [[LTI systems]] behave by finding their solutions and examining their dynamics.
The solutions we derive will allow us to talk about the system's behavior systematically.
Starting Point: Simplifying the System. We begin by simplifying the system:
- ignoring input and output,
- we focus purely on the system's inherent behavior, described by:
$$ \dot{x} = Ax $$
- Assume an initial condition $x(t_0) = x_0$ at some starting time $t_0$.
This setup represents the physical dynamics of the system without external interference.
It is also known as a [[homogeneous linear system]], as there is no external forcing term.
D.2) Case 1: Using Scalars to Build Intuition
Let's first analyze a scalar version of the [[homogeneous system]], where $x$ and $a$ are scalars:
$$ \dot{x} = ax $$
Note: this is a [[first-order linear differential equation]], and to solve it we can notice it is a [[separable differential equation]], so we apply that method to find a solution.
Characteristics of an ODE:
- Definition: An ODE involves functions of a single independent variable (e.g., $t$) and their derivatives.
- Order: The order of an ODE is the highest derivative present in the equation.
- Linear: An ODE is linear if the dependent variable ($x$) and its derivatives appear only to the first power, and there are no products of $x$ and $\dot{x}$.
Example: the equation
$$ \dot{x} = ax $$
is a [[first-order linear ordinary differential equation]].
- In this case, the independent variable is $t$, and $x$ is the dependent variable.
- Here, $\dot{x}$ (or $\frac{dx}{dt}$) is the first derivative of $x$, so it is a first-order ODE.
- This equation is linear because $x$ appears as $ax$, with no powers or products involving $\dot{x}$.
- Separable: The equation can be solved by separating the variables $x$ and $t$ (dependent and independent variables).
- This equation is linear because
Video: https://youtu.be/ccRJtV6XWQE
Video: https://youtu.be/irtzsCr6k8M
Other methods: https://youtu.be/NLYpSMpSuMg
The solution to this equation is:
$$ x(t) = e^{a(t - t_0)} x(t_0) $$
where:
- $x(t_0)$ is the initial condition at $t = t_0$,
- $e^{a(t - t_0)}$ is the scalar exponential.
For the differential equation
$$ \frac{dx}{dt} = ax $$
we can see this is a first-order linear separable ordinary differential equation. The method used to solve it is separation of variables. Here's how it is solved step by step:
Step 1: Write the equation
Rewriting the equation in differential form:
$$ dx = ax \, dt $$
Step 2: Separate the variables
Rearrange the equation to isolate terms involving $x$ on one side and $t$ on the other:
$$ \frac{dx}{x} = a \, dt $$
Step 3: Integrate both sides
$$ \int \frac{dx}{x} = \int a \, dt $$
The left-hand side gives: $\ln|x| + C_1$.
The right-hand side gives: $at + C_2$.
So:
$$ \ln|x| = at + C $$
Step 4: Solve for $x$
Exponentiate both sides to eliminate the natural logarithm:
$$ |x| = e^{at + C} $$
Using the properties of exponents:
$$ |x| = e^{C} e^{at} $$
Let $K = \pm e^{C}$, so:
$$ x(t) = K e^{at} $$
Step 5: Apply the initial condition
If the initial condition is $x(0) = x_0$:
$$ x_0 = K e^{a \cdot 0} = K $$
Solve for $K$: $K = x_0$.
Substitute $K$ back into the solution.
Final Solution
$$ x(t) = x_0 e^{at} $$
Where:
- $x_0$ is the initial condition at $t = 0$,
- $e^{at}$ represents exponential growth (if $a > 0$) or decay (if $a < 0$).
D.2.1) Verifying the Solution of the ODE
To confirm this solution:
- Check the initial condition:
  Substitute $t = 0$: $x(0) = x_0 e^{a \cdot 0} = x_0$. This satisfies the initial condition.
- Check the dynamics:
  Compute the time derivative: $\dot{x}(t) = \frac{d}{dt}\left( x_0 e^{at} \right) = a x_0 e^{at}$. Remember...
Video: https://youtu.be/m2MIpDrF7Es
Then, substituting back: $\dot{x}(t) = a \left( x_0 e^{at} \right) = a x(t)$.
This matches the original equation $\dot{x} = ax$.
Thus, the solution is valid. ✅
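The same verification can be done numerically. This small sketch (my own illustration, with an arbitrary $a$ and $x_0$) evaluates $x(t) = x_0 e^{at}$ on a grid and checks both the initial condition and the dynamics $\dot{x} = ax$ via a numerical derivative:

```python
import numpy as np

# Numerically verify that x(t) = x0 * exp(a t) solves x_dot = a x.
a, x0 = -0.7, 3.0
t = np.linspace(0.0, 5.0, 1001)
x = x0 * np.exp(a * t)

# Initial-condition check: x(0) = x0.
assert x[0] == x0

# Dynamics check: numerical derivative vs a*x (away from the endpoints,
# where np.gradient falls back to one-sided differences).
dxdt = np.gradient(x, t)
residual = np.max(np.abs(dxdt[1:-1] - a * x[1:-1]))
print(residual)   # small: only finite-difference discretization error remains
```

With $a < 0$ the curve decays, matching the "decay if $a < 0$" remark above.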
D.3) Case 2: Solving the Matrix Case (Extending to Higher Dimensions)
For systems where $x \in \mathbb{R}^n$ and $A$ is an $n \times n$ matrix:
$$ \dot{x} = Ax $$
The solution remains similar but uses a [[matrix exponential]]:
$$ x(t) = e^{A(t - t_0)} x(t_0) $$
Here, $e^{A(t - t_0)}$ is itself an $n \times n$ matrix.
This extends the scalar exponential to matrices.
D.3.1) Why Matrix Exponentials Work
Definition of the exponential as a [[Taylor Series]] expansion:
- For a scalar exponential:
$$ e^{at} = \sum_{k=0}^{\infty} \frac{(at)^k}{k!} = 1 + at + \frac{(at)^2}{2!} + \cdots $$
- For a matrix $A$:
$$ e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = I + At + \frac{(At)^2}{2!} + \cdots $$
The [[matrix exponential]] allows us to generalize the behavior of exponentials to multidimensional systems.
Especially, we care about the derivative; we can do term-by-term differentiation of this infinite series:
$$ \frac{d}{dt} e^{At} = \sum_{k=1}^{\infty} \frac{A^k t^{k-1}}{(k-1)!} $$
Note 1: take the derivative with respect to $t$ (use the [[power rule]]).
Note 2: we can re-index the summation to simplify the series:
$$ \frac{d}{dt} e^{At} = A \sum_{j=0}^{\infty} \frac{(At)^j}{j!} = A e^{At} $$
Important observations:
- The time derivative of the [[matrix exponential]] behaves like a [[scalar exponential]]: $\frac{d}{dt} e^{At} = A e^{At}$.
- Identity at $t = 0$: it simplifies to the identity matrix, $e^{A \cdot 0} = I$.
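Both observations can be checked numerically. The sketch below (my own, assuming SciPy is available) builds the truncated Taylor series for $e^{At}$ and compares it against `scipy.linalg.expm`, then verifies the identity at $t = 0$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.5

def expm_series(A, t, K=30):
    """Truncated Taylor series: e^{At} ≈ sum_{k=0}^{K} (At)^k / k!"""
    term = np.eye(A.shape[0])    # k = 0 term
    total = term.copy()
    for k in range(1, K + 1):
        term = term @ (A * t) / k    # builds (At)^k / k! incrementally
        total += term
    return total

E_series = expm_series(A, t)
E_scipy = expm(A * t)
print(np.max(np.abs(E_series - E_scipy)))   # ≈ 0: the series converges to expm

# Identity at t = 0.
print(expm_series(A, 0.0))   # the 2x2 identity matrix
```

For well-scaled $\|At\|$ the series converges quickly; production code should still prefer `expm`, which handles poorly scaled matrices more robustly.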
D.3.2) State Transition Matrix (fancy name for the Matrix Exponential)
The matrix exponential $e^{A(t - t_0)}$ is also called the [[state transition matrix]], written $\Phi(t, t_0)$.
So we can rewrite the solution of our differential equation using this new definition:
$$ x(t) = \Phi(t, t_0) x(t_0) $$
We can use $\Phi(t, 0) = e^{At}$, or in general, use $\Phi(t, \tau) = e^{A(t - \tau)}$.
Just as before, the properties of the [[matrix exponential]] still apply: $\frac{\partial}{\partial t}\Phi(t, \tau) = A\,\Phi(t, \tau)$ and $\Phi(t, t) = I$.
D.4) Including Inputs: Find the General Solution for a Controlled System
For systems with inputs:
$$ \dot{x} = Ax + Bu $$
The solution incorporates the input's influence:
$$ x(t) = e^{A(t - t_0)} x(t_0) + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau $$
Key Terms:
- $e^{A(t - t_0)} x(t_0)$: the [[state transition matrix]] acting on the initial condition.
- $\int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau$: the effect of the input on the state over time. Also known as a [[Convolution Integral]].
D.4.1) Validation of General Solution
- Check Initial Condition:
  At $t = t_0$, the integral term vanishes: $x(t_0) = e^{A \cdot 0} x(t_0) = x(t_0)$. This satisfies the initial condition.
- Check Dynamics:
  Differentiating:
  - The first term contributes $A e^{A(t - t_0)} x(t_0)$.
  - The second term requires [[Leibniz's Integral rule]] for differentiation under the integral sign:
$$ \frac{d}{dt} \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau = B u(t) + A \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau) \, d\tau $$
  After simplification, this matches the equation $\dot{x} = Ax + Bu$.
Thus, the solution is valid. ✅
D.5) Using the General Solution for the Output Equation
General LTI System:
- State dynamics: $\dot{x} = Ax + Bu$, with General Solution:
$$ x(t) = \Phi(t, t_0) x(t_0) + \int_{t_0}^t \Phi(t, \tau) B u(\tau) \, d\tau, \quad \Phi(t, t_0) = e^{A(t - t_0)} $$
- Output: $y = Cx$, so:
$$ y(t) = C \Phi(t, t_0) x(t_0) + C \int_{t_0}^t \Phi(t, \tau) B u(\tau) \, d\tau $$
E) Stability of Linear Systems (🎦 video 3.5)
For the scalar system
$$ \dot{x} = ax, \quad x(t) = e^{at} x(0), $$
stability is read directly off the sign of $a$:
- [[Asymptotic Stability]]:
  - The system is asymptotically stable if $a < 0$:
$$ x(t) \to 0 \quad \forall x(0) $$
- [[Instability]]:
  - The system is unstable if initial conditions that blow up the system exist; in the scalar case, $a > 0$ makes every nonzero $x(0)$ blow up.
- [[Critical Stability]]:
  - The system is critically stable if $a = 0$: $x(t) = x(0)$ neither blows up nor converges to zero.
E.4) Generalizing to Matrices: how do we determine stability?
Systems of the Form (with no controller input):
$$ \dot{x} = Ax $$
remember, these systems have the solution:
$$ x(t) = e^{A(t - t_0)} x(t_0) $$
- Here, $x$ is a vector, and $A$ is a matrix.
- For matrices, we can't just say $A < 0$ or $A > 0$, because those comparisons don't apply to matrices.
- Instead, we use [[eigenvalues]]! But why?!
E.4.1) Recap: What are Eigenvalues?
For those of you who don't know, or do not remember, what [[eigenvalues]] are:
What Are Eigenvalues?
- [[Eigenvalues]] ($\lambda$) are special numbers associated with matrices that help describe how matrices transform vectors.
- If you have a matrix $A$ and a vector $v$, and their product can be expressed as:
$$ Av = \lambda v $$
- This means that applying $A$ to $v$ scales the vector by the factor $\lambda$ (the eigenvalue).
- The vector $v$ is called an [[eigenvector]], and $\lambda$ is the corresponding [[eigenvalue]].
Matrix Exponentials: https://youtu.be/O85OWBJ2ayo
Diagonalization: https://youtu.be/yJ3EfoJmTFg
Diagonalization: https://youtu.be/jO-Upx5dfmQ
Key Points About Eigenvalues:
-
Nature:
- [[Eigenvalues]] can be complex numbers (not limited to real numbers).
- They generalize the concept of scaling factors for transformations in higher dimensions.
- [[Eigenvalues]] can be complex numbers (not limited to real numbers).
-
Intuition:
- [[Eigenvalues]] describe how the matrix $A$ acts in the directions of its [[eigenvectors]].
- Along an [[eigenvector]] direction, the matrix scales the vector by $\lambda$.
- You can think of [[eigenvectors]] as "directions" and [[eigenvalues]] as "magnitudes" in those directions.
Why Are Eigenvalues Important?
- They are fundamental tools for understanding matrix behavior in systems.
- In many cases, matrix systems can be thought of as a combination of scalar systems in the directions of eigenvectors.
How to Compute Eigenvalues?
- If you're using computational tools, finding eigenvalues is straightforward:
  - MATLAB: use the `eig(A)` command to compute the eigenvalues of a matrix.
  - Python (NumPy): use `np.linalg.eig(A)`.
  - Most programming languages or libraries for linear algebra have similar commands.
- You can also get the [[eigenvalues]] by hand, by solving the [[characteristic equation]]:
$$ \det(A - \lambda I) = 0 $$
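Both routes can be compared on a small example (a matrix of my own choosing). `np.linalg.eig` returns the eigenvalues together with the eigenvectors as columns, and `np.roots` solves the characteristic polynomial by hand:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Numerically: eig returns (eigenvalues, eigenvectors-as-columns).
eigvals, eigvecs = np.linalg.eig(A)
print(sorted(eigvals.real))   # [-2.0, -1.0]

# Check the defining property A v = λ v for each pair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# By hand: det(A - λI) = λ^2 + 3λ + 2 = (λ + 1)(λ + 2) = 0.
print(sorted(np.roots([1, 3, 2])))   # same two roots
```

The assertion inside the loop is exactly the definition $Av = \lambda v$ from the recap above.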
E.4.2) Example: Matrix Systems (Saddle Points and Instability)
For the system:
$$ \dot{x} = Ax $$
we know the solution is:
$$ x(t) = e^{At} x(0) $$
We also know that the matrix exponential is given by the infinite series:
$$ e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots $$
Now... computing $e^{At}$ from this infinite series is hard in general.
But... there is one category of $A$ matrices for which the computation is easy:
these are the [[diagonal matrices]].
Properties of Diagonal Matrices:
There are some interesting properties of diagonal matrices.
First, recall the definition of the [[eigenvectors]] and [[eigenvalues]] of a matrix:
$$ Av = \lambda v $$
which can be rewritten as:
$$ (A - \lambda I)v = 0 $$
where the [[eigenvalues]] are calculated using:
$$ \det(A - \lambda I) = 0 $$
It turns out that for [[diagonal matrices]], the [[eigenvalues]] are simply the diagonal entries, and the corresponding [[eigenvectors]] are the standard basis vectors.
For example, consider the following [[diagonal matrix]]:
$$ A = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} $$
where $e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is the eigenvector associated with $\lambda_1$, and $e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is the eigenvector associated with $\lambda_2$.
How does this property help us?
Well, consider the same [[diagonal matrix]] $A$ as part of our model:
$$ \dot{x} = Ax $$
The computations simplify, because powers of a diagonal matrix act entry-wise on the diagonal:
$$ e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = \begin{bmatrix} \sum_{k} \frac{(\lambda_1 t)^k}{k!} & 0 \\ 0 & \sum_{k} \frac{(\lambda_2 t)^k}{k!} \end{bmatrix} $$
We can notice that these series are special since they fit the definition of the scalar exponential.
Therefore, we can simplify even further and say:
$$ e^{At} = \begin{bmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{bmatrix} $$
Note: This analytical solution saves us from computing the infinite series.
Visualize this solution:
$$ x(t) = e^{At} x(0) $$
which simplifies to:
$$ x_1(t) = e^{\lambda_1 t} x_1(0), \quad x_2(t) = e^{\lambda_2 t} x_2(0) $$
With one positive and one negative eigenvalue (e.g., $\lambda_1 > 0 > \lambda_2$), this creates the Saddle Point in the [[Phase portrait]].
This solution gives us the following intuition:
- The eigenvalues of the diagonal matrix ($\lambda_1$ and $\lambda_2$) directly control the behavior of the system, as seen in the exponential terms $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$.
- Diagonal matrices simplify matrix exponentials because there is no need for off-diagonal computations, reducing computational overhead significantly.
The Importance of Diagonal Matrices
Since we now see the beauty and importance of the diagonal matrix and how it helps us compute the solution efficiently, we should aim to make all our systems resemble a diagonal matrix.
A nice trick: Change of Basis
To transform a matrix, we can use the following trick:
Let's say there exists an invertible matrix $P$ such that:
$$ A = P D P^{-1} $$
where:
- $A$: The original "ugly" matrix.
- $P$: The change-of-basis matrix (constructed from the eigenvectors of $A$).
- $D$: A "pretty" diagonal matrix containing the eigenvalues of $A$.
How it works - Steps:
- Change of Basis: use $P^{-1}$ to map a vector into the eigenvector coordinates of $A$.
- Scaling: scale each coordinate by the corresponding eigenvalue (the diagonal entries of $D$).
- Return to Original Basis: transform back to the original basis using $P$.
Extending the Trick to Power Matrices
What is nice about this trick is that it extends to power matrices.
If:
$$ A = P D P^{-1} $$
then:
$$ A^2 = P D P^{-1} P D P^{-1} $$
which simplifies to:
$$ A^2 = P D^2 P^{-1} $$
By induction, we can generalize:
$$ A^k = P D^k P^{-1} $$
This highlights the computational advantage of diagonalization, as $D^k$ only requires raising the diagonal entries to the $k$-th power.
Simplifying $e^{At}$
The nice thing is that now we can rewrite $e^{At}$ using the series definition.
Substituting $A^k = P D^k P^{-1}$:
$$ e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = \sum_{k=0}^{\infty} \frac{P D^k P^{-1} t^k}{k!} $$
By associativity of matrix multiplication (and factoring out the scalars $t^k / k!$), this simplifies to:
$$ e^{At} = P \left( \sum_{k=0}^{\infty} \frac{(Dt)^k}{k!} \right) P^{-1} $$
Recognizing the term in parentheses as $e^{Dt}$:
$$ e^{At} = P e^{Dt} P^{-1} $$
General Case for $e^{Dt}$
If $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$, then $e^{Dt} = \mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t})$.
Thus, the final result is:
$$ e^{At} = P \, \mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t}) \, P^{-1} $$
General Solution for the System
For the system:
$$ \dot{x} = Ax $$
the solution is:
$$ x(t) = e^{At} x(0) $$
If $A$ is diagonalizable, then $e^{At} = P e^{Dt} P^{-1}$,
where:
$$ D = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) $$
and:
$$ P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} $$
where each column $v_i$ is the eigenvector associated with the eigenvalue $\lambda_i$.
Key Insights:
- Simplification via Diagonalization: Diagonalization reduces computational complexity, making operations like powers and exponentials of matrices easier.
- Efficient Computation: computing $e^{At}$ for diagonalizable $A$ is straightforward, leveraging the exponential of a diagonal matrix.
- System Dynamics: the eigenvalues of $A$ directly control the behavior of the system, such as growth, decay, or oscillations, as reflected in terms like $e^{\lambda_i t}$.
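The formula $e^{At} = P e^{Dt} P^{-1}$ can be verified directly. This sketch (my own example matrix, assuming SciPy is available and $A$ is diagonalizable with real eigenvalues) compares the diagonalization route against `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [2.0, 1.0]])
t = 0.7

# Diagonalize: columns of P are eigenvectors, the eigenvalues go on the diagonal of D.
eigvals, P = np.linalg.eig(A)

# e^{At} = P e^{Dt} P^{-1}, with e^{Dt} = diag(e^{λi t}).
E_diag = P @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(P)

# Compare against SciPy's general-purpose matrix exponential.
print(np.max(np.abs(E_diag - expm(A * t))))   # ≈ 0
```

For this $A$ the characteristic equation is $\lambda^2 - \lambda - 2 = 0$, so the eigenvalues are $2$ and $-1$; the positive one tells us immediately that $\dot{x} = Ax$ is unstable, before computing any exponential at all.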
Video: https://youtu.be/jO-Upx5dfmQ
Video: https://youtu.be/yJ3EfoJmTFg
Video: https://youtu.be/EJG6gBeVdfw
E.4.3) Example: Matrix Systems (Imaginary Eigenvalues and Center Fixed Points)
Pendulum (hanging down): the linearization $A = \begin{bmatrix} 0 & 1 \\ -\frac{g}{L} & 0 \end{bmatrix}$ has purely imaginary eigenvalues $\lambda = \pm i\sqrt{g/L}$, giving a center fixed point in the [[Phase portrait]]: trajectories orbit the equilibrium, and oscillations neither grow nor decay (critical stability).
Inverted Pendulum: unstable. The linearization $A = \begin{bmatrix} 0 & 1 \\ \frac{g}{L} & 0 \end{bmatrix}$ has eigenvalues $\lambda = \pm\sqrt{g/L}$; the positive real eigenvalue makes the upright equilibrium a saddle point.
E.4.4) Stability Conditions in Matrix Systems (using Eigenvalues)
For the system $\dot{x} = Ax$, stability is determined by the eigenvalues $\lambda_i$ of $A$.
Conditions:
- Asymptotic Stability:
  - The system is asymptotically stable if and only if: $\mathrm{Re}(\lambda_i) < 0$ for all $i$.
- Instability:
  - The system is unstable if: $\mathrm{Re}(\lambda_i) > 0$ for some $i$.
- Critical Stability:
  - The system is critically stable if: $\mathrm{Re}(\lambda_i) \le 0$ for all $i$, with at least one eigenvalue having $\mathrm{Re}(\lambda) = 0$,
  - OR a pair of purely imaginary eigenvalues while the rest have negative real parts.
E.5) Key Takeaways: Stability of Linear Systems
- Stability depends on the eigenvalues of $A$:
  - Asymptotically Stable: $\mathrm{Re}(\lambda_i) < 0$ for all $i$.
  - Unstable: $\mathrm{Re}(\lambda_i) > 0$ for some $i$.
  - Critically Stable: eigenvalues have $\mathrm{Re}(\lambda_i) \le 0$, with at least one on the imaginary axis.
-
Eigenvalues give insight into system behavior:
- Negative real parts: Stable behavior.
- Positive real parts: Unstable behavior.
- Zero or purely imaginary eigenvalues: Critical stability.
-
Design Goal: Ensure the closed-loop system has eigenvalues with negative real parts to guarantee stability.
F) Swarm Robotics Control: The Consensus Equation (🎦 video 3.6)
Video: https://youtu.be/nTclZAoQ7c0
F.1) Recap: Stability and Eigenvalues
From the previous lecture, we learned that [[eigenvalues]] play a fundamental role in understanding the [[stability]] properties of linear systems.
Specifically:
- If all eigenvalues have strictly negative real parts, the system is asymptotically stable.
- If any eigenvalue has a positive real part, the system becomes unstable.
- If eigenvalues are purely imaginary or zero, the system may exhibit critical stability.
The focus today is to use these ideas to address a fundamental problem in swarm robotics:
The Rendezvous Problem.
More info here
F.2) The Rendezvous Problem in Swarm Robotics
Problem Statement:
In swarm robotics, we have a collection of mobile robots that:
- Measure relative displacement of their neighbors (e.g., $x_j - x_i$).
- Lack global positioning information, meaning they do not know their absolute positions.
Goal:
Achieve rendezvous: make all robots meet at the same position without pre-specifying the meeting point.
Challenge:
Robots do not know their global location (e.g., "the origin"). Instead, they must rely on local relative measurements of their neighbors.
F.2.1) The Two-Robot Case (check for stability)
For two robots ($x_1$, $x_2$):
- Define control laws (the agents simply aim towards each other):
$$ u_1 = x_2 - x_1, \quad u_2 = x_1 - x_2 $$
- Write the system dynamics:
$$ \dot{x} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} x $$
- Eigenvalues of $A$:
  - Using MATLAB (or similar), we find:
$$ \lambda_1 = 0, \quad \lambda_2 = -2 $$
  - One eigenvalue is 0.
  - The other eigenvalue is negative.
- Stability Analysis:
  - A zero eigenvalue suggests [[critical stability]].
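The eigenvalue claim and the resulting behavior can both be checked in a few lines (my own sketch; the final meeting point being the average of the initial positions holds because this particular $A$ is symmetric):

```python
import numpy as np
from scipy.linalg import expm

# Two-robot rendezvous: x_dot = A x with A = [[-1, 1], [1, -1]].
A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])
eigvals = np.linalg.eigvals(A)
print(sorted(eigvals.real))   # [-2.0, 0.0] → critical stability

# Simulate: both robots converge to the same position.
x0 = np.array([[5.0], [1.0]])
xT = expm(A * 10.0) @ x0
print(xT.ravel())   # ≈ [3.0, 3.0]: rendezvous at the average of 5 and 1
```

The zero-eigenvalue direction $[1, 1]^T$ (the shared position) survives, while the difference $x_1 - x_2$ decays at rate $e^{-2t}$.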
F.2.1.1) Null Space of "A" (the rendezvous meeting point)
Did you know?: the critically stable system does not converge to the origin but instead to the [[null space]] of $A$.
Video: https://youtu.be/uQhTuRlWMxw
Let's recap....
The null space of $A$ is the set of all vectors $x$ such that $Ax = 0$.
It means that the [[null space]] of $A$ is exactly the set of equilibrium points of $\dot{x} = Ax$.
So, when our system is decomposed along the eigenvector directions:
- If $\lambda_i = 0$, then $e^{\lambda_i t} = 1$, so this component remains constant.
- If $\lambda_i$ has a negative real part, then $e^{\lambda_i t} \to 0$ as $t \to \infty$.
Thus, all components of $x$ outside the null space decay, and the state converges to its projection onto the null space of $A$.
Example: The Two-Robot Case
For two robots ($A = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}$):
$$ \mathrm{null}(A) = \mathrm{span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\} $$
with $\begin{bmatrix} 1 & 1 \end{bmatrix}^T$ the eigenvector associated with $\lambda_1 = 0$.
Interpretation: the states converge to a point where $x_1 = x_2$:
- Both robots ($x_1$, $x_2$) converge to the same position ($x_1 = x_2$).
- The difference between the robots ($x_1 - x_2$) approaches 0.
Thus, rendezvous is achieved, even though the final position (the actual meeting point) was never specified in advance.
F.2.2) Extending to Multiple Robots
General Control Law:
For multiple robots, let each robot $i$ move toward its neighbors $N_i$:
$$ \dot{x}_i = \sum_{j \in N_i} (x_j - x_i) $$
Stacked System:
Stacking all robot positions into a vector $x$,
the system can be written as:
$$ \dot{x} = -Lx $$
where $L$ is the [[Graph Laplacian]] of the network.
Graph Laplacian Properties:
- $L$ has:
  - One zero eigenvalue ($\lambda_1 = 0$) if the graph is connected.
  - All other eigenvalues positive ($\lambda_i > 0$).
- $-L$ has:
  - One zero eigenvalue ($-\lambda_1 = 0$).
  - All other eigenvalues negative ($-\lambda_i < 0$).
  - Therefore the multiple-robot case has Critical Stability.
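These Laplacian properties are easy to confirm on a small network. Here is a sketch of my own for four robots connected in a path (1—2—3—4): build $L = D - \text{Adjacency}$, inspect its spectrum, and simulate $\dot{x} = -Lx$:

```python
import numpy as np
from scipy.linalg import expm

# Path graph on 4 robots: 1—2—3—4. Laplacian L = DegreeMatrix - Adjacency.
Adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
L = np.diag(Adj.sum(axis=1)) - Adj

eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals)   # one zero eigenvalue, the rest positive (connected graph)

# Consensus: x_dot = -L x drives all positions to the same value.
x0 = np.array([0.0, 2.0, 4.0, 10.0])
xT = expm(-L * 20.0) @ x0
print(xT)   # ≈ [4, 4, 4, 4]: the average of the initial positions
```

The consensus value being the exact average is a property of undirected (symmetric-Laplacian) graphs like this one; for directed networks the agreement value is a different weighted combination.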
F.2.2.1) Null Space of "L" (the rendezvous meeting point)
The null space of $L$ (for a connected graph) is spanned by the vector of all ones: $\mathbf{1} = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T$.
Key Insight:
- All agents converge to the same value ($x_1 = x_2 = \cdots = x_n$).
- This ensures consensus (e.g., all robots agree on position).
The corresponding system is critically stable, as the zero-eigenvalue component (the agreement value) persists while all other components decay.
F.3) Real-World Implementations
Practical Example: two robots executing the algorithm in real time:
- Robots align their movement toward consensus.
- The state of the system asymptotically approaches the null space of the system matrix.
Control Design: Use a simple PID go-to-goal controller:
- Robots share relative positions.
- Consensus equations define intermediary goal points.
Key Result:
- The robots successfully achieve rendezvous and demonstrate the effectiveness of the consensus equation.
Other Simulation: In simulation, robots:
- Navigate their environment.
- Avoid obstacles.
- Achieve rendezvous or other formations.
F.4) Conclusion: Solution to the Rendezvous Problem
- The rendezvous problem is solved using the [[consensus equation]]: $\dot{x} = -Lx$.
- Critical stability ensures convergence to the null space of $L$, where all agents agree.
More info here
- The [[consensus equation]] can be extended to solve more complex multi-robot tasks, including:
- Splitting into subgroups.
- Avoiding obstacles.
- Discovering missing agents.
By building on these foundations, you can design sophisticated algorithms for swarm robotics and network control systems.
G) Attempting Linear System Stability with Output Feedback Control: Eigenvalue Analysis (🎦 video 3.7)
Video: https://youtu.be/HmqOnsRH73w
G.1) Recap: Stability and Eigenvalues
[[Stability of a linear system]] depends on the eigenvalues of its system matrix.
- Asymptotic Stability: All eigenvalues must have strictly negative real parts.
- Critical Stability: At least one eigenvalue has zero real part, and the rest have non-positive real parts.
- Instability: Any eigenvalue with a positive real part causes the system to "blow up."
Today, we aim to achieve asymptotic stability by designing a feedback controller.
G.2) State-Space System and Output Feedback
We start with a linear system in state-space form:
$$ \dot{x} = Ax + Bu, \quad y = Cx $$
where:
- $x$: State vector (e.g., position and velocity of a robot),
- $u$: Control input,
- $y$: System output.
Goal:
Design a feedback controller that uses the output $y$ to compute the input $u$, so that the closed-loop system is asymptotically stable.
G.3) Example1: World's Simplest Robot
We revisit the simple robot system, which is a point mass on a line:
$$ \ddot{p} = u $$
with the states being $x_1 = p$ and $x_2 = \dot{p}$.
Remember the State Dynamics written in the form $\dot{x} = Ax + Bu$, $y = Cx$;
then we can obtain the matrices:
$$ A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix} $$
State Variables:
- $x_1$: Position of the robot. <---- this is the output
- $x_2$: Velocity of the robot.
- $u$: Acceleration (control input).
Now our job is to try to find a way to connect the measured output $y$ back to the control input $u$.
G.3.1) Idea 1: Create a Simple Position-Based Controller (part1)
Position-Based Control:
Our goal is to stabilize the system, meaning we want to drive the state of the robot to zero, or equivalently, to the origin.
- If the position of the robot ($x_1$) is negative (i.e., the robot is on the left side of the origin), we should apply a positive input ($u > 0$) to push the robot to the right.
- Conversely, if $x_1$ is positive (i.e., the robot is on the right side of the origin), we should apply a negative input ($u < 0$) to push the robot to the left.
Behavior:
- When $y < 0$, $u > 0$: The robot moves toward the origin from the left.
- When $y > 0$, $u < 0$: The robot moves toward the origin from the right.
In other words,
the control input $u$ should always oppose the sign of the measured position.
In general, this simple proportional controller can be written mathematically as:
$$ u = -Ky $$
where $K > 0$ is the feedback gain.
Note: here $K$ is a scalar, since both $u$ and $y$ are scalar.
G.3.1.1) How does this Output Feedback change the System Dynamics?
Recall the system is:
$$ \dot{x} = Ax + Bu, \quad y = Cx $$
By applying the control law $u = -Ky$:
- Recall that the output is: $y = Cx$.
- Rewriting the control law in terms of $x$: $u = -Ky = -KCx$.
- Substituting $u$ into the state equation: $\dot{x} = Ax + B(-KCx)$, which gives: $\dot{x} = (A - BKC)x$.
- Defining the new system matrix: $\hat{A} = A - BKC$.
Our next task is to analyze the eigenvalues of $\hat{A}$.
Or in other words...
our task is to choose $K$ so that the eigenvalues of $\hat{A} = A - BKC$ have strictly negative real parts.
G.3.2) Idea 1: Create a Simple Position-Based Controller (part2)
Recall the system is:
$$ \dot{x} = Ax + Bu, \quad y = Cx $$
and now with output feedback, the system is:
$$ \dot{x} = (A - BKC)x $$
Remember that for our simple robot, we chose the Control Law:
$$ u = -Ky $$
where $K > 0$ is the feedback gain.
Substituting the matrices of the simple robot, our robot system is now:
$$ \dot{x} = \left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} K \begin{bmatrix} 1 & 0 \end{bmatrix} \right) x $$
The new system matrix becomes:
$$ \hat{A} = \begin{bmatrix} 0 & 1 \\ -K & 0 \end{bmatrix} $$
G.3.2.1) Eigenvalue Analysis (Problem with Output Feedback)
To analyze stability, compute the eigenvalues of $\hat{A}$ from the characteristic equation $\det(\hat{A} - \lambda I) = \lambda^2 + K = 0$.
The eigenvalues are:
$$ \lambda = \pm i\sqrt{K} $$
where $K > 0$.
Interpretation:
- The eigenvalues are purely imaginary ($\mathrm{Re}(\lambda) = 0$).
- The system is critically stable, meaning it will oscillate around the origin indefinitely.
Problem with Output Feedback
Although $u = -Ky$ always points toward the origin, the controller ignores the velocity:
- When moving away from the origin, the control input pushes toward the origin (correct behavior).
- When moving toward the origin, the control input continues to push, causing overshoot (oscillatory behavior).
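The undamped oscillation is visible numerically. In this sketch (gain $K = 4$ chosen by me), the closed-loop matrix $\hat{A} = \begin{bmatrix} 0 & 1 \\ -K & 0 \end{bmatrix}$ has eigenvalues $\pm i\sqrt{K}$, and the position traces a pure cosine that never decays:

```python
import numpy as np
from scipy.linalg import expm

# Output feedback on the double integrator: A_hat = [[0, 1], [-K, 0]].
K = 4.0
A_hat = np.array([[0.0, 1.0],
                  [-K, 0.0]])

eigvals = np.linalg.eigvals(A_hat)
print(eigvals)   # ±2j: purely imaginary → critical stability (oscillation)

# Position over one period: the robot overshoots and comes back forever.
x0 = np.array([[1.0], [0.0]])
for t in np.linspace(0.0, np.pi, 5):   # ω = sqrt(K) = 2, so the period is π
    print(t, (expm(A_hat * t) @ x0)[0, 0])
# The position follows cos(2t): it returns exactly to 1.0 at t = π, with no decay.
```

Increasing $K$ only raises the oscillation frequency $\sqrt{K}$; no choice of $K$ adds damping, which is why the full state (velocity) is needed.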
Insight:
We need to consider the full state ($x_1$ and $x_2$: position and velocity), not just the output, so the controller can also damp the motion.
G.4) A Better Solution: Pole Placement with Full State Feedback
Instead of output feedback ($u = -Ky$), we use full state feedback:
$$ u = -Kx $$
where $K$ is now a gain (row) vector.
Substituting into the system:
$$ \dot{x} = (A - BK)x $$
with:
$$ \hat{A} = A - BK $$
Eigenvalue Placement:
By choosing $K$, we can place the eigenvalues of $A - BK$ at desired locations in the complex plane.
Video: https://youtu.be/FXSpHy8LvmY
G.5) Next Steps: Design State Feedback Gain and Estimate with State Observers
- In the next lecture, we will:
  - Design state feedback gains ($K$) to place the eigenvalues of $A - BK$ in desired locations.
  - Ensure the system is asymptotically stable.
- For cases where the full state $x$ is not directly measurable:
  - In Module 4 we will explore methods to estimate $x$ from $y$ (e.g., using [[state observers]]).
H) Stabilizing the System Using State Feedback (🎦 video 3.8)
Video: https://youtu.be/_wwIP-9_sMo
H.1) Recap: Output feedback control and Stability
The key insight from last time was that to stabilize the system, we need the real parts of the eigenvalues to be strictly negative.
However, we couldn't achieve this using only output feedback, because it relied solely on the output $y$ (the position) and ignored the velocity.

The eigenvalues were:
$$\lambda = \pm i\sqrt{k}$$
where $k > 0$ is the output feedback gain, so the real parts were zero.

Today, we assume access to the full state $x$ and stabilize the system using state feedback.
Let's get started!
H.2) State Feedback Control Law
- The state-space representation of the system is:
$$\dot{x} = Ax + Bu$$
where $x$ is the state vector and $u$ is the input.

- We propose the following state feedback control law:
$$u = -Kx$$
where $K$ is the feedback gain matrix.

Substituting $u = -Kx$ into the system dynamics:
$$\dot{x} = Ax - BKx$$
simplifies to:
$$\dot{x} = (A - BK)x$$

The new system matrix:
$$\hat{A} = A - BK$$
is called the [[closed-loop dynamics matrix]].

Our task is to design $K$ so that the eigenvalues of $A - BK$ have strictly negative real parts.
H.3) Example: State Feedback for the Simplest Robot
Let's revisit the simplest robot system, which is a point mass on a line:
$$\ddot{p} = u$$
with the states being,
$$x_1 = p \ (\text{position}), \qquad x_2 = \dot{p} \ (\text{velocity})$$
remember the State Dynamics written in the form,
$$\dot{x} = Ax + Bu$$
then we can obtain the matrices,
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Its dynamics are:
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = u$$
Here,
- $x_1$: Position
- $x_2$: Velocity

The control input $u$ is the acceleration applied to the point mass.
H.3.1) Designing the Feedback Gain
For this system:
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

If we set:
$$K = \begin{bmatrix} k_1 & k_2 \end{bmatrix}$$
then the closed-loop dynamics matrix becomes:
$$A - BK = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix}\begin{bmatrix} k_1 & k_2 \end{bmatrix}$$
Simplifying:
$$A - BK = \begin{bmatrix} 0 & 1 \\ -k_1 & -k_2 \end{bmatrix}$$
in other words, the closed-loop dynamics of the system are now,
$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -k_1 & -k_2 \end{bmatrix} x$$
H.3.2) Analyzing the System with Specific Gains (Trial-and-Error)
In Module 4, we will pick gains in a systematic manner, but for now let's do trial and error.

Case 1: complex eigenvalues

For a first choice of gains $k_1, k_2 > 0$ with $k_2^2 < 4k_1$, the eigenvalues of this matrix can be computed (e.g., using MATLAB, or by solving the [[characteristic equation]] $\lambda^2 + k_2\lambda + k_1 = 0$) and are:
$$\lambda = \frac{-k_2 \pm \sqrt{k_2^2 - 4k_1}}{2}$$

- The real part of both eigenvalues is $-k_2/2$, which is strictly negative, so the system is asymptotically stable!
- The imaginary part indicates the presence of damped oscillations in the system's response.

Case 2: real eigenvalues

Now, let's reduce $k_1$ so that $k_2^2 > 4k_1$. You can almost think of this as a [[PD regulator]], because $u = -k_1 x_1 - k_2 x_2$ acts on the position (a proportional term) and on the velocity (a derivative term).

The closed-loop matrix has the same form as before, and the eigenvalues of this matrix are now both real:
$$\lambda = \frac{-k_2 \pm \sqrt{k_2^2 - 4k_1}}{2}$$

- Both eigenvalues are real and strictly negative, so the system is asymptotically stable!
- No imaginary components mean there are no oscillations in the system response!
- However, the response becomes slower compared to the first case.
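The lecture's exact gain values aren't reproduced in these notes, but the two cases are easy to recreate; the numbers below are illustrative choices of mine. With $k_1 = k_2 = 1$ the discriminant $k_2^2 - 4k_1$ is negative, giving a complex pair and damped oscillations; lowering the position gain to $k_1 = 0.2$ makes it positive, giving two real, slower poles:

```python
import numpy as np

def closed_loop_eigs(k1, k2):
    """Eigenvalues of the closed-loop matrix [[0, 1], [-k1, -k2]]."""
    return np.linalg.eigvals(np.array([[0.0, 1.0],
                                       [-k1, -k2]]))

# Case 1 style: k2^2 < 4*k1 -> complex pair, damped oscillations
e1 = closed_loop_eigs(1.0, 1.0)    # -0.5 +/- 0.866i

# Case 2 style: smaller k1 -> real eigenvalues, no oscillations
e2 = closed_loop_eigs(0.2, 1.0)    # about -0.724 and -0.276
```

Note the slow pole near $-0.276$ in the second case: it dominates the response, which is why removing the oscillations costs speed.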
H.3.2.1) Key Observations: Eigenvalues and Behavior
- Eigenvalues Determine Behavior:
	- Negative real parts: System is stable.
	- Imaginary components: Oscillations are present.
	- Smaller negative real parts: Slower response.
	- Larger negative real parts: Faster response.

- Trade-offs in Design:
	- Increasing $k_1$ and $k_2$ reduces oscillations but may slow down the system.
	- Decreasing $k_1$ and $k_2$ can speed up the response but might introduce instability or excessive oscillations.

- Dimensional Analysis:
	- For $x_1$ in meters and $u$ in $\text{m}/\text{s}^2$, the feedback gain $k_1$ must have dimensions $1/\text{s}^2$.
	- For $x_2$ in $\text{m}/\text{s}$, the gain $k_2$ must have dimensions $1/\text{s}$.
H.4) Next Steps: Goals for Module 4
In the next module:
- We’ll explore systematic methods to choose $K$ to achieve desired eigenvalues for the closed-loop dynamics.
- We’ll address how to estimate $x$ from $y$, enabling state feedback in scenarios where only output measurements are available.

For now, we have seen that state feedback control is a powerful method for stabilization, provided we have full access to the system state $x$.
I) Glue Lecture 3 - Systems and State-Space Representation (🎦 video 3.9)
I.1) Systems Overview - What is a System?
In this course, we define systems by how inputs relate to outputs.
This is represented by the state-space form:
$$\dot{x} = Ax + Bu, \qquad y = Cx$$
where: $x$ is the state vector, $u$ is the input, $y$ is the output, and $A$, $B$, $C$ are system matrices.
The goal is to:
- Understand inputs and outputs and their relationship to the system.
- Convert a second-order system into state-space form.
- Linearize a nonlinear second-order system.
I.2) Example 1: Converting a Linear Second-Order System to State-Space Form
Given a [[second-order differential equation]], written here in a general linear form,
$$\ddot{y} + a_1\dot{y} + a_0 y = b_0 u$$
in general we can see that: $y$ is the output (e.g. position), and $u$ is the input (e.g. a control force).
I.2.1) Step 1: Choose State Variables & define Inputs and Outputs
We select:
$$x_1 = y \ (\text{position}), \qquad x_2 = \dot{y} \ (\text{velocity}).$$
I.2.2) Step 2: Rewrite Second-Order as a pair of First-Order Equations
Using the definitions of $x_1$ and $x_2$:
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = \ddot{y} = -a_0 x_1 - a_1 x_2 + b_0 u$$
in other words, the single second-order equation becomes a pair of first-order equations.
I.2.3) Step 3: Represent in State-Space Form
The equations can be expressed as:
$$\dot{x} = Ax + Bu$$
where:
$$A = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ b_0 \end{bmatrix}$$
The output equation is:
$$y = Cx$$
where:
$$C = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
Thus, the system is represented in state-space form.
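The conversion can be checked mechanically. In the sketch below the coefficients $a_0 = 2$, $a_1 = 3$, $b_0 = 1$ are placeholders (the equation in the video may use different values); the companion-form $A$ should have characteristic polynomial $s^2 + a_1 s + a_0$:

```python
import numpy as np

# Generic linear second-order system: y'' + a1*y' + a0*y = b0*u,
# with states x1 = y, x2 = y' (placeholder coefficients)
a0, a1, b0 = 2.0, 3.0, 1.0

A = np.array([[0.0,  1.0],
              [-a0, -a1]])
B = np.array([[0.0],
              [b0]])
C = np.array([[1.0, 0.0]])

# Sanity check: the characteristic polynomial of A should have
# coefficients [1, a1, a0], i.e. s^2 + a1*s + a0
coeffs = np.poly(A)
print(coeffs)
```

If the recovered coefficients don't match the original equation, the state ordering or a sign in $A$ is wrong, which makes this a handy self-check when doing the conversion by hand.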
I.3) Example 2: Linearizing a Nonlinear System
Consider a [[nonlinear second-order differential equation]], written here generically as:
$$\ddot{y} = f(y, \dot{y}) + u$$
Now we need to linearize it around an equilibrium point $(y^*, \dot{y}^*)$.
I.3.1) Step 1: Choose State Variables & define Inputs and Outputs
We select:
$$x_1 = y \ (\text{position}), \qquad x_2 = \dot{y} \ (\text{velocity}).$$
I.3.2) Step 2: Rewrite Second-Order as a pair of First-Order Equations
Using the state variables:
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = f(x_1, x_2) + u$$
in other words, the nonlinear dynamics are now a pair of first-order equations.
I.3.3) Step 3: Linearize Around an Operating Point (use the Jacobian Matrix)
At the operating point $(x_1^*, x_2^*)$, we approximate the dynamics by a first-order Taylor expansion:
$$f(x_1, x_2) \approx f(x_1^*, x_2^*) + \frac{\partial f}{\partial x_1}\Big|_{*}(x_1 - x_1^*) + \frac{\partial f}{\partial x_2}\Big|_{*}(x_2 - x_2^*)$$
where the partial derivatives are the entries of the [[Jacobian Matrix]] evaluated at the operating point.

Notice: the nonlinear term is replaced by a linear function of the deviations from the operating point; we do the same with every nonlinear term that appears in the dynamics.

The linearized equations become (with $\delta x_i = x_i - x_i^*$):
$$\delta\dot{x}_1 = \delta x_2, \qquad \delta\dot{x}_2 = \frac{\partial f}{\partial x_1}\Big|_{*}\,\delta x_1 + \frac{\partial f}{\partial x_2}\Big|_{*}\,\delta x_2 + \delta u$$
I.3.4) Step 4: Represent in State-Space Form
Now we have a linearized system around the operating point:
$$\delta\dot{x} = A\,\delta x + B\,\delta u$$
where:
$$A = \begin{bmatrix} 0 & 1 \\ \frac{\partial f}{\partial x_1}\big|_{*} & \frac{\partial f}{\partial x_2}\big|_{*} \end{bmatrix}$$
and
$$B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Expanding this model recovers the pair of scalar first-order equations above.
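To make the Jacobian step concrete, here is a sketch with a hypothetical pendulum-like nonlinearity $\ddot{y} = -\sin(y) + u$ (my own example, not the one from the video). Linearizing around the equilibrium $x = 0$, $u = 0$ gives $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, which we can confirm with finite differences:

```python
import numpy as np

# Hypothetical nonlinear dynamics: y'' = -sin(y) + u
# States x1 = y, x2 = y', so f(x, u) = [x2, -sin(x1) + u]
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

# Analytic Jacobian of f w.r.t. x at the equilibrium (x = 0, u = 0):
# row 1: d(x2)/dx = [0, 1];  row 2: d(-sin(x1) + u)/dx = [-cos(0), 0]
A = np.array([[0.0,          1.0],
              [-np.cos(0.0), 0.0]])

# Numerical check: finite-difference Jacobian at the equilibrium
eps = 1e-6
A_num = np.column_stack([
    (f(eps * np.eye(2)[:, i], 0.0) - f(np.zeros(2), 0.0)) / eps
    for i in range(2)
])
```

The finite-difference matrix matches the analytic Jacobian, which is a useful sanity check whenever you linearize by hand.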
I.4) Key Insights about Module 3
- State-Space Representation:
	- Convert second-order differential equations into first-order form.
	- Identify the matrices $A$, $B$, $C$ for state-space representation.

- Linearization:
	- Linearize nonlinear systems around a specific operating point.
	- Compute the $A$-matrix by taking partial derivatives with respect to the state variables.

- Matrix Dimensions:
	- Ensure consistency in dimensions when setting up $A$, $B$, and $C$.
J) Programming and Simulation: Go-to-Goal Controller (🎦 video 3.10)
Video: https://youtu.be/5ZFk8MJsJeg
J.1) Overview: Go-to-Goal Controllers
This lecture focuses on Go-to-Goal (GTG) Controllers, used to steer mobile robots from Point A to Point B.
In this week’s programming assignments, you will implement a PID-based Go-to-Goal controller.
This involves:
- Implementing Proportional (P), Integral (I), and Derivative (D) terms of the PID controller.
- Adjusting the gains for optimal performance.
Key Notation:
- Robot's state:
	- Position: $(x, y)$
	- Orientation: $\theta$ (angle with respect to the $x$-axis)

- Goal's state:
	- Position: $(x_g, y_g)$

- Vector to Goal:
	- The vector from the robot to the goal, $(x_g - x, \; y_g - y)$.
	- Its orientation: $\theta_g = \operatorname{atan2}(y_g - y, \; x_g - x)$ (angle with respect to the $x$-axis)
J.2) Controller Design (Go-To-Goal)
- The linear velocity of the robot is kept constant.
- The angular velocity ($\omega$) is computed using a PID controller to steer the robot towards the goal.
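One detail that matters in practice is wrapping the orientation error into $(-\pi, \pi]$ so the robot always turns the short way; the `atan2(sin(e), cos(e))` trick used in the skeleton code does exactly that. A minimal Python sketch:

```python
import math

def normalize_angle(e):
    """Wrap an angle (radians) into (-pi, pi] using the
    atan2(sin(e), cos(e)) trick from the lecture."""
    return math.atan2(math.sin(e), math.cos(e))

# Example: an error of 2*pi + 0.1 rad is really just 0.1 rad
print(normalize_angle(2 * math.pi + 0.1))
```

Without this normalization, a raw error of, say, $350°$ would make the controller spin the robot almost a full turn instead of nudging it $-10°$.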
J.3) PID Controller Implementation (Key Variables)
- Memory for PID terms:
	- `accumulated_error` (`E_k`): Tracks the total error for the Integral term.
	- `previous_error` (`e_k_1`): Tracks the error from the previous step for the Derivative term.
- Gains for PID terms:
	- `kp`: Proportional gain.
	- `ki`: Integral gain.
	- `kd`: Derivative gain.
J.3.1) Skeleton Code for the `execute` Function

The `execute` function in `GoToGoal.m` is responsible for:

- Calculating the heading to the goal and the error between the robot's orientation ($\theta$) and the desired heading ($\theta_g$).
- Computing the PID terms and combining them to calculate $\omega$ (angular velocity).
- Saving the error values for the next step.
Code Block:
```matlab
function [v, w] = execute(obj, x, y, theta, xg, yg, linear_velocity)
    % 1. Compute the heading to the goal
    theta_g = atan2(yg - y, xg - x);   % Angle to goal
    error = theta_g - theta;           % Orientation error

    % Normalize the error to the interval (-pi, pi]
    error = atan2(sin(error), cos(error));

    % 2. Compute the PID terms
    proportional_term = obj.kp * error;

    % Integral term (update accumulated error)
    obj.accumulated_error = obj.accumulated_error + error;
    integral_term = obj.ki * obj.accumulated_error;

    % Derivative term (difference with previous error)
    derivative_term = obj.kd * (error - obj.previous_error);

    % 3. Compute angular velocity
    w = proportional_term + integral_term + derivative_term;

    % 4. Save error for the next step
    obj.previous_error = error;

    % Output linear velocity and angular velocity
    v = linear_velocity;
end
```
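To experiment outside the MATLAB simulator, here is a small Python translation of the same structure (the class name, gain values, and goal are my own illustrative choices; unlike the skeleton above, this version folds the timestep `dt` into the integral and derivative terms), driving a unicycle model toward a goal:

```python
import math

class GoToGoalPID:
    """Minimal Python sketch of the PID steering law above."""

    def __init__(self, kp=4.0, ki=0.0, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.accumulated_error = 0.0
        self.previous_error = 0.0

    def execute(self, x, y, theta, xg, yg, v, dt):
        theta_g = math.atan2(yg - y, xg - x)          # heading to goal
        e = theta_g - theta                           # orientation error
        e = math.atan2(math.sin(e), math.cos(e))      # normalize

        self.accumulated_error += e * dt
        w = (self.kp * e
             + self.ki * self.accumulated_error
             + self.kd * (e - self.previous_error) / dt)
        self.previous_error = e
        return v, w

# Tiny unicycle simulation: drive from the origin toward (2, 2)
ctrl = GoToGoalPID()
x = y = theta = 0.0
dt, v = 0.01, 0.5
for _ in range(400):                                  # 4 seconds
    v_out, w = ctrl.execute(x, y, theta, 2.0, 2.0, v, dt)
    x += v_out * math.cos(theta) * dt
    y += v_out * math.sin(theta) * dt
    theta += w * dt
```

After a short transient the robot's heading locks onto the goal direction (about $\pi/4$ here) and it closes most of the 2.83 m gap at its constant 0.5 m/s.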
J.4) Demo: PID Controller Behavior
J.4.1) Desired vs. Actual Behavior (Output Graph)
Example graph output:

- Red dashed line: Desired orientation ($\theta_g$) to the goal.
- Blue solid line: Actual orientation ($\theta$) of the robot.
- The blue line should nearly match the red line.
- Small oscillations are acceptable, but they should dampen over time.
Key objectives:
- Minimize the difference between the blue and red lines.
- Adjust gains to:
- Minimize overshoot.
- Reduce oscillations.
- Ensure stability.
J.4.2) How to Stop the Robot at the Goal: Stop Condition and Goal Adjustment
- Stopping Condition:
	- The robot stops when it gets near the goal.
	- This condition is included in the provided code.
- Adjusting Goal Position:
	- Modify the `goal` variable in the constructor of the `Supervisor` class: `obj.goal = [-1, 1]; % Example: Set goal at (-1, 1) in meters`
J.5) Summary: Mobile Robot Go-To-Goal Controller - PID
- Implementing PID:
	- Implement all three PID terms in the `execute` function.
	- Use memory variables to track errors for the Integral and Derivative terms.
- Testing and Optimization:
	- Test your controller with the provided stop condition and adjustable goal.
	- Tune the PID gains (`kp`, `ki`, `kd`) to achieve minimal overshoot and smooth performance.
- Expected Results:
	- The robot should steer smoothly to the goal with minimal oscillations and overshoot.
	- The angular velocity ($\omega$) computed by the PID controller aligns the robot's orientation with the desired goal direction.
K) Hardware Notes (🎦 video 3.11)
Video: https://youtu.be/-jjnv6qkQ_8
Z) 🗃️ Glossary
| File | Definition |
| --- | --- |