Module 3 - Introduction to Linear State-Space Control Theory

#Physics #Engineering #Robotics #Control_Theory #Linear_Algebra #Math

🔙 Previous Part | Next Part 🔜

↩️ Go Back

Table of Contents:


A) Introduction to Linear Systems and State-Space Representation (🎦 video 3.1)

A.1) Goals of this Module: Linear Systems Overview

This module focuses on:

Key Objectives:

  1. Develop a model that is:
    • Rich: Captures robotic system behavior.
    • Manageable: Simple enough for practical use.
  2. Use linear systems for general and effective modeling.
    • Linear Systems: Represent dynamic systems compactly and effectively.
    • Transition from specific systems (e.g., point masses) to state-space representations.

A.2) Point Mass Dynamics (A Simple Robot)

A.2.1) Initial Representation: using derivatives to model our System

A.2.2) State-Space Transformation

  1. Define state variables:
    • $x_1 = p$ (position).
    • $x_2 = \dot{p}$ (velocity).
  2. Dynamics of state variables:

Now we can rewrite our model, in State-Space Form:

Summary:


A.2.2.1) State-Space Generalization

Matrices:

  1. A: System dynamics.
  2. B: Input matrix.
  3. C: Output matrix.

General State-Space Equations:

  1. State Dynamics: $\dot{x} = Ax + Bu$
  2. Output: $y = Cx$

A.3) Example: 2D Point Mass (model with State-Space Equations)

System Description

Extended State Variables

  1. $x_1 = p_x$ (position in the x-direction).

  2. $x_2 = \dot{p}_x$ (velocity in the x-direction).

  3. $x_3 = p_y$ (position in the y-direction).

  4. $x_4 = \dot{p}_y$ (velocity in the y-direction).

  5. $u_1 = u_x$ (controlled acceleration in the x-direction).

  6. $u_2 = u_y$ (controlled acceleration in the y-direction).

  7. $y_1 = p_x$ (position in the x-direction - observed output; same as $x_1$).

  8. $y_2 = p_y$ (position in the y-direction - observed output; same as $x_3$).

Write the matrices:

A.3.1) State-Space Matrices

Again, these are our [[General State-Space Equations]]:
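Writing these out for the 2D point mass (a reconstruction; the entries follow directly from the state definitions above):

$$
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
$$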

A.4) Conclusion: LTI Systems and Dimensions

[[Linear Time-Invariant (LTI) Systems]]: Foundation for analyzing robotic control systems.

General Dimensions

  1. State Vector ($x$):
    • $x \in \mathbb{R}^n$.
  2. Input Vector ($u$):
    • $u \in \mathbb{R}^m$.
  3. Output Vector ($y$):
    • $y \in \mathbb{R}^p$.

Matrix Dimensions

  1. A: n×n (State dynamics).
  2. B: n×m (Input mapping).
  3. C: p×n (Output mapping).

Dimensional Validations
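As a quick sanity check that the dimensions are consistent:

$$
\underbrace{\dot{x}}_{n \times 1} = \underbrace{A}_{n \times n}\,\underbrace{x}_{n \times 1} + \underbrace{B}_{n \times m}\,\underbrace{u}_{m \times 1}, \qquad \underbrace{y}_{p \times 1} = \underbrace{C}_{p \times n}\,\underbrace{x}_{n \times 1}
$$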

Conclusion

State-space representation:

  • Compact and general.
  • Allows for systematic analysis.

The A, B, and C matrices encapsulate system specifics.

[[Linear Time-Invariant (LTI) Systems]]:

  • Foundation for analyzing robotic control systems.

Next lecture: Origins and derivations of these systems.


B) State-Space Models (Linear vs Non-Linear Systems) (🎦 video 3.2)

B.1) Recap: Linear Time-Invariant (LTI) Systems in State-Space Form

This lecture explores the generality of the LTI model:

Key components:

System Matrices:

Note: the system dynamics matrix A is given to us; it reflects the nature of the system. But B and C are partly designed by us, through the choice of actuators and sensors.

Question: How do we select the input u to control the output y of the system?


B.2) Revisiting the Car Model in State Space Form

B.2.1) Example1: Designing a Cruise Controller (velocity measurement)

Remember from [[Module 1 - Introduction to Controls and Dynamical Models]]

Dynamic Equation:

$$
\dot{v} = \frac{c}{m} u - \gamma v
$$

Where:
  • $v$: Velocity.
  • $u$: Control input.
  • $c$: Actuator constant.
  • $m$: Mass.
  • $\gamma$: Drag coefficient.

State-Space Representation:

![](https://i.imgur.com/PWUgMPT.png)

Summary:
  • State: $x = v$.
  • Output: $y = Cx = v$.
  • Matrices:

$$
A = -\gamma, \quad B = \frac{c}{m}, \quad C = 1
$$
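As a sanity check, here is a minimal simulation sketch of this model (the values of $c$, $m$, $\gamma$ and the constant throttle $u$ are made-up illustrations):

```matlab
% Minimal cruise-control model simulation: v_dot = (c/m)*u - gamma*v
c = 1; m = 1000; gamma = 0.05;   % assumed illustrative parameters
u = 500;                          % constant throttle input (assumed)
dt = 0.1; T = 0:dt:200;
v = zeros(size(T));               % start at rest
for k = 1:length(T)-1
    vdot = (c/m)*u - gamma*v(k);  % the dynamic equation
    v(k+1) = v(k) + dt*vdot;      % forward-Euler integration
end
plot(T, v); xlabel('t [s]'); ylabel('v [m/s]');
% v settles at the steady state (c/m)*u/gamma, here 10 m/s
```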

B.2.2) Example2: Designing a Self-Driving Car (position measurement)

Same Dynamic Equation:

$$
\dot{v} = \frac{c}{m} u - \gamma v
$$

Where:
  • $v$: Velocity.
  • $u$: Control input.
  • $c$: Actuator constant.
  • $m$: Mass.
  • $\gamma$: Drag coefficient.

![](https://i.imgur.com/kkvGnmK.png)

Summary:

Expanded State:

$$
x = \begin{bmatrix} p \\ v \end{bmatrix}
$$

  • $p$: Position.
  • $v$: Velocity.

Dynamics:

$$
\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & -\gamma \end{bmatrix} x + \begin{bmatrix} 0 \\ \frac{c}{m} \end{bmatrix} u
$$

Output (position):

$$
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
$$

B.3) Pendulum Example (non-linear models)

![](https://i.imgur.com/cHkq75O.png)

Nonlinear Dynamics:

$$
\ddot{\theta} = -\frac{g}{L} \sin(\theta) + C u
$$

  • $\theta$: Pendulum angle.
  • $L$: Length of the pendulum.
  • $g$: Acceleration due to gravity.
  • $C$: Torque constant.

B.3.1) Using Small Angle Approximation

For small $\theta$, $\sin(\theta) \approx \theta$.

![](https://i.imgur.com/AhqncQw.png)

State-Space Representation:

![](https://i.imgur.com/7CPOKK7.png)

Summary:

  • State:

$$
\delta x = \begin{bmatrix} \delta\theta \\ \delta\dot{\theta} \end{bmatrix}
$$
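Putting the small-angle model into state-space form (the standard result, using the state $\delta x$ defined above):

$$
\dot{\delta x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{L} & 0 \end{bmatrix} \delta x + \begin{bmatrix} 0 \\ C \end{bmatrix} \delta u
$$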


B.4) Example: Two Robots on a Line (Swarm robotics)

And we can control the velocities of these robots,

B.4.1) The Rendezvous Problem

Coming Soon

Note: we will discuss this problem further when we introduce [[Stability of Linear Systems]] and the [[Consensus Equation]].


B.5) Unicycle Robot (non-linear system)

B.5.1) Challenges - small angle is not a silver bullet

Small Angle Approximation (dumb idea, but let's try it)


B.6) Key Takeaways: LTI and Non Linear Models

  1. LTI Models:
    • General representation of dynamic systems.
    • Compactly encode system dynamics with A, B, C matrices.
  2. Simplifications:
    • Approximations (e.g., small angles) may not always yield valid LTI models.
  3. Next Steps:
    • Develop systematic approaches for deriving LTI models from [[nonlinear systems]].

C) Producing Linear Models from Nonlinear Systems (🎦 video 3.3)

C.1) Introduction: Linearizations

Linearization: The process of creating linear models from nonlinear systems.

Analogy: Classifying systems as "linear" and "nonlinear" is like classifying objects as "bananas" and "non-bananas":

Goal:

Let's see how to do this.


C.2) Process of Linearization

Goal:

Here is a [[non-linear model]],

$$
\dot{x} = f(x, u), \qquad y = h(x)
$$

where $x$ is the state, $u$ is the input, $y$ is the output, and $f$, $h$ are (possibly nonlinear) functions.

C.2.1) Find a "local description" around a operating point

  1. Define Operating Points: Choose points where the system operates (e.g., pendulum straight down or up).

    • The actual state is the operating point ($x_o$) plus a small deviation ($\delta x$):

$$
x = x_o + \delta x
$$

  2. Linearize the Model:

Remember from: [[Module 1 - Introduction to Controls and Dynamical Models]]

So, with the following assumptions, we can simplify,

Then, our Linear Approximation becomes,

$$
\dot{\delta x} \approx A\,\delta x + B\,\delta u, \qquad A = \frac{\partial f}{\partial x}\bigg|_{(x_o, u_o)}, \qquad B = \frac{\partial f}{\partial u}\bigg|_{(x_o, u_o)}
$$

$$
\delta y \approx C\,\delta x, \qquad C = \frac{\partial h}{\partial x}\bigg|_{x_o}
$$

  3. Result:

Summary:

Now, we can compute the [[Jacobians]],
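Concretely, for an $n$-state system the Jacobian of $f$ with respect to $x$ collects all the partial derivatives, evaluated at the operating point:

$$
A = \frac{\partial f}{\partial x}\bigg|_{(x_o, u_o)} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix}_{(x_o, u_o)}
$$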


C.3) Example 1: Inverted Pendulum

C.3.1) System Description (Inverted Pendulum)

Therefore, the [[non-linear system]], can be written like this,

C.3.2) Linearization (Inverted Pendulum) - good model!

Pick an operating point, for example, when the pendulum is straight and balanced,

Let's compute A,

Let's compute B,

Let's compute C,

After computing the [[Jacobians]], we get,

$$
A = \begin{bmatrix} 0 & 1 \\ \frac{g}{L} & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}
$$

Now we have a linearized system around the Operating Point: $\theta_o = 0$, $\dot{\theta}_o = 0$, $u_o = 0$.

$$
\dot{\delta x} = A\,\delta x + B\,\delta u, \qquad \delta y = C\,\delta x
$$

we can expand this model like this,

$$
\begin{bmatrix} \dot{\delta\theta} \\ \ddot{\delta\theta} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{g}{L} & 0 \end{bmatrix} \begin{bmatrix} \delta\theta \\ \dot{\delta\theta} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \delta u
$$

finally,

$$
\begin{bmatrix} \dot{\delta\theta} \\ \ddot{\delta\theta} \end{bmatrix} = \begin{bmatrix} \dot{\delta\theta} \\ \frac{g}{L}\,\delta\theta + \delta u \end{bmatrix}
$$

Interpretation:

  1. $\dot{\delta\theta}$: the angular velocity (as expected from the state representation).

  2. $\ddot{\delta\theta} = \frac{g}{L}\,\delta\theta + \delta u$: the linearized second-order equation of motion.

    • This term shows that the pendulum's angular acceleration ($\ddot{\delta\theta}$) depends on:
      • $\delta\theta$ (the angular displacement) scaled by $\frac{g}{L}$.
      • $\delta u$ (the control input).

A quick numerical check of this linearization follows below.
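A minimal sketch (assumed values $g = 9.81$, $L = 1$) confirming that the linearized inverted pendulum is unstable, since one eigenvalue has a positive real part:

```matlab
g = 9.81; L = 1;            % assumed illustrative values
A = [0 1; g/L 0];           % linearized dynamics about theta = 0 (upright)
B = [0; 1];
C = [1 0];
lambda = eig(A)             % eigenvalues: +/- sqrt(g/L) ~= +/- 3.13
% one eigenvalue is real and positive, so the upright
% equilibrium of the inverted pendulum is unstable
```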

C.4) Example 2: Unicycle Robot (non-linear system)

C.4.1) System Description (Unicycle Robot)

In other words,

$$
x = \begin{bmatrix} x \\ y \\ \phi \end{bmatrix}, \qquad u = \begin{bmatrix} v \\ \omega \end{bmatrix}, \qquad y = x
$$

C.4.2) Linearization (Unicycle Robot) - it still doesn't give us a good model

And if we use these matrices for our linearized system around the operating point, we get this,

$$
\dot{\delta x} = A\,\delta x + B\,\delta u
$$

$$
\begin{bmatrix} \dot{\delta x} \\ \dot{\delta y} \\ \dot{\delta\phi} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \\ \delta\phi \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \delta v \\ \delta\omega \end{bmatrix} = \begin{bmatrix} \delta v \\ 0 \\ \delta\omega \end{bmatrix}
$$

This result is strange: the small change of velocity in the y-direction ($\dot{\delta y}$) is 0. In the linearized model the unicycle can use the throttle to move along the x-axis and can steer a little, but none of that produces motion in the y-direction.

Understanding the Result:

  1. $\dot{\delta y} = 0$:

    • This implies that small changes in the control inputs (linear velocity $\delta v$ or angular velocity $\delta\omega$) have no direct effect on $\delta y$.
    • Why?
      The model is linearized at a point where the robot is not moving, and the heading angle $\phi = 0$. In this scenario:
      • Motion in the y-direction arises from the coupling between angular motion ($\omega$) and linear velocity ($v$).
      • Since $\phi = 0$, changes in $\omega$ do not result in y-direction motion in the linearized approximation.
  2. Physical Interpretation:

    • At the operating point:
      • Small changes in the linear velocity $\delta v$ only affect motion along the x-axis.
      • Small changes in the angular velocity $\delta\omega$ change the orientation $\delta\phi$, but without coupling to $\delta y$ in the linearized model.
    • This means that in this linearized framework, the unicycle cannot directly move in the y-direction.
  3. Limitations of the Linearization:

    • The linearized model loses important nonlinear dynamics, such as how angular velocity $\omega$ over time contributes to motion in the y-direction via a curved trajectory.
    • Reality: If the robot rotates (nonlinear behavior), its orientation changes, leading to movement in the y-direction. However, this is not captured in the linearized dynamics around the rest state.

Conclusion:
This linearized model is valid at the specific operating point, but it does not mean that the robot is fundamentally incapable of moving in the y-direction.

Observations

  • Linearization does not fully capture nonlinear dynamics.
  • Example Issue:
    • The linearized model suggests no control over the y-direction at $\phi = 0$.
    • The nonlinear system can achieve motion in y by turning and driving.


C.5) Key Takeaways: Linearization Process

  1. Linearization Process:

    • Approximate nonlinear systems around operating points.
    • Resulting models are simpler but only valid locally.
  2. Challenges:

    • Linearization may fail to capture essential nonlinear behaviors (e.g., unicycle model).
  3. Practical Use:

    • Linear models provide insights and simplify analysis.
    • Lessons from linear models often apply to nonlinear systems with modifications.
  4. Next Steps:

    • Explore systematic methods for deriving useful linear models.

D) Behavior and Solutions of Homogeneous LTI Systems (doing the Math) (🎦 video 3.4)

D.1) Introduction: Studying LTI System Behavior

In this lecture, we focus on understanding how [[LTI systems]] behave by finding their solutions and examining their dynamics.

The solutions we derive will allow us to talk about the system's behavior systematically.

Starting Point: Simplifying the System: We begin by simplifying the system,

$$
\dot{x} = Ax
$$

This setup represents the physical dynamics of the system without external interference.
It is also known as a [[homogeneous linear system]], as there is no external forcing term.



D.2) Case 1: Using Scalars to Build Intuition

Let’s first analyze a scalar version of the [[homogeneous system]], where x is a single number, not a vector:

$$
\dot{x} = ax
$$

Note: this is a [[first-order linear differential equation]], and to solve it we can notice it is a [[separable differential equation]], so we apply that method to find a solution.

The solution to this equation is:

$$
x(t) = e^{a(t - t_0)}\, x_0
$$

where $x_0 = x(t_0)$ is the initial condition at time $t_0$.
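For completeness, the separation-of-variables steps:

$$
\frac{dx}{dt} = ax \;\Rightarrow\; \int_{x_0}^{x(t)} \frac{dx}{x} = \int_{t_0}^{t} a\, d\tau \;\Rightarrow\; \ln\frac{x(t)}{x_0} = a(t - t_0) \;\Rightarrow\; x(t) = e^{a(t - t_0)}\, x_0
$$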

D.2.1) Verifying the Solution of the ODE

To confirm this solution:

  1. Check the initial condition:
    Substitute $t = t_0$:

$$
x(t_0) = e^{a(t_0 - t_0)}\, x_0 = x_0
$$

    This satisfies the initial condition.

  2. Check the dynamics:
    Compute the time derivative of $x(t)$:

$$
\frac{d}{dt} x(t) = a\, e^{a(t - t_0)}\, x_0
$$


Video: https://youtu.be/m2MIpDrF7Es

Then, substituting back:

$$
\frac{d}{dt} x(t) = a\, x(t)
$$

This matches the original equation $\dot{x} = ax$.

Thus, the solution is valid. ✅


D.3) Case 2: Solving the Matrix Case (Extending to Higher Dimensions)

For systems where $x \in \mathbb{R}^n$, the dynamics are:

$$
\dot{x} = Ax
$$

The solution remains similar but uses a [[matrix exponential]]:

$$
x(t) = e^{A(t - t_0)}\, x_0
$$

Here, $e^{A(t - t_0)}$ is the [[matrix exponential]], defined as a [[Taylor Series]]:

$$
e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!}
$$

This extends the scalar exponential to matrices.
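A small sketch comparing MATLAB's built-in matrix exponential with a truncated Taylor series (the matrix $A$ is an arbitrary illustration):

```matlab
A = [0 1; -2 -3];               % arbitrary example matrix
t = 0.5;
Phi = expm(A*t);                % built-in matrix exponential
% truncated Taylor series approximation of e^{At}
S = eye(2); term = eye(2);
for k = 1:20
    term = term * (A*t) / k;    % accumulates A^k t^k / k!
    S = S + term;
end
max(abs(Phi(:) - S(:)))         % tiny residual: the series converges
```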

D.3.1) Why Matrix Exponentials Work

Definition of the exponential as a [[Taylor Series]] expansion:

The [[matrix exponential]] allows us to generalize the behavior of exponentials to multidimensional systems.

In particular, we care about the derivative. We can differentiate this infinite series term by term,

Note 1: take the derivative with respect to $t$

(use the [[power rule]])

Note 2: we can re-index the summation to shift the index and simplify the series.
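Carrying out these two steps:

$$
\frac{d}{dt} e^{At} = \sum_{k=1}^{\infty} \frac{A^k\, k\, t^{k-1}}{k!} = A \sum_{k=1}^{\infty} \frac{A^{k-1} t^{k-1}}{(k-1)!} = A \sum_{j=0}^{\infty} \frac{A^j t^j}{j!} = A\, e^{At}
$$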

Important observations:


D.3.2) State Transition Matrix (fancy name for the Matrix Exponential)

The matrix exponential $e^{A(t - t_0)}$ is also called the [[State Transition Matrix]] and is often denoted:

$$
\Phi(t, t_0) = e^{A(t - t_0)}
$$

So we can rewrite the solution of our differential equation using this new definition.

We can use $t_0$,

$$
x(t) = \Phi(t, t_0)\, x(t_0)
$$

or, in general, use any time $\tau$,

$$
x(t) = \Phi(t, \tau)\, x(\tau)
$$

Just as before, the properties of the [[matrix exponential]] still apply,


D.4) Including Inputs: Find the General Solution for a Controlled System

For systems with inputs:

$$
\dot{x} = Ax + Bu
$$

The solution incorporates the input's influence:

$$
x(t) = \Phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau
$$

Key Terms: the first term is the natural (homogeneous) response driven by the initial condition; the second is a [[Convolution Integral]] that accumulates the input's effect over time.
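A numerical sanity check of this formula (system matrices and input are arbitrary illustrations; we compare the closed-form solution against brute-force Euler integration):

```matlab
A = [0 1; -2 -3]; B = [0; 1]; x0 = [1; 0];
u = @(t) sin(t);                           % assumed test input
dt = 1e-3; T = 0:dt:2; x = x0;
for k = 1:length(T)-1                      % brute-force Euler integration
    x = x + dt*(A*x + B*u(T(k)));
end
% closed form: x(t) = Phi(t,0)*x0 + integral of Phi(t,tau)*B*u(tau)
t = T(end); xf = expm(A*t)*x0;
for tau = 0:dt:t-dt                        % Riemann sum of the convolution term
    xf = xf + dt * expm(A*(t-tau)) * B * u(tau);
end
[x xf]                                     % the two columns nearly match
```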

D.4.1) Validation of General Solution

  1. Check Initial Condition:
    At $t = t_0$, the integral term vanishes:

$$
x(t_0) = \Phi(t_0, t_0)\, x(t_0) = x(t_0)
$$

    This satisfies the initial condition.

  2. Check Dynamics:
    Differentiating $x(t)$:

    • The first term contributes $Ax$.
    • The second term requires [[Leibniz's Integral rule]] for differentiation under the integral sign:

$$
\frac{d}{dt} \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau = \Phi(t, t)\, B\, u(t) + \int_{t_0}^{t} \frac{\partial}{\partial t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau
$$

    Since $\Phi(t, t) = I$ and $\frac{\partial}{\partial t}\Phi(t, \tau) = A\,\Phi(t, \tau)$, this derivative equals $Bu(t) + A \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau$.

    After simplification,

    this matches the equation $\dot{x} = Ax + Bu$.

Thus, the solution is valid. ✅


D.5) Using the General Solution for the Output Equation

General LTI System:

  1. State dynamics:

$$
\dot{x} = Ax + Bu
$$

with the General Solution,

$$
x(t) = \Phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau
$$

  2. For systems with measured outputs:

$$
y = Cx
$$

The output is derived directly from the state solution.

![](https://i.imgur.com/bdKalmT.png)

D.6) Summary: the State Equation Solution

After all the mathematical groundwork, the behavior of LTI systems can be expressed as the State Equation:

$$
x(t) = \Phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau
$$

The state transition matrix $\Phi(t, t_0)$ is critical for solving LTI systems:

$$
\Phi(t, t_0) = e^{A(t - t_0)}
$$

The Output Equation is derived directly from the state:

$$
y(t) = C\, x(t)
$$

  • These solutions are foundational for analyzing system behavior.
  • This formalism allows us to describe the behavior of LTI systems comprehensively and prepares us to analyze stability and performance in subsequent lectures.

E) Stability in LTI Systems (🎦 video 3.5)

Video: https://youtu.be/dvtaU00z2Wc

E.1) Introduction to Stability

In the previous lecture, we worked through the technical details to derive the general solution of an LTI system:

$$
x(t) = \Phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau
$$

The reason wasn't just to enjoy mathematical rigor but to use that understanding to analyze [[stability]].

[[Stability]] is crucial because if a system blows up, there's nothing we can do:
  • Quadcopters fall out of the sky.
  • Robots drive off into the void.
  • Cars crash into obstacles.

![](https://i.imgur.com/mku9YmD.png)

Remember from [[Module 1 - Introduction to Controls and Dynamical Models]]

![](https://i.imgur.com/38MDQgE.png)

Design objectives are layered:
  1. First priority: Ensure stability.
  2. Once stable, work on tracking references,
  3. then robustness, and efficiency.
  4. Finally, optimize for speed, energy usage, etc.

E.2) Starting Simple: Scalar Systems with No Input (checking stability)

Consider the scalar system:

$$
\dot{x} = ax
$$

Solution:

$$
x(t) = e^{at} x(0)
$$

What Happens for Different $a$?

  1. If $a > 0$:
    • $e^{at}$ grows exponentially.
    • The system blows up.
![](https://i.imgur.com/BlrgIeV.png)
  2. If $a < 0$:
    • $e^{at}$ decays exponentially.
    • The system goes to zero ($x(t) \to 0$).
![](https://i.imgur.com/xTKvgnZ.png)
  3. If $a = 0$:
    • $e^{at} = 1$.
    • The system does nothing: $x(t) = x(0)$.
![](https://i.imgur.com/2mZOWzJ.png)

Notice we only have 3 possible outcomes,

![](https://i.imgur.com/zrwWxBH.png)

![](https://i.imgur.com/ZjHyhB8.png)

E.3) Three Types of Stability

  1. [[Asymptotic Stability]]:
    • The system is asymptotically stable if, for all initial conditions:

$$
x(t) \to 0 \quad \forall x(0)
$$

  2. [[Instability]]:
    • The system is unstable if there exists an initial condition that blows up the system:

$$
\exists\, x(0) \;\text{s.t.}\; \|x(t)\| \to \infty
$$

  3. [[Critical Stability]]:
    • The system is critically stable if:
      • $x(t)$ neither blows up nor converges to zero.
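A tiny sketch visualizing the three scalar outcomes (the values of $a$ are arbitrary):

```matlab
t = 0:0.01:5; x0 = 1;
a_vals = [0.5, -0.5, 0];            % unstable, asymptotically stable, critically stable
hold on
for a = a_vals
    plot(t, exp(a*t)*x0)            % x(t) = e^{at} x(0)
end
legend('a > 0: blows up', 'a < 0: decays to 0', 'a = 0: stays put')
xlabel('t'); ylabel('x(t)')
```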

E.4) Generalizing to Matrices: how do we determine stability?

Systems of the Form (with no controller input)

$$
\dot{x} = Ax
$$

remember, these systems have the solution,

$$
x(t) = e^{A(t - t_0)}\, x_0
$$

E.4.1) Recap: What are Eigenvalues?

For those of you who don't know, or don't remember, what [[eigenvalues]] are.

What Are Eigenvalues?

Key Points About Eigenvalues:

  1. Nature:

    • [[Eigenvalues]] can be complex numbers (not limited to real numbers).
    • They generalize the concept of scaling factors for transformations in higher dimensions.
  2. Intuition:

    • [[Eigenvalues]] describe how the matrix A acts in the directions of its [[eigenvectors]].
    • Along the [[eigenvector]] direction, the matrix scales the vector by λ.
    • You can think of [[eigenvectors]] as "directions" and [[eigenvalues]] as "magnitudes" in those directions.

Why Are Eigenvalues Important?

How to Compute Eigenvalues?


E.4.2) Example: Matrix Systems (Saddle Points and Instability)

For the system:

$$
\dot{x} = Ax
$$

we know the solution is:

$$
x(t) = \Phi(t, t_0)\, x_0
$$

where:

$$
\Phi(t, t_0) = e^{A(t - t_0)}
$$

We also know that the matrix exponential is given by the infinite series:

$$
\Phi(t, t_0) = e^{A(t - t_0)} = \sum_{k=0}^{\infty} \frac{\left(A(t - t_0)\right)^k}{k!} = I + A(t - t_0) + \frac{A^2 (t - t_0)^2}{2!} + \frac{A^3 (t - t_0)^3}{3!} + \cdots
$$

Now...
Computing $\Phi(t, t_0)$ is computationally heavy for most $A$ matrices.

But...
There is one category of A matrices that simplifies the computations.
These are the [[diagonal matrices]].

Properties of Diagonal Matrices:
There are some interesting properties of diagonal matrices.

First, recall the definition of the [[eigenvectors]] and [[eigenvalues]] of a matrix:

$$
Av = \lambda v,
$$

which can be rewritten as:

$$
(A - \lambda I)v = 0,
$$

where the [[eigenvalues]] are calculated using:

$$
\det(A - \lambda I) = 0.
$$

It turns out that for [[diagonal matrices]], the [[eigenvalues]] sit directly on the diagonal, and the corresponding [[eigenvectors]] are the standard basis vectors.

For example, consider the following [[diagonal matrix]] $A$:

$$
A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}
$$

where $\lambda_1$ and $\lambda_2$ are the eigenvalues, and:

$$
v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
$$

is the eigenvector associated with $\lambda_1$, and:

$$
v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
$$

is the eigenvector associated with $\lambda_2$.

How does this property help us?
Well, consider the same [[diagonal matrix]] $A$,

$$
A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}
$$

As part of our model,

$$
\dot{x} = Ax
$$

The computations simplify to the form:

$$
\Phi(t, t_0) = \begin{pmatrix} 1 + \lambda_1 (t - t_0) + \frac{\lambda_1^2 (t - t_0)^2}{2!} + \frac{\lambda_1^3 (t - t_0)^3}{3!} + \cdots & 0 \\ 0 & 1 + \lambda_2 (t - t_0) + \frac{\lambda_2^2 (t - t_0)^2}{2!} + \frac{\lambda_2^3 (t - t_0)^3}{3!} + \cdots \end{pmatrix}
$$

We can notice that these polynomials are special since they fit the definition:

$$
e^{\lambda t} = 1 + \lambda t + \frac{\lambda^2 t^2}{2!} + \frac{\lambda^3 t^3}{3!} + \cdots
$$

Therefore, we can simplify even further and say:

$$
\Phi(t, t_0) = \begin{pmatrix} e^{\lambda_1 (t - t_0)} & 0 \\ 0 & e^{\lambda_2 (t - t_0)} \end{pmatrix}
$$

Note: This analytical solution will save us time from computing the infinite series.

Visualize this solution:

$$
x(t) = \Phi(t, t_0)\, x_0
$$

which simplifies to,

$$
x(t) = \begin{pmatrix} e^{\lambda_1 (t - t_0)} & 0 \\ 0 & e^{\lambda_2 (t - t_0)} \end{pmatrix} x_0
$$

when one eigenvalue is positive and the other negative, this creates a Saddle Point in the [[Phase portrait]],

Source here

This solution gives us the following intuition:

  1. The eigenvalues of the diagonal matrix $A$ ($\lambda_1$ and $\lambda_2$) directly control the behavior of the system, as seen in the exponential terms $e^{\lambda_1 (t - t_0)}$ and $e^{\lambda_2 (t - t_0)}$.
  2. Diagonal matrices simplify matrix exponentials because there is no need for off-diagonal computations, reducing computational overhead significantly.
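A small sketch (eigenvalues chosen arbitrarily, one positive and one negative, to produce a saddle):

```matlab
lambda1 = 1; lambda2 = -2;           % assumed eigenvalues: a saddle point
A = diag([lambda1, lambda2]);
t = 0.7;
Phi_general = expm(A*t);             % general matrix exponential
Phi_direct  = diag([exp(lambda1*t), exp(lambda2*t)]);
Phi_general - Phi_direct             % zero: the diagonal case has a closed form
% trajectories x(t) = Phi * x0 decay along the stable eigendirection
% and flee along the unstable one, producing the saddle-shaped phase portrait
```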

E.4.3) Example: Matrix Systems (Imaginary Eigenvalues and Center Fixed Points)

Pendulum


Source here

Inverted Pendulum: unstable

E.4.4) Stability Conditions in Matrix Systems (using Eigenvalues)

For the system,

$$
\dot{x} = Ax
$$

Conditions:

  1. Asymptotic Stability:
    • The system is asymptotically stable if and only if:

$$
\text{Re}(\lambda) < 0 \quad \forall \lambda \in \text{eig}(A)
$$

  2. Instability:
    • The system is unstable if:

$$
\exists\, \lambda \in \text{eig}(A) \;\text{s.t.}\; \text{Re}(\lambda) > 0
$$

  3. Critical Stability:
    • The system is critically stable if:
      • $\text{Re}(\lambda) \le 0 \;\; \forall \lambda \in \text{eig}(A)$, with at least one eigenvalue having $\text{Re}(\lambda) = 0$ (for example, a single zero eigenvalue, or a pair of purely imaginary eigenvalues, while the rest have negative real parts).
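A helper sketch that classifies a system by the rule above (the tolerance handling is a judgment call of this sketch):

```matlab
function s = classify_stability(A)
    % classify x_dot = A x by the real parts of the eigenvalues of A
    re = real(eig(A)); tol = 1e-9;
    if all(re < -tol)
        s = 'asymptotically stable';
    elseif any(re > tol)
        s = 'unstable';
    else
        s = 'critically stable';   % Re <= 0 with at least one Re == 0
    end
end

% e.g. classify_stability([-1 1; 1 -1]) returns 'critically stable'
```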

E.5) Key Takeaways: Stability of Linear Systems

  1. Stability depends on the eigenvalues of $A$:
    • Asymptotically Stable: $\text{Re}(\lambda) < 0 \;\; \forall \lambda$.
    • Unstable: $\exists\, \lambda$ s.t. $\text{Re}(\lambda) > 0$.
    • Critically Stable: Eigenvalues have $\text{Re}(\lambda) \le 0$, with at least one on the imaginary axis.

  2. Eigenvalues give insight into system behavior:

    • Negative real parts: Stable behavior.
    • Positive real parts: Unstable behavior.
    • Zero or purely imaginary eigenvalues: Critical stability.
  3. Design Goal: Ensure the closed-loop system has eigenvalues with negative real parts to guarantee stability.


F) Swarm Robotics Control: The Consensus Equation (🎦 video 3.6)

F.1) Recap: Stability and Eigenvalues

From the previous lecture, we learned that [[eigenvalues]] play a fundamental role in understanding the [[stability]] properties of linear systems.

Specifically:

The focus today is to use these ideas to address a fundamental problem in swarm robotics:
The Rendezvous Problem.


More info here


F.2) The Rendezvous Problem in Swarm Robotics

Problem Statement:
In swarm robotics, we have a collection of mobile robots that:

  1. Measure the relative displacement of their neighbors (e.g., $x_i - x_j$).
  2. Lack global positioning information, meaning they do not know their absolute positions.

Goal:
Achieve rendezvous: make all robots meet at the same position without pre-specifying the meeting point.

Challenge:
Robots do not know their global location (e.g., "the origin"). Instead, they must rely on local relative measurements of their neighbors.


F.2.1) The Two-Robot Case (check for stability)

For two robots ($x_1, x_2$):

  1. Define control laws (the agents simply aim towards each other):

$$
u_1 = x_2 - x_1, \qquad u_2 = x_1 - x_2
$$

  2. Write the system dynamics:

$$
\dot{x} = Ax, \qquad A = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}.
$$

  3. Eigenvalues of $A$:
    • Using MATLAB (or similar), we find:

$$
\lambda_1 = 0, \quad \lambda_2 = -2.
$$

    • One eigenvalue is 0.
    • The other eigenvalue is negative.

  4. Stability Analysis:
    • A zero eigenvalue suggests [[critical stability]].
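A minimal simulation sketch (the initial positions are arbitrary) showing the two robots meeting at the average of their starting points:

```matlab
A = [-1 1; 1 -1];
eig(A)                       % returns 0 and -2, as stated above
x = [0; 4];                  % assumed initial positions of the two robots
dt = 0.01;
for k = 1:1000               % integrate x_dot = A x with forward Euler
    x = x + dt * A * x;
end
x                            % both entries approach 2, the initial average
```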

F.2.1.1) Null Space of "A" (the rendezvous meeting point)

Did you know?: The critically stable system does not converge to the origin but instead to the null space of A.

Let's recap...
The null space of $A$ is defined as:

$$
\text{Null}(A) = \{\, x \mid Ax = 0 \,\}.
$$

That is, the [[null space]] of $A$ is the set of all vectors $x$ that satisfy $Ax = 0$.

So, when our system is,

$$
\dot{x} = Ax
$$

  1. If $\lambda_i = 0$, then $e^{\lambda_i t} = 1$, so this component remains constant.
  2. If $\lambda_i$ has a negative real part, then $e^{\lambda_i t} \to 0$ as $t \to \infty$.

Thus, all components of $x(t)$ corresponding to eigenvalues with negative real parts decay to zero over time. The only remaining component is the one associated with $\lambda = 0$, and this remaining component of $x$ lies in the [[Null space]] (where $\dot{x} = Ax = 0$).

Example: The Two-Robot Case

For two robots ($x_1, x_2$), with

$$
A = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}:
$$

$$
\text{Null}(A) = \{\, x = (\alpha, \alpha)^\top \mid \alpha \in \mathbb{R} \,\}.
$$

Interpretation: the states $x_1$ and $x_2$ evolve into the null space of $A$.

Thus, rendezvous is achieved, even though the final position ($\alpha$) is not specified in advance: it is somewhere in the [[Null Space]] of $A$.


F.2.2) Extending to Multiple Robots

General Control Law:
For multiple robots, let each robot i move toward the centroid of its neighbors (this is known as the [[consensus equation]]):

$$
\dot{x}_i = -\sum_{j \in \text{Neighbors}(i)} (x_i - x_j).
$$

Stacked System:
Stacking all robot positions into a vector x,

the system can be written as:

$$
\dot{x} = -Lx,
$$

where L is the [[Graph Laplacian Matrix]], describing the communication structure (who talks to whom).

Graph Laplacian Properties:

  1. $L$ has:
    • One zero eigenvalue ($\lambda_1 = 0$) if the graph is connected.
    • All other eigenvalues positive ($\lambda_2, \lambda_3, \ldots > 0$).
  2. Therefore $-L$ has:
    • One zero eigenvalue ($\lambda_1 = 0$).
    • All other eigenvalues negative ($-\lambda_2, -\lambda_3, \ldots < 0$).
    • Hence the multiple-robot case is critically stable.

F.2.2.1) Null Space of "L" (the rendezvous meeting point)

The null space of L determines the consensus value. If the graph is connected:

$$
\text{Null}(L) = \{\, x = (\alpha, \alpha, \ldots, \alpha)^\top \mid \alpha \in \mathbb{R} \,\}.
$$

Key Insight:

The corresponding system is critically stable, as L has the necessary eigenvalue properties.
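A sketch for three robots on a line graph (the choice of graph, 1–2–3, is an assumption for illustration):

```matlab
% Graph Laplacian of the path graph 1-2-3: L = Degree - Adjacency
L = [ 1 -1  0;
     -1  2 -1;
      0 -1  1];
eig(-L)                      % one zero eigenvalue, the rest negative
x = [0; 1; 5];               % assumed initial positions
dt = 0.01;
for k = 1:2000               % integrate x_dot = -L x
    x = x + dt * (-L) * x;
end
x                            % all entries approach the average, 2
```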


F.3) Real-World Implementations

Practical Example: two robots executing the algorithm in real time.

Control Design: Use a simple PID go-to-goal controller:

Key Result:

Other Simulation: In simulation, robots:

  1. Navigate their environment.
  2. Avoid obstacles.
  3. Achieve rendezvous or other formations.


F.4) Conclusion: Solution to the Rendezvous Problem

  1. The rendezvous problem is solved using:

    $\dot{x} = -Lx$.
  2. Critical stability ensures convergence to the null space of L.


More info here

  1. The [[consensus equation]] can be extended to solve more complex multi-robot tasks, including:
    • Splitting into subgroups.
    • Avoiding obstacles.
    • Discovering missing agents.

By building on these foundations, you can design sophisticated algorithms for swarm robotics and network control systems.


G) Attempting Linear System Stability with Output Feedback Control: Eigenvalue Analysis (🎦 video 3.7)

G.1) Recap: Stability and Eigenvalues

[[Stability of a linear system]] depends on the eigenvalues of its system matrix.

Today, we aim to achieve asymptotic stability by designing a feedback controller.


G.2) State-Space System and Output Feedback

We start with a linear system in state-space form:

$$
\dot{x} = Ax + Bu, \qquad y = Cx,
$$

where $x$ is the state, $u$ is the control input, and $y$ is the measured output.

Goal:
Design a feedback controller that uses the output y to stabilize the system.


G.3) Example1: World's Simplest Robot

We revisit the simple robot system, which is a point mass on a line:

with the states being,

remember the State Dynamics written in the form,

x˙=Ax+Bu

then we can obtain the matrices,

which is simply,

$$
\dot{x} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u, \qquad y = \begin{pmatrix} 1 & 0 \end{pmatrix} x.
$$

State Variables:

Now our job is to find a way to connect $y$ with $u$.

G.3.1) Idea 1: Create a Simple Position-Based Controller (part1)

Position-Based Control:
Our goal is to stabilize the system, meaning we want to drive the state of the robot to zero, or equivalently, to the origin.

Behavior:

in other words,
The control input $u$ should be proportional to the negative of the position $y$.

In general, this simple proportional controller can be written mathematically as,

$$
u = -Ky,
$$

where $K > 0$ is a proportional gain.
Note: $K$ is a scalar right now; later we will see in which cases $K$ can be a matrix gain.


G.3.1.1) How does this Output Feedback change the System Dynamics?

Recall the system is,

$$
\dot{x} = Ax + Bu, \qquad y = Cx,
$$

By applying the control law $u = -Ky$, we modify the system dynamics.

  1. Recall that the output is:

$$
y = Cx.
$$

  2. Rewriting the control law $u$ in terms of $x$:

$$
u = -Ky = -K(Cx),
$$

  3. Substituting $u = -KCx$ into the state equation:

$$
\dot{x} = Ax + Bu,
$$

gives:

$$
\dot{x} = Ax - BKCx.
$$

  4. Defining the new system matrix:

$$
A_{new} = A - BKC.
$$

Our next task is to analyze the eigenvalues of $A_{new}$ to determine the stability of the modified system.

Or in other words...

our task is to choose $K$ such that the eigenvalues of $A_{new}$ have strictly negative real parts.


G.3.2) Idea 1: Create a Simple Position-Based Controller (part2)

Recall the system is,

$$
\dot{x} = Ax + Bu
$$

and now with output feedback, the system is,

$$
\dot{x} = Ax - BKCx
$$

in other words,

$$
\dot{x} = A_{new}\, x
$$

Remember that for our simple robot, we chose the Control Law:

$$
u = -Ky = -Kx_1,
$$

where $K > 0$.

Substituting $u$ into the system:

$$
\dot{x} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x - \begin{pmatrix} 0 \\ 1 \end{pmatrix} K \begin{pmatrix} 1 & 0 \end{pmatrix} x.
$$

So, our robot system is now,

The new system matrix becomes:

$$
A_{new} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} K \begin{pmatrix} 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -K & 0 \end{pmatrix}.
$$
G.3.2.1) Eigenvalue Analysis (Problem with Output Feedback)

To analyze stability, compute the eigenvalues of $A_{new}$:

$$
\det(A_{new} - \lambda I) = \det \begin{pmatrix} -\lambda & 1 \\ -K & -\lambda \end{pmatrix} = \lambda^2 + K.
$$

The eigenvalues are:

$$
\lambda = \pm j\sqrt{K},
$$

where $j = \sqrt{-1}$.

Interpretation: the eigenvalues are purely imaginary, so the closed-loop system is only critically stable; the robot oscillates around the origin forever instead of settling.

Problem with Output Feedback
Although $K > 0$ keeps the position bounded, the system oscillates because the controller does not account for velocity. Specifically:

  1. When moving away from the origin, the control input u pushes toward the origin (correct behavior).
  2. When moving toward the origin, the control input u continues to push, causing overshoot (oscillatory behavior).

Insight:
We need to consider the full state ($x_1$ and $x_2$) rather than just the output ($y = x_1$) to achieve asymptotic stability.
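A quick numerical illustration (the gain value is assumed) of the sustained oscillation:

```matlab
K = 2;                          % assumed output-feedback gain
A_new = [0 1; -K 0];
eig(A_new)                      % purely imaginary: +/- j*sqrt(2)
x = [1; 0]; dt = 1e-3; T = 0:dt:20; p = zeros(size(T));
for k = 1:length(T)             % simulate (forward Euler slightly inflates the amplitude)
    p(k) = x(1);
    x = x + dt * A_new * x;
end
plot(T, p); xlabel('t'); ylabel('x_1 (position)')
% the position oscillates forever instead of settling at the origin
```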


G.4) A Better Solution: Pole Placement with Full State Feedback

Instead of output feedback ($u = -Ky$), we use state feedback:

$$
u = -Kx,
$$

where $K = \begin{pmatrix} k_1 & k_2 \end{pmatrix}$ includes gains for both position ($x_1$) and velocity ($x_2$).

Substituting into the system:

$$
\dot{x} = (A - BK)x,
$$

with:

$$
A - BK = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} k_1 & k_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -k_1 & -k_2 \end{pmatrix}.
$$

Eigenvalue Placement:
By choosing $k_1$ and $k_2$ appropriately, we can ensure all eigenvalues of $A - BK$ have strictly negative real parts, achieving asymptotic stability.


G.5) Next Steps: Design State Feedback Gain and Estimate with State Observers

  1. In the next lecture, we will:

    • Design state feedback gains ($k_1, k_2$) to place the eigenvalues of $A - BK$ in desired locations.
    • Ensure the system is asymptotically stable.
  2. For cases where the full state x is not directly measurable:

    • In Module 4 we will explore methods to estimate x from y (e.g., using [[state observers]]).

H) Stabilizing the System Using State Feedback (🎦 video 3.8)

H.1) Recap: Output feedback control and Stability

The key insight from last time was that to stabilize the system, we need the real parts of the eigenvalues to be strictly negative.

However, we couldn't achieve this using only output feedback because it relied solely on the output y and ignored the full state information x.

Output Feedback

The eigenvalues are:

$$
\lambda = \pm j\sqrt{K},
$$

where $j = \sqrt{-1}$.

Today, we assume access to the full state x and design a state feedback controller.

Let's get started!


H.2) State Feedback Control Law

  1. The state-space representation of the system is:

$$
\dot{x} = Ax + Bu,
$$

where $x$ is the state vector and $u$ is the control input.

  2. We propose the following state feedback control law:

$$
u = -Kx,
$$

where $K$ is a matrix of feedback gains. This matrix determines how strongly we react to each state variable.

  3. Substituting $u = -Kx$ into the system dynamics:

$$
\dot{x} = Ax + Bu = Ax + B(-Kx),
$$

simplifies to:

$$
\dot{x} = (A - BK)x.
$$

  4. The new system matrix:

$$
\hat{A} = A - BK,
$$

is called the [[closed-loop dynamics matrix]].

Our task is to design $K$ such that all eigenvalues of $\hat{A}$ have strictly negative real parts to achieve asymptotic stability.


H.3) Example: State Feedback for the Simplest Robot

Let's revisit the simplest robot system, which is a point mass on a line:

with the states being,

remember the State Dynamics written in the form,

$$
\dot{x} = Ax + Bu
$$

then we can obtain the matrices,

Its dynamics are:

$$
\dot{x} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u.
$$

Here, x represents the position and velocity of the robot:

The control input u is the acceleration.

H.3.1) Designing the Feedback Gain

For this system:

If we set:

$$
K = \begin{pmatrix} k_1 & k_2 \end{pmatrix},
$$

then the closed-loop dynamics matrix becomes:

$$
\hat{A} = A - BK = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} k_1 & k_2 \end{pmatrix}.
$$

Simplifying:

$$
\hat{A} = \begin{pmatrix} 0 & 1 \\ -k_1 & -k_2 \end{pmatrix}.
$$

in other words, the closed-loop dynamics of the system are now the following,


H.3.2) Analyzing the System with Specific Gains (Trial-and-Error)

In Module 4, we will pick gains in a systematic manner, but for now let's do trial & error,

Case 1: $k_1 = 1$, $k_2 = 1$
For $k_1 = k_2 = 1$, the closed-loop matrix becomes:

$$
\hat{A} = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}.
$$

The eigenvalues of this matrix can be computed (e.g., using MATLAB or by solving the [[characteristic equation]]) and are:

$$
\lambda_{1,2} = -0.5 \pm 0.866j
$$

Case 2: $k_1 = 0.1$, $k_2 = 1$
Now, let's reduce $k_1$ to 0.1 (a less aggressive reaction to position).

Note: comparison with a [[PD Controller]]

$k_1$ tells me how much I react to position, and $k_2$ tells me how much I react to velocity.

$$
K = \begin{pmatrix} k_1 & k_2 \end{pmatrix},
$$

You can almost think of this as a [[PD regulator]]: P acts on the position, and D acts on the derivative of position, which is the velocity that $k_2$ affects.

The closed-loop matrix becomes:

$$
\hat{A} = \begin{pmatrix} 0 & 1 \\ -0.1 & -1 \end{pmatrix}.
$$

The eigenvalues of this matrix are:

$$
\lambda_{1,2} = -0.8873, \; -0.1127.
$$
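Reproducing both cases numerically:

```matlab
A = [0 1; 0 0]; B = [0; 1];
K1 = [1 1];    eig(A - B*K1)   % -0.5 +/- 0.866j: stable but oscillatory
K2 = [0.1 1];  eig(A - B*K2)   % -0.8873, -0.1127: no oscillation, one slow mode
```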
H.3.2.1) Key Observations: Eigenvalues and Behavior
  1. Eigenvalues Determine Behavior:

    • Negative real parts: System is stable.
    • Imaginary components: Oscillations are present.
    • Real parts closer to zero: slower response.
    • More negative real parts: faster response.
  2. Trade-offs in Design:

    • Increasing k1 and k2 reduces oscillations but may slow down the system.
    • Decreasing k1 and k2 can speed up the response but might introduce instability or excessive oscillations.
  3. Dimensional Analysis:

    • For $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$, the feedback gain $K$ must have dimensions $m \times n$.

H.4) Next Steps: Goals for Module 4

In the next module:

For now, we have seen that state feedback control is a powerful method for stabilization, provided we have full access to the system state x.


I) Glue Lecture 3 - Systems and State-Space Representation (🎦 video 3.9)

I.1) Systems Overview - What is a System?

In this course, we define systems by how inputs relate to outputs.

This is represented by the state-space form:

$$
\dot{x} = Ax + Bu, \qquad y = Cx,
$$

where $x$ is the state, $u$ is the input, and $y$ is the output.

The goal is to:

  1. Understand inputs and outputs and their relationship to the system.
  2. Convert a second-order system into state-space form.
  3. Linearize a nonlinear second-order system.

I.2) Example 1: Converting a Linear Second-Order System to State-Space Form

Given a [[second-order differential equation]]:

$$
m\ddot{f} = \alpha \dot{f} + \beta f + c\, p,
$$

in general we can see that:

I.2.1) Step 1: Choose State Variables & define Inputs and Outputs

We select:

I.2.2) Step 2: Rewrite Second-Order as a pair of First-Order Equations

Using the definitions of $x_1$ and $x_2$,

$$
\dot{f} = \dot{x}_1 = x_2, \qquad \ddot{f} = \dot{x}_2 = \frac{\beta}{m} x_1 + \frac{\alpha}{m} x_2 + \frac{c}{m} u.
$$

in other words,

I.2.3) Step 3: Represent in State-Space Form

The equations can be expressed as:

$$
\dot{x} = Ax + Bu
$$

where:

$$
A = \begin{pmatrix} 0 & 1 \\ \frac{\beta}{m} & \frac{\alpha}{m} \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ \frac{c}{m} \end{pmatrix}.
$$

The output equation $y = f = x_1$ can be written as:

$$
y = Cx,
$$

where:

$$
C = \begin{pmatrix} 1 & 0 \end{pmatrix}.
$$

Thus, the system is represented in state-space form:

$$
\dot{x} = \begin{pmatrix} 0 & 1 \\ \frac{\beta}{m} & \frac{\alpha}{m} \end{pmatrix} x + \begin{pmatrix} 0 \\ \frac{c}{m} \end{pmatrix} u, \qquad y = \begin{pmatrix} 1 & 0 \end{pmatrix} x.
$$
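A numeric instance of this construction (the parameter values are made up for illustration):

```matlab
m = 2; alpha = -0.5; beta = -3; c = 1;   % assumed illustrative parameters
A = [0 1; beta/m alpha/m];
B = [0; c/m];
C = [1 0];
eig(A)      % with these (negative) alpha and beta, both real parts are negative
```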

I.3) Example 2: Linearizing a Nonlinear System

Consider a [[nonlinear second-order differential equation]]:

$$
\ddot{z} = -\gamma \dot{z} - l z^2 + c \tau
$$

Now we need to linearize it around the equilibrium point $x = 0$ (meaning $z_o = 0$ and $\dot{z}_o = 0$).

I.3.1) Step 1: Choose State Variables & define Inputs and Outputs

We select:

I.3.2) Step 2: Rewrite Second-Order as a pair of First-Order Equations

Using the state variables:

$$
\dot{x}_1 = x_2, \qquad \dot{x}_2 = -\gamma x_2 - l x_1^2 + c u.
$$

in other words,

I.3.3) Step 3: Linearize Around an Operating Point (use the Jacobian Matrix)

At the operating point $(x_1, x_2) = (0, 0)$, we use the [[Jacobian Matrix]],

where,
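Filling in this step (the Jacobian of $f(x, u) = (x_2,\; -\gamma x_2 - l x_1^2 + c u)$ with respect to $x$, evaluated at the origin):

$$
A = \frac{\partial f}{\partial x}\bigg|_{(0,0)} = \begin{pmatrix} 0 & 1 \\ -2 l x_1 & -\gamma \end{pmatrix}\bigg|_{x_1 = 0} = \begin{pmatrix} 0 & 1 \\ 0 & -\gamma \end{pmatrix}
$$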


Notice: the nonlinear term $x_1^2$ vanishes (its derivative $-2 l x_1$ is zero at $x_1 = 0$).

we do the same with B,

The linearized equations become:

$$
\dot{\delta x}_1 = \delta x_2, \qquad \dot{\delta x}_2 = -\gamma\, \delta x_2 + c\, \delta u.
$$

I.3.4) Step 4: Represent in State-Space Form

Now we have a linearized system around the Operating Point: $z_o = 0$, $\dot{z}_o = 0$, $u_o = 0$.

$$
\dot{\delta x} \approx A\,\delta x + B\,\delta u, \qquad \delta y \approx C\,\delta x
$$

where:

$$
A = \begin{pmatrix} 0 & 1 \\ 0 & -\gamma \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ c \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & 0 \end{pmatrix}.
$$

and $\delta x, \delta u, \delta y$ are the differences with respect to the Operating Point.

we can expand this model like this,

$$
\dot{\delta x} = \begin{pmatrix} 0 & 1 \\ 0 & -\gamma \end{pmatrix} \delta x + \begin{pmatrix} 0 \\ c \end{pmatrix} \delta u, \qquad \delta y = \begin{pmatrix} 1 & 0 \end{pmatrix} \delta x
$$

I.4) Key Insights about Module 3

  1. State-Space Representation:

    • Convert second-order differential equations into first-order form.
    • Identify the matrices A,B,C for state-space representation.
  2. Linearization:

    • Linearize nonlinear systems around a specific operating point.
    • Compute the A-matrix by taking partial derivatives with respect to the state variables.
  3. Matrix Dimensions:

    • Ensure consistency in dimensions when setting up A,B,C.

J) Programming and Simulation: Go-to-Goal Controller (🎦 video 3.10)

Video: https://youtu.be/5ZFk8MJsJeg

J.1) Overview: Go-to-Goal Controllers

This lecture focuses on Go-to-Goal (GTG) Controllers, used to steer mobile robots from Point A to Point B.

In this week’s programming assignments, you will implement a PID-based Go-to-Goal controller.

This involves:

  1. Implementing Proportional (P), Integral (I), and Derivative (D) terms of the PID controller.
  2. Adjusting the gains for optimal performance.

Key Notation:


J.2) Controller Design (Go-To-Goal)

J.3) PID Controller Implementation (Key Variables)

  1. Memory for PID terms:
    • accumulated_error (E_k): Tracks the total error for the Integral term.
    • previous_error (e_k_1): Tracks the error from the previous step for the Derivative term.

  1. Gains for PID terms:
    • kp: Proportional gain.
    • ki: Integral gain.
    • kd: Derivative gain.


J.3.1) Skeleton Code for execute Function

The execute function in GoToGoal.m is responsible for:

  1. Calculating the heading to the goal and the error between the robot's orientation ($\theta$) and the desired heading ($\theta_g$).
  2. Computing the PID terms and combining them to calculate ω (angular velocity).
  3. Saving the error values for the next step.

Code Block:

function [v, w] = execute(obj, x, y, theta, xg, yg, linear_velocity)
    % 1. Compute the heading to the goal
    theta_g = atan2(yg - y, xg - x); % Angle to goal
    error = theta_g - theta;         % Orientation error

    % Normalize the error
    error = atan2(sin(error), cos(error));

    % 2. Compute the PID terms
    proportional_term = obj.kp * error;

    % Integral term (update accumulated error)
    obj.accumulated_error = obj.accumulated_error + error;
    integral_term = obj.ki * obj.accumulated_error;

    % Derivative term (calculate difference with previous error)
    derivative_term = obj.kd * (error - obj.previous_error);

    % 3. Compute angular velocity
    w = proportional_term + integral_term + derivative_term;

    % 4. Save errors for next step
    obj.previous_error = error;

    % Output linear velocity and angular velocity
    v = linear_velocity;
end
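Note: this sketch accumulates the raw error once per control step; if the simulator exposes a timestep dt, the integral term would typically be scaled by dt and the derivative term by 1/dt.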

J.4) Demo: PID Controller Behavior

J.4.1) Desired vs. Actual Behavior (Output Graph)

Example graph output:

Key objectives:

  1. Minimize the difference between the blue and red lines.
  2. Adjust gains to:
    • Minimize overshoot.
    • Reduce oscillations.
    • Ensure stability.

J.4.2) How to Stop the Robot at the Goal: Stop Condition and Goal Adjustment


J.5) Summary: Mobile Robot Go-To-Goal Controller - PID

  1. Implementing PID:
    • Implement all three PID terms in the execute function.
    • Use memory variables to track errors for the Integral and Derivative terms.
  2. Testing and Optimization:
    • Test your controller with the provided stop condition and adjustable goal.
    • Tune the PID gains (kp, ki, kd) to achieve minimal overshoot and smooth performance.
  3. Expected Results:
    • The robot should steer smoothly to the goal with minimal oscillations and overshoot.
    • The angular velocity (ω) computed by the PID controller aligns the robot's orientation with the desired goal direction.

K) Hardware Notes (🎦 video 3.11)

Video: https://youtu.be/-jjnv6qkQ_8


🔙 Previous Part | Next Part 🔜

↩️ Go Back


Z) 🗃️ Glossary

Uncreated files referenced in this note (origin note: Module 3 - Introduction to Linear State-Space Control Theory):

  • Asymptotic Stability
  • characteristic equation
  • closed-loop dynamics matrix
  • consensus equation
  • Convolution
  • Convolution Integral
  • Critical Stability
  • diagonal matrix / diagonal matrices
  • eigenvalue / eigenvalues
  • eigenvector / eigenvectors
  • first-order linear (ordinary) differential equation
  • Full State Feedback Control
  • General State-Space Equations
  • Graph Laplacian Matrix
  • homogeneous linear system / homogeneous system
  • Instability
  • Jacobian / Jacobians / Jacobian Matrix
  • Leibniz's Integral Rule
  • linear combination
  • linear systems
  • Linear Time-Invariant (LTI) Systems / LTI systems
  • Matrix Diagonalization
  • matrix exponential / Matrix Exponentials
  • Matrix-vector multiplication
  • non-linear model / non-linear system / nonlinear systems
  • nonlinear second-order differential equation
  • Null Space / null space
  • odometry
  • Ordinary Differential Equations (ODE)
  • PD Controller / PD regulator
  • Phase portrait
  • power rule
  • scalar exponential
  • second-order differential equation
  • separable differential equation
  • Stability / Stability of a linear system / Stability of Linear Systems
  • state observers
  • State Transition Matrix
  • Taylor Series