Module 1 - Introduction to Controls and Dynamical Models


#Physics #Engineering #Control_Theory #Robotics

Previous Part | Next Part 🔜
↩️ Go Back

Table of Contents:


A) Introduction: Mobile Robots and Control Theory (🎦 video 1.1)

"Welcome to the course, my name is Magnus Egested and I'm a professor at Georgia Tech in the school of Electrical and Computer Engineering"

The focus of this course is making mobile robots move in effective and safe ways.

Meaning that they don't slam into obstacles; they move smoothly and nicely, without weird oscillations in their motion.

The trick to achieve this is something called "Control Theory". This first module of the course is going to focus on "controls" and "control theory"

"But first let me talk you about my own research....
|170

My research hammer is Control Theory; it deals with how you make things move or behave well."

This could be:

Control Theory is not only for robotics. It's a general language for describing how things behave.

A.1) Types of Robotics (examples)

"Control Theory is my hammer, but my nail is Robotics. I want robots to be elegant and useful.

In my lab we do 3 types of robotics:"

A.1.1) Swarm Robotics

1) Swarm Robotics, this is how do you get lots and lots of robots to do exciting things together.

Note

Swarm Robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots.

The expectation is that a desired collective behavior emerges from the interactions between the robots and between the robots and the environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants, and other systems in nature where swarm behavior occurs.


A.1.2) Behavior Based Robotics

2) Behavior Based Control (or Behavior-based robotics (BBR)), we will see quite a bit of this in this course.

A Behavior is basically a controller that is in charge of some task like:


The question is, how do you design these behaviors and how do you patch them together?

Note

Behavior-based robotics (BBR), or behavioral robotics, is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite little internal variable state to model their immediate environment, mostly gradually correcting their actions via sensory-motor links.


A.1.3) Field Robotics

3) Field Robotics, this is how do you take robots and put them out in the unstructured, strange world outside of the lab.

Meaning, if you have a robot driving through a forest, all of a sudden there are no corridors, hallways or doors and life becomes more complicated.

Note

Field Robotics aims to bring technologies that allow autonomous systems to assist and/or replace humans performing tasks that are difficult, repetitive, unpleasant, or take place in hazardous environments.


A.2) Using Control Theory in Robotics

Control Theory plays a key role when we want to learn how to control systems. First we will learn it in general (without robotics).

But then we are going to study robotic applications by looking at robot models (because not all robots are the same).

Some robots are:

And different types of robots require different types of controllers.


B) Course Content

What is in the course?

We are going to look in general at Mobility, meaning: how do you make robots move?
(That is the main question we are interested in throughout this course.)

And then we will end with applications.


What is NOT in the course?

  1. The course is not about "Artificial Intelligence", which is the high-level planning. In other words, what should the robots actually be doing?

For us, those high-level questions are assumed to be answered already.

  2. We are not going to deal much with "Perception", meaning how raw sensor data is turned into useful information about the world.

We are going to assume that someone else has done this, and we're going to act on the perceptual data that we already have.

  3. This course is not a "mechanical engineering course", meaning we are not going to be building robots or dealing with different kinds of robotic designs.

Note

However, we will indeed be looking at some hardware aspects of robotics.


B.1) Course Structure

C) Install the GUI Layout Toolbox for MATLAB


Original Link: https://www.mathworks.com/matlabcentral/fileexchange/47982-gui-layout-toolbox
Alternative Link: https://github.com/giandbt/Control-of-Mobile-Robots/tree/master/Tools

C.1) Learn More About MATLAB


https://www.mathworks.com/learn/tutorials/matlab-onramp.html


A) What is Control Theory anyway? (🎦 video 1.2)

Video

Control theory is an interdisciplinary field of engineering and mathematics which deals with the behavior of Dynamic Systems. The input of a system is called the reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the inputs to the system to obtain the desired effect on the output of the system (this is feedback).

So remember,

For example: robots, epidemics, stock markets, thermostats, circuits, engines, power grids, autopilots, etc.


A.1) The Basic Building Blocks of a Control System

Let's start with trying to build up a "Control System" in terms of the basic needed building blocks.

A.1.1) The State of the System (x)

1st) We need some way of describing what the system is doing or where it is (in its behavior). To do so, we define the "state" of the dynamic system.

Remember

The State is a representation of what the system is currently doing.

And we are going to use an "x" to describe what the state of the system is.

The State could be:

A.1.2) The Dynamics of the System

2nd) Now that we know what the state of a system is, the description of the change of the state as a function of time is known as the "dynamics" of the system.

It tells us what the system is actually doing.

Remember

The Dynamics is a description of how the state changes


A.1.3) The Reference Signal (r)

3rd) Now that we have the building blocks of a dynamical system, we want some way of influencing its behavior so that it does what we want. We are going to introduce a "reference signal" in order to tell the system what it is we want it to do.

Remember

The Reference is what we want the system to do.

And we are going to use an "r" for the reference

The reference could be:


A.1.4) The Output of the System (y)

4th) How can I know if the behavior of my system is close to the reference signal? We need to measure the "output" of the system; then we can compare.

Remember

The Output is the measurement of (some aspects of) the system

And we are going to use a "y" for the output.

Note: Since we can't always measure the "state", the "outputs" are the things that we are able to get out of the system, the things we can actually measure.


A.1.5) The System Input (u)

5th) Now we need some way of mapping "reference signals" into actual "control signals" (also known as "inputs").

Remember

The Input is the control signal

And we are going to use "u" to for the input which is going to take the reference and produce a control signal that then hits the state of the system.

A.1.6) The Feedback and the Error Signal

6th) A good control design uses the measurements of the system in order to produce a proper control signal. So we need a final building block known as "feedback"

Remember

The Feedback is the mapping from outputs to inputs

We can take the difference between the reference signal and the output signal and produce an "error signal", $e = r - y$, which can then be translated into a control signal.

This is known as a "Closed-loop system"


A.1.7) Summary - The Basic Building Blocks


B) On the Need of Mathematical Models (🎦 video 1.3)

Video

We have seen that "controls" deals with "dynamic systems" in generality, and robotics is one facet of this.

Now, we will make sense of what this means with a precise language: mathematics. And to do this, we are going to need "models".

Models are an approximation, an abstraction, of what the actual system is doing. And the "control design" is going to be done in relation to that model and then deployed on the "real system".

The main question in "Control Theory" is:

B.1) Main "Performance Objectives" in Control Design

There are many objectives we can achieve when choosing a control signal (we will list them from most critical to least critical).

  1. Stability Objective:  loosely speaking, it's when the system doesn't blow up. If you design a controller that makes the system go unstable, then no other objective matters because you have failed.

  2. Tracking Objective: which means we can make the system output "y" be equal to the reference "r". This means we can make robots follow paths and self driving cars follow roads.

  3. Robustness Objective: this means that our controller (that was designed with a model), also works with the real system.

Note

Remember that models are never going to be perfect, so we can't overcommit to a particular model and we can't have our controller be too highly dependent on what the particular parameters of a model are.

We need to design controllers that are "immune" to variations across parameters in the model.

  4. Disturbance Rejection Objective: similar to robustness, the real world brings more complications. The controller has to be able to overcome at least reasonable disturbances in order to be effective.

These disturbances can be:

  5. Optimality Objective: this answers the question - how do we do something the best possible way? And "best" can mean different things, like being the fastest or using the least amount of fuel.


B.2) Dynamical Models

So… What do these models look like?

B.2.1) Models in Discrete Time

Let's start in "Discrete time",

We can describe $x_{k+1}$ (the state of the system at instant $k+1$) as a function of its previous state $x_k$ and of the input $u_k$:

$$x_{k+1} = f(x_k, u_k)$$

We can understand this with the simplest example of a "Discrete Time System", a clock, whose state simply advances one tick per step: $x_{k+1} = x_k + 1$.

We can also plot this system.

It is a trivial system, but it is a difference equation nonetheless, and it changes over time.

Remember

Dynamics = Change Over Time

B.2.2) Models in Continuous Time

But we have a problem...

The laws of physics are all in "Continuous time".

Instead of describing the "next" state, we need derivatives with respect to time.

We can now describe the instantaneous rate of change of $x$ (the state of the system) at any instant $t$: $\dot{x}(t) = f(x(t), u(t))$.

Now, what would be the "Differential Equation" of our clock example?

It would be very simple: we would say that the rate of change of a clock is 1 second each second, or, with Newton's (dot) notation,

$$\dot{x} = 1$$

And we can also plot one solution of this differential equation. If we take the initial condition of the clock to be zero, $x(0) = 0$, the solution is simply $x(t) = t$.

When we work with dynamic models, we will almost always need a continuous-time model, because nature is continuous.

But our implementation with robots and computers run in "discrete time", so we need to know how to go from continuous time to discrete time.


B.3) From Continuous to Discrete

We need to sample our continuous model in some time interval δt
(also known as "Sample time").

In other words,

So the main question is: what is $x_{k+1}$?

We can approximate $x_{k+1}$ with a Taylor series expansion around $t_k$:

$$x(t_k + \delta t) \approx x(t_k) + \delta t\,\dot{x}(t_k) + \text{higher-order terms}$$

And if we use the first-order term only, we get a discrete-time model from the continuous-time model:

$$x_{k+1} = x_k + \delta t\, f(x_k, u_k)$$

And this is how we are going to have to take things we do in continuous time and map it onto the actual implementations of computers that ultimately run in discrete time.
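
As a quick illustration of this mapping, here is a small Python sketch of the Euler step $x_{k+1} = x_k + \delta t\,f(x_k, u_k)$, using the clock ($\dot{x} = 1$) as the toy model; the function names are invented for the example:

```python
# First-order (Euler) discretization of a continuous model x' = f(x, u):
# x_{k+1} = x_k + dt * f(x_k, u_k). The model f here is illustrative.

def f(x, u):
    return u  # toy continuous dynamics: x' = u

def discretize_step(x_k, u_k, dt):
    """One step of the first-order Taylor (Euler) approximation."""
    return x_k + dt * f(x_k, u_k)

# The clock example: x' = 1 (one second per second), sampled at dt = 1.
x = 0.0
for k in range(5):
    x = discretize_step(x, 1.0, dt=1.0)
print(x)  # 5.0 -- the clock has advanced five seconds
```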


C) Cruise Controller - Example (🎦 video 1.4)

Video

Now we have a way of describing "Dynamical Systems" with "Differential Equations" in "Continuous Time", or "Difference Equations" in "Discrete Time".

Let's do something interesting and build a "Cruise Controller" for a car.

The job of a "Cruise Controller" is to make a car drive at a desired speed "r"

C.1) Defining the Model for a Car

We can model the car using the laws of physics, and we can use Newton's 2nd Law to describe how a car (which has mass) behaves:

$$F = ma$$

Now, what is the state of the system?
We need to somehow relate Newton's 2nd Law to the system and its state.

In this case, since we want the "Cruise Controller" to control the speed of the car, we can say that the state x is the speed or velocity of the car.

Important Note

With this example, we are going to work with "speed"; we don't care much about the direction of the car.

But how does this relate with Newton's 2nd Law?

From the equation $F = ma$, we know that the acceleration is the derivative of velocity (the derivative of our state $x$), i.e. $a = \dot{x}$.

The equation $F = ma$ describes how the system (the car) behaves given a specific force $F$.
So we can use $F$ to control the speed of the car.

We can define $F$ as

$$F = c\,u$$

With this equation we are mapping "stepping on the gas or the brake" (the input $u$) onto the "force" that is applied to the car, through a linear relationship, where "c" is the electro-mechanical transmission coefficient.

But…

Remember, our control design cannot rely on us knowing "c", otherwise it won't be Robust. We can't have our controller be too highly dependent on what the particular parameters of a model are.

The same argument applies to the mass "m" of the car (the number of passengers can change, etc.).

Let's now write the differential equation of our system.

We know that $F = m\dot{x}$ and $F = cu$.

And if we equate the two equations of force that we have, we get

$$\dot{x} = \frac{c}{m}u$$
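
A tiny code rendering of this model; the constants below are invented for illustration (the whole robustness discussion is precisely that, in practice, we don't know them):

```python
# Toy car model x' = (c/m) * u. The values of c and m are made up.

C_TRANSMISSION = 50.0   # electro-mechanical transmission coefficient (assumed)
MASS = 1000.0           # mass of the car in kg (assumed)

def car_dynamics(x, u):
    """Rate of change of the car's speed for throttle/brake input u."""
    return (C_TRANSMISSION / MASS) * u

print(car_dynamics(0.0, 100.0))  # 5.0 (units of acceleration)
```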

C.2) Defining the Control Signal

  1. Assume we can measure the velocity of the car.

  2. The control signal should be a function of the error signal "e", where $e = r - x$.

  3. Question: What properties should the control signal have for our application?

C.3) Control Design Basics - Comparing Control Designs (🎦 video 1.5)

Video

Let's continue with our "Cruise Controller" example and design a few controllers and compare them.

We have a "Car Model",

And we want the car to match the reference speed,

Note: In control design, we talk about "Asymptotic Properties", than means when t

which implies that,

We can also see it in the block diagram,

C.3.1) Attempt 1 - Bang-Bang Controller

The first idea is to always push as hard as we can: full gas ($u = u_{max}$) when we are below the reference speed, and full brake ($u = -u_{max}$) when we are above it.

Is this control strategy going to work?

Let's run a simulation with a reference speed r = 70


It works beautifully!
The system is stable, and we have reached the desired speed without any overshoot.

This type of control is known as "Bang-Bang Control" because we are switching between two extremes (full gas and full brake).

Are we done? Not really. Let's look at the control effort, i.e. our control signal:


This is BAD!
First we accelerate until we reach the speed of 70, and then we start switching wildly between $u_{max}$ and $-u_{max}$.

Why? Because the system is too sensitive to any change in the measured speed, and it will react with full gas or full brake to compensate.

This would be a bumpy ride, and we would burn out the actuators.

Conclusion

This is not a good control design.
Problem: the controller over-reacts to small errors.
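
A short simulation sketch makes the chattering visible. The model and all constants are illustrative, not the course's simulation:

```python
# Bang-bang cruise controller: always full gas or full brake.
# Euler simulation of the toy car model x' = (c/m)*u.

c_over_m, u_max, r, dt = 0.05, 100.0, 70.0, 0.05
x, switches, u_prev = 0.0, 0, 0.0
for _ in range(2000):
    e = r - x
    u = u_max if e > 0 else -u_max   # switch between the two extremes
    switches += (u != u_prev)        # count how often the control flips
    u_prev = u
    x += dt * c_over_m * u           # discretized car model
print(f"final speed ~ {x:.1f}, control switched {switches} times")
# The speed reaches ~70, but the control then flips every single step.
```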

C.3.2) Attempt 2 - P-Regulator

This control design solves our previous problem, because a small error yields a small control signal:

$$u = k_p e$$

It gives a nice and smooth behavior. This is known as a "P-Regulator", where "P" stands for "proportional", because the input "u" is directly proportional to the error through a control gain $k_p$.

Is this control strategy going to work?

Let's run a simulation with a reference speed r = 70


It is nice, smooth and stable!

But there is a problem: the car never reaches a velocity of 70.
Even though the response is smooth, we end up having a Steady-State Error, because we don't reach the reference.

Why did this happen?
Well, the problem is actually in our car model. We forgot an important force that affects cars: Wind Resistance

C.3.2.1) P-Regulator: Stability but Not Tracking

The "real" model is now augmented to include a wind-resistance term,

This is the model used in the previous simulations, and it is the reason why the P- Regulator failed with our Tracking Objective.

|400

Let's analyse our model at "steady state", in other words, when x doesn't change anymore: $\dot{x} = 0$.

Remember that $u = k_p e = k_p(r - x)$.

Therefore,

$$0 = \frac{c}{m}k_p(r - x) - \gamma x$$

Now we can solve for x and see why we have a steady-state error:

$$x = \frac{c\,k_p}{c\,k_p + \gamma m}\, r$$

x will always be smaller than r, since r is being multiplied by a number smaller than 1.

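A quick numeric check of this steady-state formula (all values invented for illustration):

```python
# Steady-state value of the P-regulated car with wind resistance:
# x' = a*u - gamma*x with a = c/m, and u = kp*(r - x)
#   =>  x_ss = r * a*kp / (a*kp + gamma)

a, gamma, kp, r = 0.05, 0.01, 1.0, 70.0
x_ss = r * (a * kp) / (a * kp + gamma)
print(f"steady-state speed = {x_ss:.1f} (reference was {r})")
# -> 58.3: the multiplier a*kp/(a*kp + gamma) is always < 1, so x_ss < r
```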


C.3.2.2) Control Theory: Recall the "Performance Objectives" (🎦 video 1.6)

Video

Let's recall the first three "Performance Objectives" we have in control and see how we can improve our "P-Regulator"

A controller should provide:

  1. Stability to the system - also known as BIBO (Bounded-Input, Bounded-Output) Stability.

In other words, if the input is reasonable, our system doesn't blow up.

Our P-Regulator can do this, so that is a win!

  2. Tracking - means we should get to the reference value that we want (or at least close enough, within a 2% error).

Our P-Regulator fails at this because the error at steady state is too large.

The reference is at 70 and we can only achieve 58 due to wind resistance.

  3. Robustness - means we shouldn't have to know much about parameters that we really have no way of knowing. And preferably we should be able to fight noise as well.

So, let's try to achieve "Tracking" with a new controller.

C.3.3) Attempt 3 - Forcing a Tracking Controller

The idea is to add a term to the controller that cancels the wind resistance at the reference, for example $u = k_p e + \frac{\gamma m}{c} r$.

What is this new term doing? Let's analyse at steady state, $\dot{x} = 0$.

Remember that $\dot{x} = \frac{c}{m}u - \gamma x$.

Therefore, if we substitute for u,

$$0 = \frac{c\,k_p}{m}(r - x) + \gamma r - \gamma x = \left(\frac{c\,k_p}{m} + \gamma\right)(r - x) \quad \Rightarrow \quad x = r$$

Tracking achieved! But we have lost "Robustness" with this control approach. Just take a look at our controller "u".

All of a sudden we have to know all these physical parameters that we don't actually know, so this is not a robust control design.

Let's go back to our "P-Regulator" and see what we can learn.

What is actually happening is that the proportional error is doing a fine job pushing the system up close to where it should be, but then it can't push hard enough to overcome the effect of wind resistance.

The error starts to accumulate over time. If we could "collect" all these errors over time and use them in our controller, we would eventually be able to speed up the car.

Spoiler Alert

We can use an "integral" and integrate e(t) in an interval of time, let's say from τ=0 (when we turn on the controller) to τ=t (the current time)

C.3.4) Attempt 4 - PI-Regulator

The PI control law is

$$u(t) = k_p e(t) + k_I \int_0^t e(\tau)\,d\tau$$

Note

The PI-Regulator is one of the most common regulators found anywhere in the world. In fact, it accounts for almost 2/3 of all commercial-grade cruise controllers.

Let's see its time response,

It works perfectly!

The system behaves quickly, nice and smoothly.

The car reaches the reference of 70 mph.

It is stable, it can track the reference, and it is robust, since we don't need to know the parameters of the car.
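
Here is a sketch of a PI loop on the wind-resistance model, with the same invented constants as the sketches above, showing the steady-state error disappearing:

```python
# PI cruise controller on the wind-resistance model x' = a*u - gamma*x.
# Gains and model constants are illustrative.

a, gamma, r, dt = 0.05, 0.01, 70.0, 0.05
kp, ki = 1.0, 0.1
x, E = 0.0, 0.0                      # E accumulates the error over time
for _ in range(20000):
    e = r - x
    E += dt * e                      # integral term: "collect" the error
    u = kp * e + ki * E              # PI control law
    x += dt * (a * u - gamma * x)    # discretized car model
print(f"final speed ~ {x:.2f}")      # reaches ~70: no steady-state error
```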

Note

PI regulators can induce unwanted oscillations in some systems, that is why we also have:
PID - Regulators

This is an extremely useful controller that shows up a lot. It is a great control structure, and we are going to get quite good at designing PID regulators.


C.4) PID Control (🎦 video 1.7)

Video

Last time, we saw that the PI-Regulator (or its slightly more elaborate sibling, the PID-Regulator) was enough to make the cruise controller achieve Stability, Tracking, and Robustness.

Now we will see what the effects of the controller gains are, and why the PID-Regulator is such an important regulator in virtually every industry.

C.4.1) Proportional Gain


C.4.2) Integral Gain


C.4.3) Derivative Gain


C.5) Fun Facts of the PID

So… PID is by far the most used "low-level controller"

You may ask: What is the difference between a low level control and a high level control?

Controllers that directly drive the hardware are low-level controllers whereas those that implement logical decision-making are high-level controllers.

The terms "high" and "low" are relative. Nested controllers ensure abstraction and code modularity.

You may also ask: Why does PID work with many systems so well?

Because feedback has a remarkable ability to fight uncertainty in model parameters

warning

With PID, stability is not guaranteed

C.6) Tuning the PID Gains for the Cruise Controller (Trial and Error)

C.5.1) Tuning Kp

We start by tuning the P-Regulator,

Say $K_p = 1$:

We see that the system is stable, but we have no tracking, since we can't reach r = 1.

Let's Tune now the integral gain.

C.6.2) Tuning Ki

Tuning the integral gain we get,

Say Ki = 1,

The system works great! We are done.

But what happens if we increase the integral gain? 🤔
Can we do better? 🤔

Let's try with $K_i = 10$:

Now my system starts oscillating.

So this is an example of where the integral part may actually cause oscillations.

If we see oscillations, it may be an indicator that Ki is too high.
Be careful!
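
A small sweep over the integral gain (same illustrative model as the sketches above) shows how larger Ki produces larger overshoot and oscillation:

```python
# Sweep the integral gain to see when oscillations set in.
# Same toy car model as before; only ki changes.

def overshoot(kp, ki, a=0.05, gamma=0.01, r=70.0, dt=0.05, steps=40000):
    x, E, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = r - x
        E += dt * e
        x += dt * (a * (kp * e + ki * E) - gamma * x)
        peak = max(peak, x)
    return peak - r                  # how far the speed climbs above r

for ki in (0.05, 0.5, 5.0):
    print(f"ki={ki}: overshoot = {overshoot(1.0, ki):.1f}")
# Larger ki -> larger overshoot and oscillation before settling.
```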

C.6.3) Tuning Kd

Now let's tune the "D" part,

Let's try with Kd= 0.1,


In this case, it doesn't actually matter too much.

Notice Kd is kept small because we know the derivative term can make the system sensitive to noise.

So, how is this different from the PI - Regulator?

Notice we are getting a faster initial response, but then it is slower towards the end. So maybe a PID is not the best for our particular application.

This is some of the thinking that goes into tweaking PID regulators; in fact, there are many methods for tuning the gains of a PID.

So what we are going to do next time is go from this rather abstract math to something that we can actually implement. And we are going to use a PID to control the altitude of a hovering quadrotor.

C.6.4) Extra Content 🎦 - How to Tune a PID Controller by RealPars

Video

#PLC #PID #Control_Theory

↩️ Go Back

Table of Contents:


A) Introduction - PID Controllers

PID is an acronym from "Proportional", "Integral", and "Derivative".

A PID Controller is a device that is used to control a process.

The controller can be a physical, stand-alone device or a control block found in a PLC function database.

The PID portion of the controller is a series of numbers that are used as adjustments in order to achieve your objective.

Some simple examples of controlled processes are:

Now let's discuss what the parameters of a PID controller are, and how they are used.


In the most simplistic terms, the controller calculates the "P", "I", and "D" actions…


...and multiplies each parameter by the error "E", which is equal to
"Set Point" - "Process Variable" in "Direct Acting" mode.

Note: Direct Acting vs Reverse Acting

A Direct Acting Controller is one whose output tends to increase as the measurement signal increases.
A Reverse Acting Controller is one whose output tends to decrease as the measurement signal increases.


More examples:

Then, all parameter calculations are added up to produce the "Control Variable"


Unfortunately, there is no industry standard on the parameter terms.
Here are some of the terms in use today:

A.1) Proportional Term

The "Proportional" term, often called "P" constant, can be referred to as "Proportional Gain" or just "Gain", which is not a unit but instead a "ratio".

This parameter can also be called "Proportional Band" and measured in the unit of "percent"

The "Proportional Band" can also be written in terms of the "Proportional Gain",

Note:

This parameter can be called "KP", "Gain", Throttling Range (%TR), or others.

This is the parameter that determines how fast the system responds.

The name by which this parameter is referred varies with the manufacturer.

For controllers that use the term "Gain", adjusting this tuning parameter higher may cause more sensitive, less stable loops.

Conversely, on controllers with "Proportional Band" units, decreasing this tuning parameter affects the loop in the same manner.

Keeping this in mind, knowing the type of controller you have is essential to ensure that you are properly adjusting your parameters.

For more information visit:

http://h240.marcks.cc/downloads/06_pb_vs_gain.pdf
This text explains by which conditions the equation %PB = 100/P is true, and how to make the conversion between these parameters when the conditions are not met.
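
Assuming the %PB = 100/Gain relationship holds for your controller, the conversion is a one-liner; the helper names below are made up:

```python
# Converting between "Proportional Band" (%) and "Gain",
# assuming the common relationship %PB = 100 / gain applies.

def pb_to_gain(pb_percent: float) -> float:
    return 100.0 / pb_percent

def gain_to_pb(gain: float) -> float:
    return 100.0 / gain

print(gain_to_pb(2.0))   # gain 2.0  -> 50% proportional band
print(pb_to_gain(25.0))  # 25% band  -> gain 4.0
```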

A.2) Integral Term

The Integral term or "I" constant, often called "Reset" can be in different measurements as well.

There are "repeats per second" (or "repeats per minute") and the parameter is known as Ki (or others).

There are "seconds per repeat" or "minutes per repeat" and the parameter is known as Ti (or others).

And these parameters are related by the following equation,

Essentially, regardless of the measurement type, the integral is the sum of all of the values reported from the PV signal, captured from when you started counting, to when you completed counting.

Or the area under a plotted curve.

The integral term determines how fast the steady-state error is removed.

To adjust this parameter in "minutes per repeat" (Ti), we know that "smaller" values of Ti will create "larger" integral action.

And "larger" values in "repeats per minute" measurements (Ki) will also create "larger" integral action.
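The same kind of helper works for the integral units, assuming Ki = 1/Ti:

```python
# Converting integral units: "repeats per minute" (Ki) vs
# "minutes per repeat" (Ti), related by Ki = 1 / Ti.

def ti_to_ki(ti_minutes_per_repeat: float) -> float:
    return 1.0 / ti_minutes_per_repeat

print(ti_to_ki(0.5))  # Ti = 0.5 min/repeat -> Ki = 2 repeats/min
# Smaller Ti (or larger Ki) means stronger integral action.
```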

A.3) Derivative Term

Derivative or "D" constant units are typically “seconds" or "minutes".

The purpose of the "Derivative" constant is to predict change,

The derivative action acts on the rate of change measured in the Process Variable,

The value of this parameter basically means how far into the future you want to predict the rate of change.


This parameter can help to create a faster response in your loop and a better performing loop as well.

However, since the "Derivative term" is measuring the rate of change in the Process variable, the Process variable must be a very clean signal.

Meaning no noise within the signal:

Note

For that reason, the Derivative terms are not often used in controls.


B) Algorithms and Parameters

The most commonly used controller is the PI.
Most processes can be well served with this type of control.


P and PID controllers are occasionally used.

While PD controllers are rarely used.


B.1) PID adjustable parameters

PID controllers also have many other adjustable parameters, and just to name a few we have:

B.2) PID algorithm types

PID controllers also implement different algorithm types,

For example:


C) PID tuning methods

There are many methods for tuning a PID loop, but the most widely used tuning method is "trial and error".

C.1) Trial and Error

The goal is to control the process, when plotted in a trend, as a nice, even trend line with minimal oscillation.

C.2) Tune a PI Controller

Because the PI Controller is the most widely used, we will only be adjusting those parameters.

As we have discussed the differences in measurements in PID terms, for this tutorial we are going to standardize on "gain" and "repeats per minute"

C.2.1) "Jump right in" approach

We have two cases depending on how the Process Variable responds,

Note: Adjust only one parameter at a time and observe the results

C.2.2) Measured approach

Start with a low proportional gain, and with the integral and derivative terms disabled.

Watch the process and begin incrementally adjusting the gain by doubling the value.

When the process begins to oscillate, adjust the gain value down by 50%

Then repeat the process with the integral term. Employ a small integral value and watch the process.

Double the value incrementally until oscillation occurs then cut the integral by 50%

At this point, we are close to the SV (set value) and we can begin the fine-tuning process.

C.2.3) Extra: Heuristic Methods for PID tuning

For more information visit:

This video explains the 1st heuristic method for tuning a PID controller using a step response.
https://youtu.be/5WSq4Uv3JFI

For more information visit:

This video explains the 2nd heuristic method for tuning a PID controller using the period of oscillation of the closed loop system.
https://youtu.be/AAaWNNuqpuY

D) Discrete PID Implementation: Quadrotor Example (🎦 video 1.8)

Video

Now we have a useful general purpose controller, the PID regulator.

We saw how it can be used to make a cruise controller.

What we are missing now is how do we turn all these "mathematics" into "executable code"? How do we make it run on a platform?

Well, first let's remember that computers work in discrete time.
So they have a Sample Time Δ t

There is a certain Clock Frequency on the computer and we are sampling at a certain rate.

And what we have to do is take the continuous time PID regulator and have it defined in discrete time.

D.1) Discrete P-term

In continuous time we have,

and the discrete version is trivial, we just need to sample the error,

D.2) Discrete D-term

In continuous time we have the term $K_D\, \dot{e}(t)$.

And we know that the derivative of the error is roughly

$$\dot{e} \approx \frac{e_{new} - e_{old}}{\Delta t}$$

In fact, as Δt (the sample time) goes to 0, this becomes exactly the limit definition of the derivative.

And we also know that we can store the "old error" in memory. This means we can sample a "new error", compute the derivative, and then store the "new error" as "old". So this is a good approximation of $\dot{e}$.

Furthermore, we can modify $K_D$ and fold Δt into the gain; we will call it $K_D' = K_D / \Delta t$.

Then, the D-part of our controller will be $K_D'\,(e_{new} - e_{old})$.

D.3) Discrete I-term

In continuous time we have the term $K_I \int_0^t e(\tau)\,d\tau$.

But what is the integral, geometrically speaking? It is the area under the curve of e(t).

And we can approximate this area with rectangles of width Δt.

And now we write the sum of these rectangles as

$$\int_0^t e(\tau)\,d\tau \approx \Delta t \sum_k e_k$$

So after some time Δt passes, and I want to calculate my new integral, I just need to add a new rectangle to the sum.

Conclusion:
To calculate the approximate integral we just need to compute

$$E_{new} = E_{old} + \Delta t\, e_{new}$$

Then, our discrete controller would be $u = K_P\, e_{new} + K_I\, E_{new} + K_D'(e_{new} - e_{old})$.

But we can modify $K_I$ and fold Δt into the gain; we will call it $K_I' = K_I\, \Delta t$.

Then, our controller will be

$$u = K_P\, e_{new} + K_I'\, E_{new} + K_D'\,(e_{new} - e_{old})$$

And we just need to update $E_{new}$ every cycle:

$$E_{new} = E_{old} + e_{new}$$

D.4) Discrete PID Controller

So our discrete PID controller is

$$u = K_P\, e_{new} + K_I'\, E_{new} + K_D'\,(e_{new} - e_{old})$$

Now let's put it into pseudo-code:
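
The pseudo-code itself is not reproduced in these notes, so here is a Python sketch of the same loop; the gain values are placeholders, and the bookkeeping follows the $E$ and $e_{old}$ updates derived above:

```python
# A discrete PID controller mirroring the derivation above: the sample
# time dt is folded into the modified gains KI' = KI*dt and KD' = KD/dt,
# so each cycle only needs an add and a subtract.

class DiscretePID:
    def __init__(self, kp: float, ki_prime: float, kd_prime: float):
        self.kp, self.ki_prime, self.kd_prime = kp, ki_prime, kd_prime
        self.E = 0.0        # running sum of errors (discrete integral)
        self.e_old = 0.0    # previous error (for the discrete derivative)

    def step(self, e_new: float) -> float:
        """One control cycle: return u for the latest error sample."""
        self.E += e_new                 # integral: add the newest rectangle
        e_dot = e_new - self.e_old      # derivative: difference of errors
        self.e_old = e_new              # store the new error as "old"
        return (self.kp * e_new
                + self.ki_prime * self.E
                + self.kd_prime * e_dot)

pid = DiscretePID(kp=1.0, ki_prime=0.1, kd_prime=0.05)
u = pid.step(70.0 - 0.0)  # e.g. error = reference - measured altitude
```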

D.5) Quadrotor Altitude Control (demo)

Let's now implement this PID controller to control the altitude of a hovering quadrotor.

Quadrotor Altitude Control: What is the model for altitude Control?

Let's do a demonstration in the lab,

Let's see a PID regulator in action; right now, with our rudimentary model of a quadrotor, we will do altitude control only.

Let's turn it on! We can see that the system is stable (it is not falling down to the ground).

It is drifting a little bit sideways because we are not controlling drift at all.

I can push it down a bit and the controller is able to overcome the load; therefore it is also robust.

In terms of tracking, we would have to see the data of our Ultrasonic Sensor which is measuring the altitude.

E) Glue Lecture 1 - Dynamical Models (🎦 video 1.9)

Video

E.1) Dynamical Models

Q: What is a model?
Ans: A model is something that describes how a system changes (or evolves) with time.

The system could be: a robot

And the model could describe, for example:

Q: What is the idea behind controls?
Ans: The idea behind controls is that we are going to influence this change to make our system do something we want it to do.

E.2) Understanding what is a model (ball example)

E.2.1) Exercise in derivatives

Imagine we want to model a ball,

and we know that it's position x(t) is as a function of time,

|550

and if we graph this function we get the following,

And now we can take the derivative of the position with respect to time, x˙(t)


We also know that the derivative of position is velocity, and if we graph this function we get the following,

We can do this further, and take one more derivative, $\ddot{x}(t)$, which we know is the acceleration.

and if we graph this function we get the following,

E.2.2) Equations in Action

In action...
What do these equations mean outside the graphs?

Let's draw the ball and see how its position changes along some axis with time,

E.2.3) Working with Differential Equations

Given the differential equation

$$\dot{x}(t) = x(t)$$

How do we find x(t), in order to describe (or plot) the position of the ball?

We have two options:

  1. We can try to solve the differential equation and get x(t),
    Spoiler Alert: we already know the answer,

  2. We can discretize the world and use a Taylor Approximation, in order to get x(kδt) as we've learned before,

Let's cut the time axis into discrete steps of δt = 0.5,

And let's add a counter "k" to jump to different instants of time,

Now, our differential equation is

$$\dot{x}(k\delta t) = x(k\delta t)$$

And the Taylor Expansion is

$$x((k+1)\delta t) \approx x(k\delta t) + \delta t\, \dot{x}(k\delta t)$$

And if we apply it to our example we get

$$x_{k+1} = x_k + 0.5\, x_k$$

Therefore,

$$x_{k+1} = 1.5\, x_k$$

is the difference equation that approximates our differential equation.

E.2.4) Visualize the difference equation

But, how does this translate into motion?

Let's go through the difference equation step by step,

First, we know that $x_0 = 10$ and $t_0 = 0$. Then,

when k=0: $x_1 = 1.5\,x_0 = 15$

when k=1: $x_2 = 1.5\,x_1 = 22.5$

when k=2: $x_3 = 1.5\,x_2 = 33.75$

and so on …

Notice one thing: the difference equation is really a linear approximation of the actual solution (which we know is an exponential). If we had added more terms of the Taylor series, we would have a discrete model closer to the continuous model.
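
Assuming the model reconstructed above ($\dot{x} = x$, $x(0) = 10$, $\delta t = 0.5$), a few Euler steps next to the exact exponential show the approximation drifting:

```python
# Euler/Taylor difference equation for xdot = x with x(0) = 10 and
# dt = 0.5: x_{k+1} = x_k + 0.5 * x_k = 1.5 * x_k.

import math

dt, x = 0.5, 10.0
for k in range(4):
    exact = 10.0 * math.exp(dt * k)
    print(f"k={k}: euler={x:.2f}, exact={exact:.2f}")
    x = x + dt * x   # one Euler step
# The linear approximation lags the true exponential more at each step.
```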

E.2.5) Final thoughts: Dynamical models

So, to recap... our Dynamical Model is,

In general,

Punchline: Given the following dynamical model....

...we know how x evolves (numerically)

But we didn't really get x(t), the solution to the differential equation...

E.2.5.1) Solving the Differential Equation

To get the mathematical expression for x(t), we integrate!

For example: Given the differential equation and its initial condition, find the exact solution.

Solution: Using Leibniz's notation, $\frac{dx}{dt} = x$.

We separate the variables: $\frac{dx}{x} = dt$.

And we integrate both sides: $\ln|x| = t + C$.

And we plug in the initial condition: $x(0) = e^C = 10$.

And now we solve for x(t):

$$x(t) = 10\, e^{t}$$
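
We can sanity-check the hand derivation with SymPy (again assuming the reconstructed ODE $\dot{x} = x$ with $x(0) = 10$):

```python
# Symbolically solving the ODE and applying the initial condition.
from sympy import Function, Eq, dsolve, symbols

t = symbols('t')
x = Function('x')
sol = dsolve(Eq(x(t).diff(t), x(t)), x(t), ics={x(0): 10})
print(sol)  # Eq(x(t), 10*exp(t))
```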

Warning!

We cannot always integrate and find a solution.
Sometimes we have to rely on numerical methods, because analytical solutions may not even exist!



F) Programming & Simulation (🎦 video 1.10)

Video: https://youtu.be/pUcIt6Qysvs

Introduction to Programming Assignments

Why Complete These Assignments?

  1. Hands-On Learning:
    • Apply concepts from lectures to solve an exciting problem: navigating a mobile robot in a cluttered environment.
  2. MATLAB Skills:
    • Learn and use MATLAB, a valuable tool for engineers.
  3. Robot Testing:
    • Code developed for assignments can be tested directly on Roland's Quick Bot if built during the course.

F.1) Assignment 1: Introduction to MATLAB and Simulator

Objective:

Instructions:

  1. MATLAB Installation:

    • Access to MATLAB provided for the course duration (including a few weeks after it ends).
    • Detailed installation instructions available in the Programming Assignments section.
    • Resources for learning key MATLAB concepts provided.
  2. Simulator:

    • Emulates the Quick Bot, featuring:
      • Two-wheel differential drive.
      • Five infrared sensors.
      • Two-wheel encoders.
    • Assignments involve implementing and testing controllers for the Quick Bot (e.g., a go-to-goal controller).
  3. User Manual:

    • Detailed manual included to guide implementation and testing.

Submission Process:

  1. Submission Interface:

    • Enter login and password for Coursera.
    • Select parts of the assignment to grade.
    • Hit "Submit to Coursera for Grading".
  2. Grading:

    • Simulator runs your code and compares the output to expected results on Coursera servers.
    • Outcomes:
      • Check Mark: Correct output, 100% for that part.
      • X Mark: Incorrect output, 0% for that part.
    • Feedback provided for corrections, and resubmissions are unlimited.


Running the Simulator

Steps:

  1. Download and Unzip:
    • Download the simulator zip file from the course page.
    • Unzip and navigate to the folder in MATLAB.

  2. Launch the Simulator:
    • In the MATLAB command window, type: launch
    • Hit Enter to start the simulator.

Hit Play to Start the Simulation,

  3. Simulator Features:
    • Displays the Quick Bot and environment.
    • Quick Bot features:
      • Two wheels.
      • Five infrared sensors:
        • Blue: No obstacle detected.
        • Red: Obstacle detected.
    • Camera Controls:
      • Zoom in/out and pan around the environment.
    • Simulation Controls:
      • Pause, resume, or restart the simulation.

  4. Submission Command:
    • Type: submit
    • Enter Coursera login and password, then submit.


Key Notes for Assignment 1

Conclusion



G) Hardware Notes (🎦 video 1.11)

Video: https://youtu.be/-fX92q1hd4E



Previous Part | Next Part 🔜

↩️ Go Back


Z) 🗃️ Glossary

File Definition

↩️ Go Back