Module 1 - Introduction to Controls and Dynamical Models
#Physics #Engineering #Control_Theory #Robotics
Table of Contents:
- A) Introduction: Mobile Robots and Control Theory (🎦 video 1.1)
- B) Course Content
- B.1) Course Structure
- C) Install the GUI Layout Toolbox for MATLAB
- A) What is Control Theory anyway? (🎦 video 1.2)
- B) On the Need of Mathematical Models (🎦 video 1.3)
- C) Cruise Controller - Example (🎦 video 1.4)
- C.2) Control Design Basics - Comparing Control Designs (🎦 video 1.5)
- C.3) PID Control (🎦 video 1.7)
- C.4) Fun Facts of the PID
- C.5) Tuning the PID gains for the Cruise-Controller (Trial and Error)
- D) Discrete PID Implementation: Quadrotor Example (🎦 video 1.8)
- E) Glue Lecture 1 - Dynamical Models (🎦 video 1.9)
- F) Programming & Simulation (🎦 video 1.10)
- G) Hardware Notes (🎦 video 1.11)
- Z) Glossary
A) Introduction: Mobile Robots and Control Theory (🎦 video 1.1)
"Welcome to the course, my name is Magnus Egested and I'm a professor at Georgia Tech in the school of Electrical and Computer Engineering"
The focus of this course is,
- how do you make "mobile robots" move in effective, safe and predictable ways?
Meaning that they don't slam into obstacles, and they move smoothly and nicely without weird oscillations in their motion.
The trick to achieve this is something called "Control Theory". This first module of the course is going to focus on "controls" and "control theory"
"But first let me talk you about my own research....
My research hammer is Control Theory, it deals with how do you make things move or behave well."
This could be:
- stock markets
- cruise controllers
- limiting the spread of epidemics (for instance, using vaccination strategies)
Control Theory is not only for robotics. It's a general language for describing how things behave.
A.1) Types of Robotics (examples)
"Control Theory is my hammer, but my nail is Robotics. I want robots to be elegant and useful.
In my lab we do 3 types of robotics:"
A.1.1) Swarm Robotics
1) Swarm Robotics: how do you get lots and lots of robots to do exciting things together?
Swarm Robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots.
The idea is that a desired collective behavior emerges from the interactions between the robots and between the robots and the environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants and other systems in nature where swarm behaviour occurs.
A.1.2) Behavior Based Robotics
2) Behavior Based Control (or Behavior-based robotics (BBR)), we will see quite a bit of this in this course.
A Behavior is basically a controller that is in charge of some task like:
- going towards a landmark
- not slamming into obstacles
- following corridors
- etc
The question is, how do you design these behaviors and how do you patch them together?
Behavior-based robotics (BBR), or behavioral robotics, is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite little internal variable state to model their immediate environment, mostly gradually correcting their actions via sensory-motor links.
A.1.3) Field Robotics
3) Field Robotics, this is how do you take robots and put them out in the unstructured, strange world outside of the lab.
Meaning, if you have a robot driving through a forest, all of a sudden there are no corridors, hallways or doors and life becomes more complicated.
Field Robotics aims to bring technologies that allow autonomous systems to assist and/or replace humans performing tasks that are difficult, repetitive, unpleasant, or take place in hazardous environments.
A.2) Using Control Theory in Robotics
Control Theory plays a key role when we want to learn how to control systems. First we will learn it in general (without robotics).
But then we are going to study the robotic applications by looking at robot models (because not all robots are the same).
Some robots are:
- Hopping
- Slithering
- Rolling
- Flying
And different types of robots require different types of controllers.
B) Course Content
What is in the course?
We are going to look in general at Mobility, meaning, how do you make robots move?
(That is the main question we are interested in this course).
And then we will end with applications.
What is NOT in the course?
- The course is not about "Artificial Intelligence", which is the high-level planning. In other words, deciding what the robots should actually be doing.
For us, the questions:
- what should the robots be doing?
- how should they be doing it?
have already been figured out by someone else.
- We are not going to deal much with "Perception", meaning:
- how do we take computer vision data and actually turn it into something actionable?
- or, how do we actually design perception algorithms?
We are going to assume that someone else has done this, and we're going to act on the perceptual data that we already have.
- This course is not a "mechanical engineering course", meaning, we are not going to be building robots or dealing with different kinds of robotic designs.
However, we will indeed be looking at some hardware aspects of robotics.
B.1) Course Structure
- Lectures (8 each week, 7 weeks)
- weekly quizzes + glue lectures
- Course material: https://github.com/giandbt/Control-of-Mobile-Robots
- Sim.I.am MATLAB robot simulator
- Build your own robot
C) Install the GUI Layout Toolbox for MATLAB
Original Link: https://www.mathworks.com/matlabcentral/fileexchange/47982-gui-layout-toolbox
Alternative Link: https://github.com/giandbt/Control-of-Mobile-Robots/tree/master/Tools
C.1) Learn More About MATLAB
https://www.mathworks.com/learn/tutorials/matlab-onramp.html
A) What is Control Theory anyway? (🎦 video 1.2)
Control theory is an interdisciplinary field of engineering and mathematics which deals with the behavior of Dynamic Systems. The input of a system is called a reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the input to the system to obtain the desired effect on the output of the system (feedback).
So remember,
- System: it is something that changes over time.
For example: robots, epidemics, stock markets, thermostats, circuits, engines, power grids, autopilots, etc.
In the study of dynamic systems, which are systems that change over time, there's a fascinating concept called the Lorenz attractor.
The Lorenz attractor is a set of chaotic solutions to the Lorenz oscillator equations (a set of equations by Edward Lorenz in 1963). When visualized, the Lorenz attractor looks like a butterfly or figure-eight. It's one of the most famous images associated with chaos theory.
This system shows that even simple equations can lead to unpredictable, chaotic behavior. It's a prime example of the "butterfly effect," where tiny changes can lead to vastly different outcomes!
- Control: it deals with how can we influence the change of a system.
The centrifugal governor, a device with spinning balls used to regulate engine speed, was a brilliant invention by James Watt for his steam engines. It's one of the earliest examples of a feedback control system!
When the engine runs too fast, the balls rise and reduce the steam input, slowing it down. If it's too slow, the balls drop, allowing more steam and speeding it up. This self-correcting mechanism laid the foundation for modern control systems, showcasing how systems can adjust themselves to maintain a desired state!
A.1) The Basic Building Blocks of a Control System
Let's start with trying to build up a "Control System" in terms of the basic needed building blocks.
A.1.1) The State of the System (x)
1st) We need some way of describing what the system is doing or where it is in its behavior. To do so, we define the "state" of the dynamic system.
The State is a representation of what the system is currently doing.
And we are going to use an "x" to describe what the state of the system is.
The State could be:
- The position and velocity of a robot
- The percentage of people infected by a certain epidemic
- etc
A.1.2) The Dynamics of the System
2nd) Now that we know what the state of a system is, the description of the change of the state as a function of time is known as the "dynamics" of the system.
It tells us what the system is actually doing.
The Dynamics is a description of how the state changes
A.1.3) The Reference Signal (r)
3rd) Now that we have the building blocks of a dynamical system, we want some way of influencing its behavior so that it does what we want. We are going to introduce a "reference signal" in order to tell the system what we want it to do.
The Reference is what we want the system to do.
And we are going to use an "r" for the reference
The reference could be:
- setting the cruise controller to 60 mph
- making a certain amount of money in the stock market
- make the temperature of a room 70° F
- etc
A.1.4) The Output of the System (y)
4th) How can I know if the behavior of my system is close to the reference signal? We need to measure the "output" of the system, so that we can compare.
The Output is the measurement of (some aspects of) the system
And we are going to use a "y" for the output.
Note: since we can't always measure the "state", the "outputs" are the things we are able to get out of the system, i.e., things we can measure.
A.1.5) The System Input (u)
5th) Now we need some way of mapping "reference signals" into actual "control signals" (also known as "inputs").
The Input is the control signal
And we are going to use "u" to for the input which is going to take the reference and produce a control signal that then hits the state of the system.
A.1.6) The Feedback and the Error Signal
6th) A good control design uses the measurements of the system in order to produce a proper control signal. So we need a final building block known as "feedback"
The Feedback is the mapping from outputs to inputs
We can take the difference between the reference signal and the output signal and produce an "error signal" ($e = r - y$), which can be translated into a control signal.
This is known as a "Closed loop system"
A.1.7) Summary - The Basic Building Blocks
B) On the Need of Mathematical Models (🎦 video 1.3)
We have seen that "controls" deals with "dynamic systems" in generality, and robotics is one facet of this.
Now, we will make sense of what this means with a precise language like mathematics. And to do this, we are going to need "models".
Models are an approximation, an abstraction of what the actual system is doing. The "control design" is going to be done in relation to that model and then deployed on the "real system".
The main question in "Control Theory" is:
- How do we pick the input signal "u" in order to make the system output "y" match the reference "r"?
B.1) Main "Performance Objectives" in Control Design
There are many objectives we can try to achieve when choosing a control signal (listed here from most critical to least critical).
- Stability Objective: loosely speaking, it's when the system doesn't blow up. If you design a controller that makes the system go unstable, then no other objective matters, because you have failed.
- Tracking Objective: this means we can make the system output "y" be equal to the reference "r". This means we can make robots follow paths and self-driving cars follow roads.
- Robustness Objective: this means that our controller (that was designed with a model) also works with the real system.
Remember that models are never going to be perfect, so we can't overcommit to a particular model, and we can't have our controller be too highly dependent on what the particular parameters of a model are.
We need to design controllers that are "immune" to variations across the parameters of the model.
- Disturbance Rejection Objective: similar to robustness, the real world brings more complications. The controller has to be able to overcome at least reasonable disturbances in order to be effective.
These disturbances can be:
- measurement noise
- external loads (e.g. "wind")
- friction
- Optimality Objective: this answers the question: how do we do something the best possible way? And "best" can mean different things, like being the fastest, or using the least amount of fuel, etc.
B.2) Dynamical Models
So… What do these models look like?
B.2.1) Models in Discrete Time
Let's start in "Discrete Time".
We can describe the state at the next time step as a function of the current state:
$$x_{k+1} = f(x_k)$$
We can understand this with the simplest example of a "Discrete Time System", a clock:
$$x_{k+1} = x_k + 1$$
We can also plot this system.
It is a trivial system, but it is a difference equation nonetheless, and it changes over time.
Dynamics = Change Over Time
B.2.2) Models in Continuous Time
But we have a problem...
The laws of physics are all in "Continuous Time".
Instead of describing the "next" state, we need derivatives with respect to time.
We can now describe the instantaneous rate of change of the state:
$$\dot{x}(t) = f(x(t))$$
Now, what would be the "Differential Equation" of our clock example?
It would be very simple: the rate of change of a clock is 1 second per second, or with Newton's notation:
$$\dot{x} = 1$$
And we can also plot one solution of this differential equation; if the initial condition of the clock is zero, the solution is $x(t) = t$.
When we work with dynamic models, we are almost always going to need a continuous-time model, because nature is continuous.
But our implementations on robots and computers run in "discrete time", so we need to know how to go from continuous time to discrete time.
B.3) From Continuous to Discrete
We need to sample our continuous model at some time interval $\delta t$
(also known as the "sample time").
In other words, $x_k = x(k\,\delta t)$.
So the main question is: what is $x_{k+1}$ in terms of $x_k$?
We can approximate $x(t + \delta t)$ with a Taylor Series:
$$x(t + \delta t) = x(t) + \delta t\,\dot{x}(t) + \text{(higher-order terms)}$$
Read more about the Taylor Series here
And if we use the first-order term only we get:
$$x_{k+1} \approx x_k + \delta t\,\dot{x}(k\,\delta t)$$
This is a way of getting a discrete-time model from the continuous-time model.
And this is how we are going to take things we do in continuous time and map them onto the actual implementations on computers that ultimately run in discrete time.
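To make this concrete, here is a minimal MATLAB sketch (not from the lecture; the sample time $\delta t = 0.1$ is an arbitrary choice) that applies this first-order approximation to the clock model $\dot{x} = 1$:

```matlab
% Euler discretization of the continuous clock model xdot = 1.
% x((k+1)*dt) ~ x(k*dt) + dt * xdot(k*dt)   (first-order Taylor term)
dt = 0.1;              % sample time (arbitrary choice)
T  = 5;                % total simulation time in seconds
N  = T / dt;           % number of discrete steps
x  = zeros(1, N + 1);  % x(1) = 0: the clock starts at zero
for k = 1:N
    xdot = 1;                     % the clock's rate of change: 1 sec/sec
    x(k + 1) = x(k) + dt * xdot;  % Euler step
end
plot(0:dt:T, x);                  % recovers the straight line x(t) = t
xlabel('t [s]'); ylabel('x');
```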
C) Cruise Controller - Example (🎦 video 1.4)
Now that we have a way of describing "Dynamical Systems" with "Differential Equations" in "Continuous Time", or "Difference Equations" in "Discrete Time".
Let's do something interesting and build a "Cruise Controller" for a car.
The job of a "Cruise Controller" is to make a car drive at a desired speed "r"
C.1) Defining the Model for a Car
We can model the car using the laws of physics; we can use Newton's 2nd Law to describe how a car (which has mass) behaves:
$$F = ma$$
Now, what is the state of the system?
We need to somehow relate Newton's 2nd Law to the system and its state.
In this case, since we want the "Cruise Controller" to control the speed of the car, we can say that the state $x$ is the car's velocity.
In this example we are going to work with "speed"; we don't care much about the direction of the car.
But how does this relate to Newton's 2nd Law?
From the equation $F = ma$,
we know that the acceleration is the derivative of velocity (the derivative of our state $x$).
The equation becomes:
$$F = m\,\dot{x}$$
So we can use $F$ to control the speed of the car.
We can define $F$ as:
$$F = c\,u$$
With this equation we are mapping "stepping on the gas or the brake" (the input $u$) onto the "force" that is applied to the car. And this is done through some linear relationship, where $c$ is the electro-mechanical transmission coefficient.
But…
- what happens if we don't know its value?
- can we still design a controller?
- or what happens if we design a controller for a specific car, will it work with another car with a different "c" value?
Remember, our control design cannot rely on us knowing "c", otherwise it won't be Robust. We can't have our controller be too highly dependent on what the particular parameters of a model are.
The same argument applies to the mass "m" of the car (the number of passengers can change, etc.).
Let's now write the differential equation of our system.
We know that $F = m\,\dot{x}$ and $F = c\,u$.
And if we equate the two expressions for the force, we get:
$$\dot{x} = \frac{c}{m}\,u$$
C.2) Defining the Control Signal
- Assume we can measure the velocity of the car.
- The control signal should be a function of the error signal "e"
- Question: What properties should the control signal have for our application case?
- It should not overreact; in other words: small "e" gives small "u"
- "u" should not be "jerky"; in other words, "u" should not vary too rapidly all the time
- "u" should not depend on us knowing "c" and "m" exactly
C.2) Control Design Basics - Comparing Control Designs (🎦 video 1.5)
Let's continue with our "Cruise Controller" example and design a few controllers and compare them.
We have a "Car Model": $\dot{x} = \frac{c}{m}u$.
And we want the car to match the reference speed $r$.
Note: In control design, we talk about "Asymptotic Properties"; that means that as $t \to \infty$ we want $y(t) \to r$,
which implies that the error goes to zero: $e(t) = r - y(t) \to 0$.
We can also see it in the block diagram.
C.2.1) Attempt 1 - Bang-Bang Controller
The first idea is simple: push maximally in the direction that reduces the error,
$$u = \begin{cases} u_{max} & \text{if } e > 0 \\ -u_{max} & \text{if } e < 0 \end{cases}$$
Is this control strategy going to work?
Let's run a simulation with a reference speed r = 70
It works beautifully!
The system is stable, and we have reached the desired speed without any overshoot.
This type of control is known as "Bang-Bang Control" because we are switching between 2 extremes (full gas and full brake)
Are we done? Not really; let's look at the control effort, i.e., our control signal.
This is BAD!
First we accelerate until we reach the speed of 70, and then we start switching wildly between $u_{max}$ and $-u_{max}$.
Why? Because the system is too sensitive to any change in the measured speed, and it will react with full gas or full brake to compensate.
This will be a bumpy ride, and we will burn out the actuators.
This is not a good control design.
Problem: The controller over-reacts to small errors.
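To reproduce this behavior yourself, here is a minimal simulation sketch of the bang-bang strategy; the car parameters (c, m, umax) and the sample time are illustrative assumptions, not the lecture's values:

```matlab
% Bang-bang cruise controller on the simple car model xdot = (c/m)*u.
c = 1; m = 1000;          % assumed transmission coefficient and mass
umax = 5000;              % assumed maximum gas/brake effort
r = 70; dt = 0.05; N = 2000;
x = 0; xs = zeros(1, N); us = zeros(1, N);
for k = 1:N
    e = r - x;                % error signal
    if e > 0
        u = umax;             % full gas
    else
        u = -umax;            % full brake
    end
    x = x + dt * (c/m) * u;   % Euler step of the car model
    xs(k) = x; us(k) = u;
end
subplot(2,1,1); plot(dt*(1:N), xs); ylabel('speed');  % reaches 70...
subplot(2,1,2); plot(dt*(1:N), us); ylabel('u');      % ...then chatters wildly
```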
C.2.2) Attempt 2 - P-Regulator
This control design solves our previous problem, because a small error yields a small control signal. It gives a nice and smooth behavior:
$$u = k_p\,e = k_p\,(r - y)$$
It is known as a "P-Regulator", where "P" stands for "proportional", because the input "u" is directly proportional to the error through a control gain $k_p$.
Is this control strategy going to work?
Let's run a simulation with a reference speed r = 70
It is nice, smooth and stable!
But there is a problem: the car never reaches a velocity of 70.
Even though the response is smooth, we end up having a steady-state error, because we don't reach the reference.
Why did this happen?
Well, the problem is actually in our car model. We forgot an important force that affects cars: Wind Resistance
C.2.2.1) P-Regulator: Stability but Not Tracking
The "real" model is now augmented to include a wind-resistance term,
This is the model used in the previous simulations, and it is the reason why the P- Regulator failed with our Tracking Objective.
Let's analyse our model at "steady-state", in other words, when
Remember that,
Therefore,
Now we can solve for
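A quick numeric sanity check of this steady-state formula against a simulation (a sketch with assumed values for c, m, γ and kp; the lecture's plots use different, unstated parameters):

```matlab
% Numeric check of the P-regulator's steady-state error (assumed parameters).
c = 1; m = 1000; gamma = 0.05; kp = 100; r = 70;
xss = (c*kp*r) / (c*kp + gamma*m);         % closed-form steady state
x = 0; dt = 0.01;
for k = 1:200000                           % simulate long enough to settle
    u = kp * (r - x);                      % proportional control
    x = x + dt * ((c/m)*u - gamma*x);      % car model with wind resistance
end
fprintf('formula: %.2f, simulation: %.2f (reference: %d)\n', xss, x, r);
```

Both numbers agree, and both fall short of the reference, which is exactly the steady-state error seen in the plot.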
C.2.2.2) Control Theory: Recall the "Performance Objectives" (🎦 video 1.6)
Let's recall the first three "Performance Objectives" we have in control and see how we can improve our "P-Regulator"
A controller should provide:
- Stability to the system - also known as BIBO Stability
In other words, if the input is reasonable, our system doesn't blow up.
Our P-Regulator can do this, so that is a win!
- Tracking - means we should get to the reference value that we want (or at least close enough, within a 2% error).
Our "P-Regulator" fails at this because the error at steady state is too much.
The reference is at 70 and we can only achieve 58 due to wind-resistance.
- Robustness - means we shouldn't have to know much about parameters that we really have no way of knowing. And preferably we should be able to fight noise as well.
So, let's try to achieve "Tracking" with a new controller.
C.2.3) Attempt 3 - Forcing a Tracking Controller
One way to force tracking (assuming, for the analysis, full knowledge of the model) is to add a feed-forward term that cancels the wind resistance at the reference speed:
$$u = k_p\,e + \frac{\gamma m}{c}\,r$$
What is this new term doing? Let's analyse at steady state, when $\dot{x} = 0$.
Remember that $\dot{x} = \frac{c}{m}u - \gamma x$.
Therefore, if we substitute our new controller for $u$ and set $\dot{x} = 0$, the wind-resistance term cancels out and we are left with $x_{ss} = r$: perfect tracking!
We have lost "Robustness" with this control approach, though. Just take a look at our controller "u":
All of a sudden we have to know all these physical parameters that we don't know, so this is not a robust control design.
Let's go back to our "P-Regulator" and see what we can learn.
What is actually happening is that the proportional error is doing a fine job pushing the system up close to where it should be, but then it can't push hard enough to overcome the effect of wind resistance.
The error persists over time. If we could "collect" all these errors over time and use them in our controller, we would eventually be able to speed up the car.
We can use an "integral" and integrate the error over time:
$$u(t) = k_p\,e(t) + k_i \int_0^t e(\tau)\,d\tau$$
C.2.4) Attempt 4 - PI-Regulator
The PI-Regulator is one of the most common regulators found anywhere in the world. In fact, almost 2/3 of all commercial-grade cruise controllers are PI regulators.
Let's see its time response,
It works perfectly!
The system behaves quickly, nice and smoothly.
The car reaches the reference of 70 mph.
It is stable, it can track the reference and it is robust since we don't need to know the parameters of the car.
PI regulators can induce unwanted oscillations in some systems; that is why we also have:
PID-Regulators:
$$u(t) = k_p\,e(t) + k_i \int_0^t e(\tau)\,d\tau + k_d\,\dot{e}(t)$$
This is an extremely useful controller that shows up a lot. It is a great control structure, and we are going to get quite good at designing PID regulators.
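Before moving on, here is a minimal sketch of the PI-Regulator in closed loop (same assumed parameters as before); note how the accumulated error E removes the steady-state offset that the pure P-Regulator had:

```matlab
% PI cruise controller: u = kp*e + ki*integral(e).
c = 1; m = 1000; gamma = 0.05;   % assumed car parameters
kp = 100; ki = 10;               % assumed gains
r = 70; dt = 0.01; N = 100000;
x = 0; E = 0; xs = zeros(1, N);
for k = 1:N
    e = r - x;
    E = E + e * dt;                      % accumulate the error over time
    u = kp * e + ki * E;                 % PI control law
    x = x + dt * ((c/m)*u - gamma*x);    % car model with wind resistance
    xs(k) = x;
end
plot(dt*(1:N), xs); yline(r, '--');      % x now settles at the reference
xlabel('t [s]'); ylabel('speed');
```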
C.3) PID Control (🎦 video 1.7)
Last time, we saw that the PI-Regulator (or its slightly more elaborate brother, the PID-Regulator) was enough to make the cruise controller achieve Stability, Tracking, and Robustness.
Now we will see what the effects of the controller gains are, and why the PID-Regulator is such an important regulator in virtually every industry.
C.3.1) Proportional Gain
In Control Theory and stability theory, root locus analysis is a graphical method for examining how the roots of a system change with variation of a certain system parameter, commonly a gain within a feedback system.
This is a technique used as a stability criterion in the field of classical control theory, developed by Walter R. Evans, which can determine the stability of the system. The Root Locus plots the poles of the closed-loop transfer function in the complex s-plane as a function of a gain parameter.
C.3.2) Integral Gain
Integral windup refers to the situation in a PID feedback controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction). The specific problem is the excess overshooting.
This problem can be addressed by:
- Initializing the controller integral to a desired value, for instance to the value before the problem
- Increasing the setpoint in a suitable ramp
- Disabling the integral function until the to-be-controlled process variable (PV) has entered the controllable region
- Preventing the integral term from accumulating above or below pre-determined bounds
- Back-calculating the integral term to constrain the process output within feasible bounds.
- Zeroing the integral value every time the error is equal to, or crosses zero. This avoids having the controller attempt to drive the system to have the same error integral in the opposite direction as was caused by a perturbation.
Examples:
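As one concrete example, here is a minimal sketch of the clamping approach from the list above ("preventing the integral term from accumulating above or below pre-determined bounds"). The function name, bounds and gains are illustrative assumptions; save it as a standalone function file (e.g., pi_step_antiwindup.m):

```matlab
% One PI update step with integrator clamping (a simple anti-windup scheme).
% All names and values are illustrative, not from the lecture.
function [u, E] = pi_step_antiwindup(r, y, E, kp, ki, dt, Emax)
    e = r - y;
    E = E + e * dt;               % candidate integral update
    E = max(min(E, Emax), -Emax); % clamp the accumulated error
    u = kp * e + ki * E;
end
```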
C.3.3) Derivative Gain
The problem of Derivative Kick happens because error = Setpoint - Input, so any change in Setpoint causes an instantaneous change in error. The derivative of this change is infinite (in practice, since dt isn't 0, it just winds up being a really big number). This number gets fed into the PID equation, which results in an undesirable spike in the output.
Solution: It turns out that the derivative of the Error is equal to the negative derivative of the Input, EXCEPT when the Setpoint is changing. This winds up being a perfect solution: instead of adding (Kd × derivative of Error), we subtract (Kd × derivative of Input).
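A sketch of this fix (often called "derivative on measurement"); the function name and signature are illustrative, not from a specific library:

```matlab
% PID update that avoids derivative kick by differentiating the measurement.
% d(error)/dt = -d(input)/dt whenever the setpoint is constant.
function [u, E, y_old] = pid_step_no_kick(r, y, E, y_old, kp, ki, kd, dt)
    e = r - y;
    E = E + e * dt;                     % integral term
    dInput = (y - y_old) / dt;          % derivative of the measurement
    u = kp*e + ki*E - kd*dInput;        % subtract Kd*d(Input) instead of adding Kd*d(Error)
    y_old = y;                          % store measurement for the next step
end
```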
C.4) Fun Facts of the PID
So… PID is by far the most used "low-level controller"
You may ask: What is the difference between a low level control and a high level control?
Controllers that directly drive the hardware are low-level controllers whereas those that implement logical decision-making are high-level controllers.
The terms "high" and "low" are relative. Nested controllers ensure abstraction and code modularity.
You may also ask: Why does PID work with many systems so well?
Because feedback has a remarkable ability to fight uncertainty in model parameters
With PID, stability is not guaranteed
C.5) Tuning the PID gains for the Cruise-Controller (Trial and Error)
C.5.1) Tuning Kp
We start by tuning the P-Regulator,
Say Kp = 1:
we see that the system is stable, but we have no tracking, since we can't reach the reference of 70.
Let's Tune now the integral gain.
C.5.2) Tuning Ki
Tuning the integral gain we get,
Say Ki = 1,
The system works great! We are done.
But what happens if we increase the integral gain? 🤔
Can we do it better? 🤔
Let's try with Ki= 10,
Now my system starts oscillating.
So this is an example of where the integral part may actually cause oscillations.
If we see oscillations, it may be an indicator that Ki is too high.
Be careful!
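To see this effect for yourself, here is a small sweep over Ki (all parameters are assumed values, chosen only to illustrate the onset of oscillations):

```matlab
% Sweep the integral gain and watch oscillations appear (assumed parameters).
c = 1; m = 1000; gamma = 0.05; kp = 100; r = 70; dt = 0.01; N = 60000;
for ki = [1 10 100]
    x = 0; E = 0; xs = zeros(1, N);
    for k = 1:N
        e = r - x;
        E = E + e * dt;                   % accumulated error
        u = kp * e + ki * E;              % PI control law
        x = x + dt * ((c/m)*u - gamma*x); % car model with wind resistance
        xs(k) = x;
    end
    plot(dt*(1:N), xs); hold on;          % higher ki -> more oscillation
end
legend('k_i = 1', 'k_i = 10', 'k_i = 100'); xlabel('t [s]'); ylabel('speed');
```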
C.5.3) Tuning Kd
Now let's tune the "D" part,
Let's try with Kd= 0.1,
In this case, it doesn't actually matter too much.
Notice Kd is small because we know it can make the system sensitive to noise
So, how is this different from the PI - Regulator?
Notice we are getting a faster initial response, but then it is slower towards the end. So maybe a PID is not the best for our particular application.
This is some of the thinking that goes into tweaking PID regulators; in fact, there are many methods for tuning the gains of a PID.
So what we are going to do next time is go from this rather abstract math to something that we can actually implement. And we are going to use a PID to control the altitude of a hovering quadrotor.
C.5.4) Extra Content 🎦 - How to Tune a PID Controller by RealPars
D) Discrete PID Implementation: Quadrotor Example (🎦 video 1.8)
Now we have a useful general purpose controller, the PID regulator.
We saw how it can be used to make a cruise controller.
What we are missing now is how do we turn all these "mathematics" into "executable code"? How do we make it run on a platform?
Well, first let's remember that computers work in discrete time.
So they have a sample time $\delta t$.
There is a certain clock frequency on the computer, and we are sampling at a certain rate.
And what we have to do is take the continuous-time PID regulator and define it in discrete time.
D.1) Discrete P-term
In continuous time we have $u_P(t) = k_p\,e(t)$,
and the discrete version is trivial, we just need to sample the error: $u_P = k_p\,e_{new}$.
D.2) Discrete D-term
In continuous time we have $u_D(t) = k_d\,\dot{e}(t)$.
And we know that the derivative of the error is roughly
$$\dot{e} \approx \frac{e_{new} - e_{old}}{\delta t}$$
In fact, as $\delta t \to 0$, this approximation becomes exact.
And we also know that we can store the "old error" in memory. This means we can sample a "new error", compute the derivative, and then store the "new error" as "old". So this is a good approximation of $\dot{e}$.
Furthermore, we can modify $k_d$ to absorb the division by $\delta t$, defining $k_d' = k_d/\delta t$.
Then, our controller will be $u_D = k_d'\,(e_{new} - e_{old})$.
D.3) Discrete I-term
In continuous time we have $u_I(t) = k_i \int_0^t e(\tau)\,d\tau$.
But, what is the integral, geometrically speaking? It is the area under the error curve.
And we can approximate this area with rectangles of width $\delta t$,
and now we write the sum of these rectangles as
$$\int_0^t e(\tau)\,d\tau \approx \sum_k e_k\,\delta t$$
So after some time, this running sum $E$ approximates the integral.
Conclusion:
To calculate the approximate integral we just need to compute $E = \sum_k e_k\,\delta t$,
which can be broken down into a recursion: $E_{new} = E_{old} + \delta t\,e_{new}$.
Then, our discrete controller would be $u_I = k_i\,E_{new}$.
But we can modify $k_i$ to absorb $\delta t$, defining $k_i' = k_i\,\delta t$.
Then, our controller will be $u_I = k_i'\,E_{new}$,
and we just need to update $E_{new}$ every cycle: $E_{new} = E_{old} + e_{new}$.
D.4) Discrete PID Controller
So our discrete PID controller is:
$$u = k_p\,e_{new} + k_i'\,E_{new} + k_d'\,(e_{new} - e_{old})$$
Now let's put it into pseudo-code:
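The notes originally embedded the pseudo-code as an image; here is a minimal MATLAB reconstruction of that loop, following the derivation above (read_sensor and send_actuator are hypothetical placeholders for the platform's I/O, and kp, ki, kd here are the discrete gains with $\delta t$ already folded in):

```matlab
% Discrete PID loop; assumes r, kp, ki, kd are already set.
E = 0; e_old = 0;          % integral state and stored error
while true
    y = read_sensor();     % hypothetical measurement function
    e_new = r - y;         % error signal
    E = E + e_new;         % accumulated (approximate) integral
    eDot = e_new - e_old;  % approximate derivative
    u = kp*e_new + ki*E + kd*eDot;
    send_actuator(u);      % hypothetical actuation function
    e_old = e_new;         % store the error for the next cycle
end
```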
D.5) Quadrotor Altitude Control (demo)
Let's now implement this PID controller to control the altitude of a hovering quadrotor.
A quadrotor is equipped with many sensors, speed controllers, power and communication systems that make a good flight possible.
Quadrotor Altitude Control: What is the model for altitude Control?
Let's do a demonstration in the lab,
Let's see a PID regulator in action; right now, with our rudimentary model of a quadrotor, we will do altitude control only.
Let's turn it on! We can see that the system is stable (it is not falling down to the ground).
It is drifting a little bit sideways because we are not controlling drift at all.
I can push it down a bit and the controller is able to overcome the load, therefore it is also robust.
In terms of tracking, we would have to see the data of our Ultrasonic Sensor which is measuring the altitude.
E) Glue Lecture 1 - Dynamical Models (🎦 video 1.9)
E.1) Dynamical Models
Q: What is a model?
Ans: A model is something that describes how a system changes (or evolves) with time.
The system could be: a robot
And the model could describe, for example:
- how the position changes with time
- or, how the angles change with time
- etc
Q: what is the idea behind controls?
Ans: The idea behind controls is that we are going to influence this change to make our system do something we want it to do.
E.2) Understanding what is a model (ball example)
E.2.1) Exercise in derivatives
Imagine we want to model a ball,
and we know that its position is given by some function of time, $x(t)$, which we can graph as a position-vs-time curve.
Now we can take the derivative of the position with respect to time, $\dot{x}(t)$.
We also know that the derivative of position is velocity, and we can graph that function too.
We can go further and take one more derivative, $\ddot{x}(t)$,
which we know is the acceleration,
and we can graph that function as well.
E.2.2) Equations in Action
In action...
What do these equations mean outside the graphs?
Let's draw the ball and see how its position changes along some axis with time.
E.2.3) Working with Differential Equations
Given the differential equation
$$\dot{x}(t) = x(t), \qquad x(0) = x_0$$
how do we find $x(t)$?
We have two options:
- We can try to solve the differential equation and get $x(t)$ explicitly.
Spoiler Alert: we already know the answer, $x(t) = x_0\,e^t$.
- We can discretize the world and use a Taylor Approximation in order to get a difference equation, as we've learned before.
Let's cut the time axis into discrete steps of size $\delta t$,
and let's add a counter "k" to jump to different instants of time: $t = k\,\delta t$.
Now, our differential equation is $\dot{x}(k\,\delta t) = x(k\,\delta t)$,
and the Taylor Expansion is $x((k+1)\,\delta t) \approx x(k\,\delta t) + \delta t\,\dot{x}(k\,\delta t)$.
And if we apply it to our example we get
$$x_{k+1} = x_k + \delta t\,x_k = (1 + \delta t)\,x_k$$
Therefore, $x_{k+1} = (1 + \delta t)\,x_k$
is the difference equation that approximates our differential equation.
E.2.4) Visualize the difference equation
But, how does this translate into motion?
Let's go through the difference equation step-by-step,
First, we know the initial condition $x_0 = x(0)$;
when $k = 0$: $x_1 = (1+\delta t)\,x_0$;
when $k = 1$: $x_2 = (1+\delta t)\,x_1 = (1+\delta t)^2\,x_0$;
when $k = 2$: $x_3 = (1+\delta t)\,x_2 = (1+\delta t)^3\,x_0$;
and so on …
Notice one thing, the difference equation is really a linear approximation of the actual solution (which we know is an exponential). If we had added more terms of the Taylor series, we would have a discrete model closer to the continuous model.
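A small sketch that makes this visible, comparing the difference equation against the exact exponential ($\delta t = 0.1$ is an arbitrary choice):

```matlab
% Compare the difference equation x_{k+1} = (1 + dt)*x_k with the exact
% solution x(t) = e^t of xdot = x, x(0) = 1.
dt = 0.1; T = 3; N = T/dt;
x = zeros(1, N + 1); x(1) = 1;
for k = 1:N
    x(k + 1) = (1 + dt) * x(k);        % Euler / first-order Taylor step
end
t = 0:dt:T;
plot(t, x, 'o-', t, exp(t), '--');     % Euler underestimates the exponential
legend('difference equation', 'exact e^t'); xlabel('t'); ylabel('x');
```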
E.2.5) Final thoughts: Dynamical models
So, to recap... our dynamical model is $\dot{x} = x$, and its discrete approximation is $x_{k+1} = (1+\delta t)\,x_k$.
In general, a model $\dot{x} = f(x)$ is approximated by the difference equation $x_{k+1} = x_k + \delta t\,f(x_k)$.
Punchline: Given the dynamical model $\dot{x} = f(x)$...
...we know how the state changes, step by step.
But we didn't really get an explicit expression for $x(t)$.
E.2.5.1) Solving the Differential Equation
To get the mathematical expression for $x(t)$, we have to actually solve the differential equation.
For example: given the differential equation $\dot{x} = x$ and its initial condition $x(0) = x_0$, find the exact solution.
Solution: Using Leibniz's notation, $\frac{dx}{dt} = x$.
We separate the variables: $\frac{dx}{x} = dt$.
And we integrate both sides: $\ln|x| = t + C$, i.e., $x(t) = A\,e^t$.
And we plug in the initial condition: $x(0) = A = x_0$.
And now we solve for $x(t)$: $x(t) = x_0\,e^t$.
We cannot always integrate and find a solution.
Sometimes we have to rely on numerical methods, because analytical solutions may not even exist!
F) Programming & Simulation (🎦 video 1.10)
Video: https://youtu.be/pUcIt6Qysvs
Introduction to Programming Assignments
- Instructor: JP
- Objective: Apply lecture concepts to control a mobile robot using a MATLAB-based robot simulator called Sim.I.am.
- Assignments:
- Entirely optional and do not count towards the course grade.
- Can be submitted for feedback.
Why Complete These Assignments?
- Hands-On Learning:
- Apply concepts from lectures to solve an exciting problem: navigating a mobile robot in a cluttered environment.
- MATLAB Skills:
- Learn and use MATLAB, a valuable tool for engineers.
- Robot Testing:
- Code developed for the assignments can be tested directly on the Quick Bot if you build one during the course.
F.1) Assignment 1: Introduction to MATLAB and Simulator
Objective:
- Get familiar with MATLAB.
- Run the simulator at least once.
Instructions:
-
MATLAB Installation:
- Access to MATLAB provided for the course duration (including a few weeks after it ends).
- Detailed installation instructions available in the Programming Assignments section.
- Resources for learning key MATLAB concepts provided.
-
Simulator:
- Emulates the Quick Bot, featuring:
- Two-wheel differential drive.
- Five infrared sensors.
- Two-wheel encoders.
- Assignments involve implementing and testing controllers for the Quick Bot (e.g., a go-to-goal controller).
-
User Manual:
- Detailed manual included to guide implementation and testing.
Submission Process:
-
Submission Interface:
- Enter login and password for Coursera.
- Select parts of the assignment to grade.
- Hit "Submit to Coursera for Grading".
-
Grading:
- Simulator runs your code and compares the output to expected results on Coursera servers.
- Outcomes:
- ✅ Check Mark: Correct output, 100% for that part.
- ❌ X Mark: Incorrect output, 0% for that part.
- Feedback provided for corrections, and resubmissions are unlimited.
Running the Simulator
Steps:
- Download and Unzip:
- Download the simulator zip file from the course page.
- Unzip and navigate to the folder in MATLAB.
- Launch the Simulator:
- In the MATLAB command window, type:
launch
- Hit Enter to start the simulator.
- Hit Play to start the simulation.
- Simulator Features:
- Displays the Quick Bot and environment.
- Quick Bot features:
- Two wheels.
- Five infrared sensors:
- Blue: No obstacle detected.
- Red: Obstacle detected.
- Camera Controls:
- Zoom in/out and pan around the environment.
- Simulation Controls:
- Pause, resume, or restart the simulation.
- Submission Command:
- Type:
submit
- Enter Coursera login and password, then submit.
Key Notes for Assignment 1
- Task: Run the simulator once.
- Completion:
- If the simulator runs successfully, you’ll receive a ✅ and 100% for the assignment.
- Troubleshooting:
- Use discussion forums for help with bugs or concerns.
Conclusion
- These assignments provide a valuable opportunity to practice programming and simulation.
- Assignment 1 is a simple but crucial step in getting started.
- Good luck with the course and programming assignments!
G) Hardware Notes (🎦 video 1.11)
Video: https://youtu.be/-fX92q1hd4E
Z) 🗃️ Glossary
| File | Definition |
| --- | --- |