Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models

#Physics #Engineering #Robotics #Mobile_Robots #Control_Theory

Related:

🔙 Previous Part | Next Part 🔜

↩️ Go Back

Table of Contents:


A) Introduction to Module 2: Control of Mobile Robots (🎦 video 2.1)

Welcome to Module 2 of the course on Control of Mobile Robots.

In Module 1, we discussed control theory as a general tool for managing systems.

In this module, we will focus specifically on mobile robots as the target application.


A.1) How Do We Drive a Robot From Point A to Point B?

Key Question: How Do We Drive a Robot From Point A to Point B?

Consider moving a robot from a point A to a point B.


Before answering how to achieve this, we must first identify what is needed:

  1. Measuring the Goal’s Position and Turning it into Control Action:

    • The robot must know where the target is.
    • This information must then be transformed into a control action.

      Note: Designing the Controller:
    • We already have some foundational knowledge of controller design.
    • The controller will translate goal information into actionable commands.
  2. Understanding the Robot’s Sensors:

    • To know about the environment and itself, the robot relies on sensors.
    • We’ll explore sensor models, which:
      • Abstractly represent how sensors gather information.
      • Ensure the data is trustworthy for our controllers.
  3. Modeling the Robot:

    • To predict how the robot will behave under certain commands, we need robot models.

A.2) Goals of this Module (Differential Drive Robot)

For this module, we’ll focus on differential drive robot models and their dynamics.

We won’t model the physics of specific sensors in detail; instead, we’ll use abstract sensor models that provide sufficient information for the task at hand.


A.3) Key Realizations About Robot Control (The World is Dynamic)

When designing controllers for robots navigating real-world environments, there are two crucial truths:

  1. The World is Fundamentally Unknown:

    • You can’t know the exact position of every obstacle (e.g., chairs in a building or trees in a forest).
  2. The World is Dynamic:

    • People move chairs, and wind makes trees sway. Therefore, static, pre-planned controllers often fail.

A.3.1) Introduction to Behavior-Based Control (Divide and Conquer)

To address these challenges, roboticists divide tasks into smaller behaviors.

Each behavior represents a primitive control action tailored to specific situations.

Examples of Behaviors:

These behaviors can be combined to handle complex scenarios.


A.4) Behavior Demonstration Through Examples

A.4.1) Example 1: Behavior vs. Planner-Based Approach

Planner-Based: When new information arises, the robot recalculates an entire plan, causing delays.

Behavior-Based: When new information (e.g., an obstacle) appears:

A.4.2) Example 2: Dynamic Arc Behaviors

A Segway robot navigates using arc behaviors.


A.5) Motivation: Learning Goals for Module 2

By the end of this module, you will:


B) Differential Drive Robot Model and Unicycle Model (🎦 video 2.2)

B.1) The Differential Drive Robot Model

When designing controllers for robots, we need models to predict and influence how robots behave.

The first model we'll discuss is the Differential Drive Mobile Robot Model, one of the most common in robotics.

B.1.1) Wheels of a Differential Drive Robot

Note

The following graphic represents the parts of a mobile robot that are important for the kinematic model.

A differential drive robot has:

  1. Two wheels that can spin independently at different velocities.
  2. An optional caster wheel for balance (usually at the back).

By controlling the velocities of the left wheel (VL) and right wheel (VR):

B.1.2) Key Parameters: the Wheelbase and Wheel Radius

To model this robot, you need:

  1. Wheelbase (L):
    • The distance between the two wheels.
    • Determines how tight the robot can turn.
  2. Wheel Radius (R):
    • The size of the wheels.
    • Affects the relationship between wheel rotation and the robot's movement.

These parameters are straightforward to measure with tools like rulers.


B.1.3) What are we going to control in this model? (States and Inputs)

  1. Inputs:
    • VR: Velocity of the right wheel.
    • VL: Velocity of the left wheel.

  2. States:
    • Position (X,Y): Where the robot is in the environment.
    • Orientation (ϕ): The direction the robot is facing.


B.1.4) Mathematical Model of a Differential Drive Robot (Forward Kinematics)

The differential drive kinematic equations map the wheel angular velocities $\omega_R$ and $\omega_L$ to the state derivatives $\dot{x}$, $\dot{y}$, $\dot{\phi}$:

$$\dot{x} = \frac{R}{2}(\omega_R + \omega_L)\cos(\phi)$$
$$\dot{y} = \frac{R}{2}(\omega_R + \omega_L)\sin(\phi)$$
$$\dot{\phi} = \frac{R}{L}(\omega_R - \omega_L)$$

The linear velocities of the wheels are defined as:

$$v_R = R\,\omega_R \qquad v_L = R\,\omega_L$$

Where $R$ is the wheel radius, $L$ is the wheelbase (distance between the wheels), and $\omega_R$, $\omega_L$ are the angular velocities of the right and left wheels.

Here are the equations rewritten with the linear velocities $v_R$ and $v_L$ substituted in place of the angular velocities:

$$\dot{x} = \frac{1}{2}(v_R + v_L)\cos(\phi)$$
$$\dot{y} = \frac{1}{2}(v_R + v_L)\sin(\phi)$$
$$\dot{\phi} = \frac{1}{L}(v_R - v_L)$$

Note: To learn how these equations are obtained, read Chapter 3 - Wheeled Kinematics (by Paul Furgale)
Forward Kinematics:

Differential Drive Model:
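To see the model in action, here is a minimal Python sketch (the function name, parameter values, and Euler time step are illustrative, not from the lecture) that integrates the forward kinematics:

```python
import math

def diff_drive_step(x, y, phi, omega_r, omega_l, R, L, dt):
    """One Euler step of the differential drive forward kinematics."""
    x_dot = (R / 2.0) * (omega_r + omega_l) * math.cos(phi)
    y_dot = (R / 2.0) * (omega_r + omega_l) * math.sin(phi)
    phi_dot = (R / L) * (omega_r - omega_l)
    return x + x_dot * dt, y + y_dot * dt, phi + phi_dot * dt

# Equal wheel speeds produce straight-line motion along the heading.
x, y, phi = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, phi = diff_drive_step(x, y, phi, 2.0, 2.0, R=0.05, L=0.3, dt=0.01)
print(round(x, 3), round(y, 3), round(phi, 3))  # ~ 0.1 0.0 0.0
```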

While the Forward Kinematics are very useful for implementation, this model is cumbersome for designing controllers.

To make controller design more intuitive, we will use a simpler model...


B.2) Making Control Design Intuitive: using the Unicycle Model

To make controller design more intuitive, we use the Unicycle Model, which focuses on the robot’s position and heading rather than wheel velocities.

B.2.1) Inputs for the Unicycle Model

  1. Translational Velocity (v):
    • How fast the robot is moving forward.
  2. Angular Velocity (ω):
    • How quickly the robot is turning.

These inputs are easier to understand and control:


B.2.2) Unicycle Model Equations (Forward Kinematics)

The equations for the unicycle model are:

$$\dot{x} = v\cos(\phi) \qquad \dot{y} = v\sin(\phi) \qquad \dot{\phi} = \omega$$


B.3) Connecting the Two Models

While the unicycle model is ideal for design, the robot’s actual hardware uses vR and vL.

To bridge this gap:

  1. Derive relationships between $v$, $\omega$ and $v_R$, $v_L$. Comparing the unicycle model with the differential drive model gives:

    $$v = \frac{R}{2}(\omega_R + \omega_L) \qquad \omega = \frac{R}{L}(\omega_R - \omega_L)$$

  2. Solve the system of equations for $\omega_R$ and $\omega_L$:

    $$\omega_R = \frac{2v + \omega L}{2R} \qquad \omega_L = \frac{2v - \omega L}{2R}$$

These equations let you map designed velocities (v,ω) to hardware commands (ωR,ωL).
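A small sketch of this mapping in Python (the helper name `uni_to_diff` is hypothetical; the course simulator may name it differently):

```python
def uni_to_diff(v, omega, R, L):
    """Map unicycle inputs (v, omega) to wheel angular velocities."""
    omega_r = (2.0 * v + omega * L) / (2.0 * R)  # right wheel
    omega_l = (2.0 * v - omega * L) / (2.0 * R)  # left wheel
    return omega_r, omega_l

# Spinning in place: v = 0, so the wheels turn in opposite directions.
print(uni_to_diff(0.0, 1.0, R=0.05, L=0.3))  # (3.0, -3.0)
```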

We have a Model now. What’s Next?
Now that we have a model to translate high-level commands into hardware instructions, the next step is understanding the robot's environment:

  1. How does the robot perceive the world?
  2. What sensors does it use to gather information?

Stay tuned as we explore sensor models and how robots interact with their surroundings.


C) Odometry and Wheel Encoders in Differential Drive Robots (🎦 video 2.3)

C.1) Why Do We Need Odometry?

To control a robot effectively, we need to know its state, which includes:

  1. Position (x, y): Where the robot is.
  2. Orientation (ϕ): The direction the robot is facing.

Odometry provides a means to estimate the pose (position + orientation) of the robot by using sensors.


C.2) Types of Sensors for Odometry

  1. External Sensors:
    • Measure external landmarks or signals.
    • Examples:
      • Ultrasound, Infrared, Cameras, Laser Scanners: Detect objects in the environment.
      • GPS: Provides global position (useful outdoors but not indoors).

Example: LiDAR

  2. Internal Sensors:
    • Located within the robot.
    • Examples:
      • Compass: Measures orientation.
      • Accelerometers and Gyroscopes: Estimate position and orientation over time.
      • Wheel Encoders: Measure the rotation of the wheels to estimate distance traveled.

Example: Wheel Encoder


C.2.1) Example: using Wheel Encoders for Odometry

Wheel encoders are a common method for odometry in differential drive robots.


Step 1) Calculate the Arc Length

They measure:

  1. The distance traveled by each wheel:

    • Left Wheel Distance (DL):
    • Right Wheel Distance (DR):
    • DC represents the distance traveled by the center of the robot.



    Note: Assumptions for Accuracy:
    - Each wheel follows a constant arc.
    - Velocities (V and ω) are constant over short time intervals.

Based on the unicycle model, the controller only cares about the center of the robot.

In other words, we only care about the arc length $D_C$.

Using the arc length formula, we can define $D_C$ as:

$$D_C = \frac{D_L + D_R}{2}$$


Step 2) Derive the Odometry Equations

To update the robot’s pose $(x, y, \phi)$ over a very small time interval $\Delta t$, we use:

  1. Position Updates:

    $$x' = x + \Delta x \qquad y' = y + \Delta y$$

    • Assume $D_C$ is so small that we can treat the traveled arc as a straight line.

We get this approximation:

$$x' = x + D_C\cos(\phi) \qquad y' = y + D_C\sin(\phi)$$

  2. Orientation Update:

$$\phi' = \phi + \Delta\phi$$

From our kinematic model, we know that:

$$\Delta\phi = \frac{1}{L}(v_R - v_L)\,\Delta t$$

We can define the wheel velocities as:

$$v_R = \frac{D_R}{\Delta t} \qquad v_L = \frac{D_L}{\Delta t}$$

Substituting into the equation:

$$\Delta\phi = \frac{D_R - D_L}{L}$$

And finally, we write the orientation update as:

$$\phi' = \phi + \frac{D_R - D_L}{L}$$

Summary:


C.2.2) How Do We Compute the Wheels Displacement using the Encoder?

How Do We Compute DL and DR?

  1. Tick Counts:

    • Each wheel encoder has N ticks per revolution (e.g., 2π radians = N ticks).
    • Encoders provide a total tick count.
  2. Change in Ticks in each Wheel in a small Time Interval:

$$\Delta\text{ticks}_R = \text{New Tick Count}_R - \text{Old Tick Count}_R$$
$$\Delta\text{ticks}_L = \text{New Tick Count}_L - \text{Old Tick Count}_L$$

  3. Distance Traveled by each Wheel:

$$D_R = 2\pi R\,\frac{\Delta\text{ticks}_R}{N} \qquad D_L = 2\pi R\,\frac{\Delta\text{ticks}_L}{N}$$

Using these formulas, we can compute $D_L$ and $D_R$, then use the odometry equations to update $x$, $y$, and $\phi$.
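Putting the tick-to-distance conversion and the pose update together, a minimal Python sketch (variable names are illustrative):

```python
import math

def update_odometry(x, y, phi, dticks_r, dticks_l, N, R, L):
    """Update the pose estimate from encoder tick increments."""
    d_r = 2.0 * math.pi * R * dticks_r / N  # right wheel distance
    d_l = 2.0 * math.pi * R * dticks_l / N  # left wheel distance
    d_c = (d_r + d_l) / 2.0                 # distance of the robot's center
    x += d_c * math.cos(phi)
    y += d_c * math.sin(phi)
    phi += (d_r - d_l) / L
    return x, y, phi
```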


C.3) Challenges with Wheel Encoder Odometry (Drift!)

  1. Drift:

    • Small errors accumulate over time, leading to significant inaccuracies.
    • Causes:
      • Wheel slippage (e.g., spinning without touching the ground).
      • Uneven surfaces.
    • Example: A robot may become confused about its position due to accumulated drift.
  2. Incomplete Information:

    • Wheel encoders alone cannot account for all external factors (e.g., obstacles, inclines).

Takeaways:

Next Steps
In upcoming modules, we will:

  1. Explore sensor fusion techniques to combine data from multiple sensors.
  2. Implement controllers that can adapt to the robot's perceived state.

D) Understanding Range Sensors and the Disc Abstraction (🎦 video 2.4)

D.1) Why Do Robots Need Environmental Information?

In addition to knowing their pose (x,y,ϕ), robots must understand their environment:

  1. Detect obstacles and other objects.
  2. Localize themselves within the world.

This is achieved using sensors, which we will abstract for simplicity.


D.2) Types of Sensors Used For Environmental Information

  1. External Sensors:

    • Gather information about the world.
    • Examples:
      • Range Sensors:
        • Infrared, Ultrasonic, LIDAR (Laser Scanners): Measure distances to obstacles.
      • Vision Sensors: Cameras that capture images or video.
      • Tactile Sensors: Detect physical contact (e.g., bumpers).
      • GPS: Provides global positioning (not suitable indoors).
  2. Internal Sensors:

    • Provide data about the robot’s internal state.
    • Examples:
      • Wheel Encoders: Measure wheel rotation to estimate movement.
      • Inertial Measurement Units (IMUs): Include accelerometers and gyroscopes for motion detection.

D.2.1) External Sensors: Range Sensor Skirt

What is a Sensor Skirt?
A range sensor skirt is a collection of range sensors arranged around the robot.

These sensors measure:

Examples of Range Sensors:


D.3) The Disc Abstraction (Simple Sensor Model)

Simplifying the Sensor Model
Instead of modeling individual sensors, we assume:

  1. The robot has a circular detection area (a disc) around it.

  2. Obstacles within the disc are detected with:

    • Distance (d): To the obstacle.
    • Angle (θ): Relative to the robot’s x-axis.

Key Assumptions

  1. The robot’s coordinate system is centered at its position (x,y).

  2. The angle θ is measured relative to the robot’s heading (ϕ).


D.3.1) Example: Calculating Obstacle Positions

If the robot knows its pose (x,y,ϕ) and the position of obstacles in its local frame, it can compute the global position of obstacles:

Example: Global Obstacle Position
For an obstacle at a distance $d_1$ and angle $\phi_1$ in the robot’s frame, its global position is:

$$x_{obs} = x + d_1\cos(\phi + \phi_1) \qquad y_{obs} = y + d_1\sin(\phi + \phi_1)$$
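A short sketch of this transformation in Python, assuming (as stated above) that the obstacle angle is measured relative to the robot’s heading:

```python
import math

def obstacle_global(x, y, phi, d, theta):
    """Convert an obstacle at (d, theta) in the robot's frame to world coordinates."""
    return x + d * math.cos(phi + theta), y + d * math.sin(phi + theta)
```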


D.3.2) Example: Applications of the Disc Abstraction (Rendezvous Problem)

Rendezvous Problem:
Robots must meet at a common location without communication.

Using the disc abstraction:

  1. Each robot:
    • Detects neighbors within its disc.
    • Computes the center of gravity (CoG) of neighbors.
    • Moves toward the CoG.
  2. Result: Robots converge to the same point.

Behavior
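As an illustration only (the step gain and data layout are assumptions, not from the lecture), one rendezvous update might look like:

```python
def rendezvous_step(position, neighbors, gain=0.1):
    """Move a robot a small step toward the center of gravity of its detected neighbors."""
    if not neighbors:
        return position
    cog_x = sum(p[0] for p in neighbors) / len(neighbors)
    cog_y = sum(p[1] for p in neighbors) / len(neighbors)
    x, y = position
    return (x + gain * (cog_x - x), y + gain * (cog_y - y))
```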


D.4) Summary and Next Steps

Summary:

  1. Robot Models:

    • Differential drive models.
    • Unicycle models.
  2. Odometry:

    • Uses wheel encoders to estimate position.
  3. Environmental Awareness:

    • Sensors provide distance and direction to obstacles.
    • The disc abstraction simplifies sensor modeling.

Next Steps:
Now that we have:

  1. A robot model.
  2. A method to estimate pose.
  3. An abstraction for environmental sensing.

We can begin designing controllers that use this information to navigate and interact with the environment effectively.


E) Behavior-Based Robotics (🎦 video 2.5)

E.1) Why Over-Planning Doesn't Work for Dynamic Worlds

Example: Industrial Robots vs. Exploratory Robots


E.2) Behavior-Based Control: Designing Behaviors for Dynamic Environments

Instead of over-planning:

  1. Develop a library of useful behaviors, such as:
    • Go to Goal: Drive towards a target.
    • Avoid Obstacles: Reactively steer clear of obstacles.
    • Other Behaviors: Follow walls, track moving objects, etc.
  2. Switch between behaviors dynamically based on the environment:
    • Example: Detecting a power outlet when low on battery switches to a "Recharge" behavior.

E.3) Example: Designing a Behavior: Go to Goal

E.3.1) Robot Dynamics: Constant Speed Differential Drive Robot Model

Problem Setup

Constant Speed Differential Drive Robot Model

Note: Heading Dynamics: The robot's change in heading ($\dot{\phi}$) is controlled by the angular velocity ($\omega$).

So, our Inputs are:

How can we control ω ?

Error Calculation: Define the error: $$ e = \phi_d - \phi $$


E.3.2) Control Design: PID Regulator

The angular velocity ($\omega$) is computed as:

$$\omega = K_p e + K_i \int e\,dt + K_d \frac{de}{dt}$$

Where:

  1. Kp (Proportional Gain):
    • Reacts to the current error.
    • Larger Kp means faster response but risks oscillations.
  2. Ki (Integral Gain):
    • Accumulates past errors to reduce steady-state error.
    • Large Ki can also cause oscillations.
  3. Kd (Derivative Gain):
    • Reacts to the rate of change of error, improving responsiveness.
    • Can amplify noise.


E.3.3) Caution: Handling Angle Wrapping (using atan2())

The Issue

The Solution

We are basically recomputing the angle from its own sine and cosine:

$$e' = \operatorname{atan2}(\sin(e),\ \cos(e))$$

atan2(y, x) makes sure that we do not lose the sign information that is destroyed by the division in atan(y/x), and its result is always between $-\pi$ and $\pi$:


Plot:
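Pulling the PID regulator and the atan2 wrapping together, a minimal Python sketch (gains and time step are placeholders to be tuned):

```python
import math

class HeadingPID:
    """PID regulator on the heading error, with atan2-based angle wrapping."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.e_int = 0.0   # accumulated (integrated) error
        self.e_prev = 0.0  # previous error, for the derivative term

    def step(self, phi_desired, phi):
        e = phi_desired - phi
        e = math.atan2(math.sin(e), math.cos(e))  # wrap to (-pi, pi]
        self.e_int += e * self.dt
        e_dot = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.kp * e + self.ki * self.e_int + self.kd * e_dot
```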


E.3.4) Conclusion: Behavior Go to Goal accomplished with a PID and atan2

The robot must steer to align with the direction of the goal.

  1. Dynamics:

    • $\omega = K_p e + K_i \int e\,dt + K_d \frac{de}{dt}$
    • Wrapping the error with atan2 keeps $e$ within $[-\pi, \pi]$.
  2. Key Considerations:

    • Use PID control for smooth alignment.
    • Handle angles correctly with wrapping.

Video: https://youtu.be/4Y7zG48uHRo


E.4) Combining Behaviors (Go to Goal + Avoid Obstacles)

To navigate effectively, the robot needs multiple behaviors:

  1. Go to Goal: Drives toward a target location.
  2. Avoid Obstacles: Reacts dynamically to obstacles.
  3. Additional behaviors as needed.

Next Steps

  1. Develop controllers for individual behaviors.
  2. Test behaviors in simulated and real environments.
  3. Combine behaviors dynamically for robust navigation.

F) Designing the "Go to Goal" Behavior (🎦 video 2.6)

F.1) Overview: Combining Behaviors (Go to Goal + Avoid Obstacles)

To move a robot from Point A to Point B, we need:

  1. A Go to Goal behavior to steer the robot toward the target.
  2. Possibly an Avoid Obstacle behavior for dynamic environments.

For a differential drive robot modeled as a unicycle:


F.2) Error and Controller Design

F.2.1) Recap: using atan2 to wrap the error

Error Definition
The error ($e$) is defined as the difference between the desired heading ($\phi_d$) and the current heading ($\phi$):

$$e = \phi_d - \phi$$

Handling Angles
Since $e$ represents an angle, it must be kept within $[-\pi, \pi]$. Use the atan2 function to wrap the error:

$$e' = \operatorname{atan2}(\sin(e),\ \cos(e))$$

F.2.2) Calculate the Desired Heading based on the Goal

Desired Heading
The desired heading ($\phi_d$) is calculated from the robot's current position $(x, y)$ and the goal position $(x_{goal}, y_{goal})$:

$$\phi_d = \arctan\left(\frac{y_{goal} - y}{x_{goal} - x}\right)$$

Then our controller would look something like this:


F.3) Example: Proportional Controller Implementation (Go-to-Goal)

Using a Proportional Controller:

$$\omega = K_p e$$

Where:

Implementation Steps

  1. Calculate ϕd using the position of the robot and the goal.
  2. Compute the error $e = \phi_d - \phi$.
  3. Wrap e using the atan2 function.
  4. Use the controller to compute ω.
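These steps translate into a short sketch (a pure proportional controller; `k_p` is a placeholder gain, and atan2 is used both to compute $\phi_d$ and to wrap the error):

```python
import math

def go_to_goal_omega(x, y, phi, x_goal, y_goal, k_p=2.0):
    """Proportional steering toward the goal with a wrapped heading error."""
    phi_d = math.atan2(y_goal - y, x_goal - x)  # desired heading
    e = phi_d - phi
    e = math.atan2(math.sin(e), math.cos(e))    # wrap to (-pi, pi]
    return k_p * e
```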


F.3.1) Attempts and Insights (Go to Goal)

Attempt 1: Forgetting Angle Wrapping

Attempt 2: Low Gain (Kp Too Small)

Attempt 3: Adjusted Gain (Kp Increased)


F.4) Demo: Behavior in Action

This approach was used in a competition:

  1. Robots moved toward the goal using the Go to Goal behavior.
  2. A simple Proportional Controller ($\omega = K_p e$) was sufficient to handle the heading adjustments.
  3. Success depended on properly tuned Kp and accurate handling of angles.


F.5) Key Takeaways

  1. Always handle angles carefully by wrapping them to $[-\pi, \pi]$.
  2. The proportional gain (Kp) significantly impacts the robot’s ability to reach the goal.
  3. This approach is simple yet effective, especially when combined with additional behaviors like Avoid Obstacle.

G) The GRITS Simulator (Tuning Parameters in Matlab) (🎦 video 2.7)

G.1) Overview: Tuning Parameters for Control Design

In control design, parameter tuning is a crucial step. Parameters like those in a PID controller are typically adjusted through testing and simulation.

Here, we focus on:

  1. Simulating the Go to Goal behavior.
  2. Using a PID regulator for heading control.
  3. Iteratively tuning the parameters to achieve desirable performance.

G.2) Matlab Simulation Setup

Robot and Environment

Initial Goal


G.2.1) Initial Control Parameters

Observations

  1. The robot’s actual heading (blue line) overshot the desired heading (red dotted line).
  2. Although the robot eventually stabilized, the overshoot indicated poor control performance.


G.2.2) Refining Control Parameters

Adjustments

  1. Reduce Integral Gain: The high Ki=10 was unnecessary, as precision was not a priority.
    • Set Ki=0 to eliminate overshoot caused by integral windup.
  2. Keep P and D Gains:
    • Kp=10: Maintains responsiveness.
    • Kd=0: No derivative action for simplicity.

Simulation Results


G.3) Real Robot Implementation

Hardware Setup

Controller Parameters

Results


G.4) Key Takeaways: Simulation

  1. Simulation First: Simulations are invaluable for testing and refining control designs before deployment.
  2. Parameter Tuning:
    • Excessive Ki can lead to overshoot. Reduce it unless precise tracking is essential.
    • Start with simple controllers (e.g., pure proportional) before introducing complexity.
  3. Real-World Validation: Always test controllers on actual hardware to confirm performance in dynamic conditions.

H) Introduction to Obstacle Avoidance Behaviors and Arbitration Mechanisms (🎦 video 2.8)

H.1) Introduction: Obstacle Avoidance


H.2) Model Setup: Encountering Obstacles

Scenario

Known Quantities


H.2.1) Behavior Definitions: Obstacle Avoidance Directions

Obstacle Avoidance Directions:

  1. Pure Avoidance (Skittish/Paranoid):
    • Move directly away from the obstacle.
    • Desired heading:

    $$\phi_{avoid} = \phi_{obs} + \pi$$

    • Here:

    $$\phi_{obs} = \arctan\left(\frac{y_{obs} - y}{x_{obs} - x}\right)$$

    • Adding $\pi$ ensures the robot moves in the opposite direction of the obstacle.

  2. Perpendicular Avoidance:
    • Move perpendicular to the obstacle direction:

    $$\phi_{perp} = \phi_{obs} \pm \frac{\pi}{2}$$

  3. Pure Go to Goal:
    • Ignore the obstacle entirely and head directly to the goal:

    $$\phi_{goal} = \arctan\left(\frac{y_{goal} - y}{x_{goal} - x}\right)$$

  4. Blended Approach:
    • Combine goal-seeking with obstacle avoidance to generate a desired heading:

    $$\phi_{blended} = w_1\,\phi_{goal} + w_2\,\phi_{avoid}$$

    • Weights $w_1$ and $w_2$ determine the influence of each behavior.
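As a sketch, the candidate headings can be computed directly from the geometry (the weights `w1` and `w2` are illustrative design parameters). Note that naively averaging raw angles ignores wrapping; in practice one often blends unit vectors instead:

```python
import math

def candidate_headings(x, y, x_obs, y_obs, x_goal, y_goal, w1=0.7, w2=0.3):
    """Compute the obstacle-avoidance heading options discussed above."""
    phi_obs = math.atan2(y_obs - y, x_obs - x)
    phi_goal = math.atan2(y_goal - y, x_goal - x)
    phi_avoid = phi_obs + math.pi                 # pure avoidance
    phi_perp = phi_obs + math.pi / 2.0            # perpendicular (one of the two choices)
    phi_blended = w1 * phi_goal + w2 * phi_avoid  # blended approach
    return phi_avoid, phi_perp, phi_goal, phi_blended
```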


H.3) Arbitration Mechanisms

When combining Go to Goal and Obstacle Avoidance, two approaches exist:

  1. Winner Takes All (Hard Switch):

    • Switch between behaviors based on the situation:
      • Use Go to Goal when no obstacles are nearby.
      • Use Obstacle Avoidance when close to an obstacle.

    Advantages:

    • Simpler analysis.
    • Each behavior can be designed independently.

    Disadvantages:

    • Sudden switches can lead to inefficient or unnatural movements.
  2. Blended Behaviors:

    • Continuously combine behaviors to produce smooth transitions.
    • Example:
      • Steer slightly away from an obstacle while maintaining progress toward the goal.

    Advantages:

    • Smoother trajectories.
    • More natural behavior.

    Disadvantages:

    • More complex to analyze and design.

H.4) Key Takeaways: Arbitration Mechanisms

  1. Flexibility in Design:
    • There is no single correct way to handle obstacle avoidance. The choice depends on the application and environment.
  2. Analysis vs. Performance:
    • Hard Switches: Easier to analyze but may reduce performance.
    • Blended Behaviors: Improve performance but complicate analysis.
  3. Future Focus:
    • Module 3 will revisit the behaviors and refine them using linear systems theory for a more systematic and formal design.

Conclusion
The combination of Go to Goal and Obstacle Avoidance forms the foundation for mobile robot navigation. Understanding and implementing arbitration mechanisms is critical for ensuring smooth and efficient robot movement.


I) Glue Lecture 2 - Robot Models and Feedback Control (🎦 video 2.9)

Video: https://youtu.be/-7erKJdb-6o

I.1) Key Topics: Module 2

  1. Differential Drive Model vs. Simplified Model
  2. Mapping Between Models
  3. Importance of Feedback Control
  4. Odometry and Wheel Encoders
  5. Practical Example

I.2) Differential Drive vs. Simplified Model

I.2.1) Differential Drive Model

I.2.2) Simplified Model (Unicycle Model)


I.3) Mapping Between Models

To convert from $v$ and $\omega$ to $\omega_R$ and $\omega_L$, use the mapping derived in section B.3:

$$\omega_R = \frac{2v + \omega L}{2R} \qquad \omega_L = \frac{2v - \omega L}{2R}$$

Example: Spinning in Place


I.4) Importance of Feedback Control

I.4.1) Why Not Use a Perfect Model? (Open-Loop Systems)

I.4.2) Key Concept: Feedback Control (Closed-Loop Systems)

Relation to PID Control:


I.5) Odometry and Wheel Encoders

How Wheel Encoders Work

Updating Robot’s State
Using odometry equations:


I.5.1) Practical Example: Odometry

Scenario: a robot with wheel radius $R = 0.05$ m and encoders with $N = 360$ ticks per revolution registers 10 new ticks on the right wheel and 6 on the left over one time step.

Steps

  1. Calculate distances moved by wheels:

    • $D_R = \frac{2\pi \cdot 0.05}{360} \cdot 10$
    • $D_L = \frac{2\pi \cdot 0.05}{360} \cdot 6$

  2. Compute center distance:

    • $D_C = \frac{D_L + D_R}{2}$

  3. Update state using odometry equations:

    • $X' = X + D_C\cos(\phi)$
    • $Y' = Y + D_C\sin(\phi)$
    • $\phi' = \phi + \frac{D_R - D_L}{L}$
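As a quick numerical check (using the scenario values above): $D_R = \frac{2\pi \cdot 0.05 \cdot 10}{360} \approx 0.0087$ m, $D_L = \frac{2\pi \cdot 0.05 \cdot 6}{360} \approx 0.0052$ m, so $D_C \approx 0.0070$ m. The heading update additionally requires the wheelbase $L$.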


I.6) Key Takeaway: Why Use Encoders Instead of Perfect Models?



J) Programming & Simulation (🎦 video 2.10)

Video: https://youtu.be/tjnCFrzdcO4

This week’s assignments involve:

  1. Transforming unicycle dynamics to differential drive dynamics.
  2. Implementing odometry to estimate the robot's position and orientation.
  3. Utilizing infrared sensors to measure distances to obstacles.


J.1) Simulator Structure

Quick Bot Overview:

The simulation involves several components working together at every time step:

  1. Timer Object inside the Simulator:
    • Calls the execute function in the Supervisor.
  2. Supervisor (decision maker):
    • Gathers sensor information from the robot (IR, Ticks).
    • Selects a controller and sends sensor data, along with the robot's estimated position and orientation, to the controller (Inputs).
    • Updates the robot’s position and orientation estimate.
  3. Controller:
    • Computes linear and angular velocities ($v$, $\omega$).
    • Converts these velocities to left and right wheel speeds (vel_r, vel_l).
  4. Robot:
    • Receives wheel speed commands from the controller.


J.2) Assignment Breakdown

Part 1: Differential Drive Dynamics

Task: Convert linear and angular velocities to left and right wheel speeds.

Implementation:

Skeleton Code:
- Provides R (wheel radius) and L (distance between wheels).
- Use these to calculate the required speeds.

Part 2: Odometry

Task: Estimate the robot's position and orientation using wheel encoder data.

Implementation:

Part 3: Infrared Sensor Distance Conversion

Task: Convert sensor readings to distances.

Note: the reading-to-distance conversion is a nonlinear function.

Implementation:


J.3) Testing and Debugging

Example Simulation:

Tips for Success:

  1. Read the Manual:
    • Refer to Week 2’s section in the manual for detailed guidance.
  2. Use Debugging Tools:
    • Enable and add fprintf statements in the provided code to monitor intermediate values like position estimates.
  3. Visual Validation:
    • Observe the robot's movement in the simulator to ensure it behaves as expected.


K) Hardware Notes (🎦 video 2.11)

Video: https://youtu.be/16RsgdwjROE


🔙 Previous Part | Next Part 🔜

↩️ Go Back


Z) 🗃️ Glossary

File Definition
Uncreated files Origin Note
atan2 Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
behaviors Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
differential drive robot models Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
dynamics Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
Forward Kinematics Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
kinematic equations Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
odometry Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
perception Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
PID Controller Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
Rotary Encoders Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models
Unicycle Model Module 2 - Introduction to Mobile Robotics and Wheeled Robot Models