E28 Mobile Robotics: Lab 1
Robot Control
in partnership with david benitez, maila sepri and yavor georgiev
03.11.2005



1. abstract

A series of navigation and obstacle-avoidance functions were written to provide the Magellan robots with six basic motion capabilities. These functions were tested using the Nserver 2D simulator for Nomad robots and were then ported to the Magellan robots by adjusting the appropriate motion constants and replacing some functions with their Mage library equivalents. The functions were then grouped into a Navigation module and integrated with the IPC-based GCM (Global Communications Module) message-passing system, in accordance with the multi-layer architecture we are aiming for.

2. basic theory

PROPORTIONAL CONTROL

Each of the tasks in this lab is accomplished using proportional control methods. Proportional controllers use sensor feedback to calculate the difference between the desired and current states on each iteration through the loop. The controller adjusts the robot's speed and orientation using a signal value that is proportional to the error. For the rotation, translation, arbitrary-location seeking, and waypoint-following functions, the desired states are the angles and x-y coordinates in the functions' arguments. The current states are taken from the robot's odometry. For the wall-following and free-space-following functions, the control signals are proportional to the sonar and IR sensor readings. Since the simulator and Magellan environments are different, we needed a separate proportionality constant for each task on each platform, which we tuned by trial and error.
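As an illustration, a single iteration of such a controller reduces to a few lines of C. The gain, cap value, and names below are illustrative stand-ins, not our exact code; the capping mirrors what our cap*Controller() functions do:

    #define K_P     2.0    /* proportionality constant, tuned by trial and error */
    #define MAX_SIG 45.0   /* safe limit on the control signal */

    /* One proportional-control iteration: the command is proportional to
     * the error between desired and current state, capped to a safe range. */
    double p_control_step(double desired, double current)
    {
        double signal = K_P * (desired - current);
        if (signal >  MAX_SIG) signal =  MAX_SIG;
        if (signal < -MAX_SIG) signal = -MAX_SIG;
        return signal;
    }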

SMOOTHING SENSOR READINGS  

Initially, we based all our environment predictions on the sonars, but those proved quite inaccurate at close range, preventing the robot from safely approaching small and non-solid obstacles. To address this, we incorporated the infrared sensor readings as well, fusing them with the sonars. We set a threshold beyond which the IRs become too noisy, and used the sonar readings whenever the IRs returned something above that threshold.

After doing this, we discovered that the robot jerked a lot as it moved forward, which we attributed to the frequent short spikes in the IR stream. To fix this, we filtered every three consecutive fused readings in time and always used the minimum one. This cleared away the spikes and gave us a much smoother signal as input to our control loop.
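A minimal sketch of this fuse-then-filter scheme follows; the threshold value and function names are assumptions for illustration:

    #define IR_THRESHOLD 24.0  /* assumed value; beyond this the IRs get too noisy */

    /* Use the IR reading at close range, the sonar otherwise. */
    double fuse(double sonar, double ir)
    {
        return (ir < IR_THRESHOLD) ? ir : sonar;
    }

    /* Keep the last three fused readings and return the minimum, which
     * suppresses short upward spikes in the IR stream. */
    static double history[3] = { 1e9, 1e9, 1e9 };  /* seeded large */
    static int idx = 0;

    double filter_min3(double fused)
    {
        double m;
        history[idx] = fused;
        idx = (idx + 1) % 3;
        m = history[0];
        if (history[1] < m) m = history[1];
        if (history[2] < m) m = history[2];
        return m;
    }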

MIGRATION BETWEEN SIMULATOR AND MAGELLAN  

Initially we developed all our functions to work in the simulator, but we soon discovered that the Mage library, which the Magellans use, defines some of the functions differently and sometimes uses different units. To port our code between the two systems without replacing all function calls and recalculating all units, we created wrapper functions using #defines in included header files for every function that differs between the two platforms. The wrappers are identical, so our programs need no changes when we switch between the simulator and the Magellan; we only switch the implementations behind the wrappers by changing the include file we use. For example, UNI_VM() translates to a call to scout_vm() with the appropriate units for the Nomad and to vm() for the Magellan.
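For illustration, the two headers might contain parallel definitions like the following. Only UNI_VM(), scout_vm(), and vm() are confirmed from our code; the argument list and the unit-conversion factors shown are placeholders:

    /* In the header included for the Nserver simulator: */
    #define UNI_VM(trans, steer) \
        scout_vm((trans) * IN_TO_NOMAD, (steer) * DEG_TO_NOMAD)

    /* In the header included for the Magellan: */
    #define UNI_VM(trans, steer) vm((trans), (steer))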

GCM INTEGRATION  

To implement a modular multi-layer architecture, we used the GCM communications system to interface between the different modules. For this lab we implemented the Nav and Control modules. The Control module was a simple program that requested different functionalities from the Nav module by sending it a message with the task to be completed. Tasks included achieve, angAchieve, wayPointFollow, freeSpaceFollow, wallFollow, etc. The Nav module received these messages and invoked the appropriate routines.
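Schematically, the Nav side of this exchange reduces to a dispatch like the sketch below. The message struct and handler signatures are hypothetical; only the task names come from our module:

    typedef enum { ACHIEVE, ANG_ACHIEVE, WAYPOINT_FOLLOW,
                   FREE_SPACE_FOLLOW, WALL_FOLLOW } nav_task_t;

    typedef struct {
        nav_task_t task;
        double x, y, theta;   /* arguments for motion tasks */
    } nav_msg_t;

    extern void achieve(double x, double y, double theta);
    extern void angAchieve(double theta);
    extern void wayPointFollow(void);
    extern void freeSpaceFollow(void);
    extern void wallFollow(void);

    /* Invoked whenever a GCM message arrives from Control. */
    void nav_handle_msg(const nav_msg_t *msg)
    {
        switch (msg->task) {
        case ACHIEVE:           achieve(msg->x, msg->y, msg->theta); break;
        case ANG_ACHIEVE:       angAchieve(msg->theta);              break;
        case WAYPOINT_FOLLOW:   wayPointFollow();                    break;
        case FREE_SPACE_FOLLOW: freeSpaceFollow();                   break;
        case WALL_FOLLOW:       wallFollow();                        break;
        }
    }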

3. overview of tasks, functions and algorithms

3.1. Rotation

This function turns the robot to any specified angular position as quickly as possible by rotating it in the direction of the shortest path.

3.1.a. Rotation Algorithm

The function accepts as its argument any angular value (-∞, ∞) in degrees and turns the robot by the number of degrees specified. Large input angles are adjusted using the getDesiredAngle() function so that the robot does not turn more than 180˚ per function call. A proportional controller determines the rotation speed for each time step. The capRotController() function limits the control signal fed to the robot to safe values.

3.1.b. Angle modulation

In order to minimize the rotation distance and time, the rotation, arbitrary-location, and waypoint-following functions call a subroutine that converts all angle values into their [-180˚, 180˚] equivalents. The getDesiredAngle() function takes the input angle modulo 360˚ and then adds or subtracts 360˚ whenever the remainder's magnitude exceeds 180˚.
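The body of getDesiredAngle() reduces to a few lines; this is a reconstruction from the description above, not the verbatim code:

    #include <math.h>

    /* Reduce any angle in (-inf, inf) to its equivalent in [-180, 180] degrees. */
    double getDesiredAngle(double angle)
    {
        double r = fmod(angle, 360.0);  /* remainder in (-360, 360) */
        if (r >  180.0) r -= 360.0;
        if (r < -180.0) r += 360.0;
        return r;
    }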

3.2. Forward Motion

This function moves the robot forward by the number of inches specified. A proportional controller calculates the translational speed such that the robot arrives at the specified distance quickly and without overshoot. The capTransController() function limits the translational velocity to an appropriate value.

3.3. Attaining an Arbitrary Location and Orientation

This function allows the robot to reach any location and angular orientation in the plane via the shortest path of smooth curves. The function takes as arguments the amount by which the robot should move in the x-, y-, and θ-directions, and it calculates a modified coordinate system to implement proportional controllers for translation and rotation. These controllers move the non-holonomic robot in a smooth arc rather than with a three-step, rotate-translate-rotate method. Nevertheless, the function breaks from the arc mode of control when the robot nears its final position in order to prevent it from doing large loops to achieve the desired orientation before terminating translation.

In order to implement arbitrary location/orientation finding, we employed the algorithm described in Siegwart and Nourbakhsh, pp. 83-85, with a few changes to suit our robot:

ρ = √(Δx² + Δy²),    α = atan2(Δy, Δx) − θc,    θe = θd − θc        [1]

where Δx and Δy denote the remaining offsets to the goal, θc denotes the robot's current orientation with respect to its initial orientation, α the direction of the goal relative to the current heading, and θe the error, or difference between the desired orientation θd and the current one. The translation and rotation control signals are then defined by

v = kρ · ρ,    ω = kα · α + kθ · θe        [2]

The algorithm described in Siegwart and Nourbakhsh assumes the robot starts at a given location and heads towards the origin, but our implementation directs the robot to its goal position using its initial position as the origin.
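In code, one step of this controller might look like the sketch below. The gain values are illustrative only (in the book's analysis the gain on the final-orientation term is negative), and angles are in radians:

    #include <math.h>

    #define K_RHO    0.5   /* translation gain (illustrative) */
    #define K_ALPHA  1.5   /* gain on the heading-to-goal error */
    #define K_THETA -0.4   /* gain on the final-orientation error */

    /* One control step toward a goal offset (dx, dy) with desired final
     * orientation theta_d, given the current orientation theta_c. */
    void arc_control(double dx, double dy, double theta_c, double theta_d,
                     double *v, double *omega)
    {
        double rho     = sqrt(dx * dx + dy * dy);   /* distance to goal */
        double alpha   = atan2(dy, dx) - theta_c;   /* heading error */
        double theta_e = theta_d - theta_c;         /* orientation error */

        *v     = K_RHO * rho;                       /* translation signal */
        *omega = K_ALPHA * alpha + K_THETA * theta_e;  /* smooth-arc rotation */
    }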

3.4. Waypoint Following

This function is an extension of the arbitrary-location attainment function in that it takes a series of (x, y, θ) points, creates an array from them, and moves the robot from one point to the next using the smooth motion algorithm. The inputs specify the amount by which the robot should translate and turn from its current position.

The implementation of this functionality using GCM was a little more complex. Since we were working in a message-passing environment, we first needed to pass all the waypoints as messages to Nav and then tell it to execute the path. For this purpose, Nav maintains a linked list of waypoints and adjusts this list according to the messages it receives. When it receives an execute request, it passes the waypoints to the achieve capability one by one and then halts the robot.
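A sketch of this bookkeeping follows; the struct and helper names are illustrative, achieve() stands for the smooth-motion capability above, and halt_robot() is a hypothetical stand-in for stopping the robot:

    #include <stdlib.h>

    extern void achieve(double x, double y, double theta);
    extern void halt_robot(void);  /* hypothetical stop helper */

    typedef struct waypoint {
        double x, y, theta;
        struct waypoint *next;
    } waypoint_t;

    static waypoint_t *head = NULL, *tail = NULL;

    /* Append one waypoint message to the list. */
    void waypoint_add(double x, double y, double theta)
    {
        waypoint_t *wp = malloc(sizeof *wp);
        wp->x = x; wp->y = y; wp->theta = theta; wp->next = NULL;
        if (tail) tail->next = wp; else head = wp;
        tail = wp;
    }

    /* On an execute request: visit each waypoint in order, then halt. */
    void waypoint_execute(void)
    {
        while (head) {
            waypoint_t *wp = head;
            achieve(wp->x, wp->y, wp->theta);
            head = wp->next;
            free(wp);
        }
        tail = NULL;
        halt_robot();
    }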

3.5. Wall-Following

This function allows the robot to use obstacle-avoidance techniques to navigate out of a maze. The data from the sonar sensors is used to maintain a constant distance between the robot and a wall on its right side. Groups of sonar readings from specific directions are used to determine when the robot should turn to round a corner or stop to avoid a barrier.

Let's assume our robot is following the right wall. Our robot operates on the difference between the back-right and front-right sonars and tries to maintain its distance from the wall.

In the simplest case, the robot is just following a straight wall. Since the difference between the front-right and the back-right sonar readings is 0 and the robot is at a safe distance from the wall, the robot keeps moving straight.


Figure 1. following a straight wall.

If the robot approaches the end of a wall and has to turn right, then the difference between the front-right sonar readings and the back-right sonar readings will be a positive number and this will tell the robot to turn towards the right. (see Figure 2)


Figure 2. turning right.

Since the robot operates on the difference between the front-right and back-right sonars, making a U turn around a wall is very similar to turning right (it's basically turning right twice), and our robot is capable of handling this situation. (see Figure 3)


Figure 3. making a U turn.

If the robot reaches the end of a wall, namely if a wall appears in front of it, then it has to turn left. Although this seems like a special case, the robot can still turn to the left using the difference between the front-right and back-right sonar readings. In this case the difference will be negative (because the front will be blocked and the front readings will be less than the back readings) and this will tell the robot to turn left. Since our robot adjusts its forward speed depending on the front sonar readings, it won't run into the wall. (see Figure 4)


Figure 4. turning left.

What happens when the robot can't turn left or right and can't go forward? In the previous case, the robot could turn left and keep wall following, but if there's a wall on the left, then we have to be careful about not running into it. When the robot is stuck like this (no matter what the difference between the front-right and the back-right sonar readings is) it will rotate left until the front is clear and then start wall-following again. (see Figure 5)


Figure 5. stuck robot at the end of a corridor.
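Putting the cases in Figures 1 through 5 together, one iteration of the wall-follower reduces to something like the sketch below. The gains, thresholds, sign conventions, and the UNI_VM() argument order are illustrative assumptions; UNI_VM() is the wrapper macro from our header files:

    #define K_WALL     1.0    /* steering gain on the sonar difference */
    #define K_TRANS    0.5    /* forward-speed gain on the front reading */
    #define STUCK_DIST 12.0   /* below this in front and on the left, we are boxed in */
    #define TURN_SPEED 30.0   /* in-place rotation speed when stuck */

    /* One wall-following step, given filtered sonar readings. */
    void wall_follow_step(double front_right, double back_right,
                          double front, double left_side)
    {
        /* positive difference steers right, negative steers left */
        double steer = K_WALL * (front_right - back_right);
        /* slow down as the front reading shrinks, so we never hit the wall */
        double trans = K_TRANS * front;

        if (front < STUCK_DIST && left_side < STUCK_DIST)
            UNI_VM(0.0, TURN_SPEED);  /* rotate in place until the front clears */
        else
            UNI_VM(trans, steer);
    }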

3.6. Free-Space Following

This function allows the robot to navigate through hallways, mazes, and rooms by moving in the direction of the greatest open space, as determined by its sonar sensors. In our implementation, which underwent an ungodly number of tweaks until it was perfect, the trans velocity is proportional to a weighted average of the filtered, fused readings of the front three sensor pairs. The steer velocity was initially determined by the direction of the sensor pair returning the largest (most open) reading. This didn't work very well: the robot would come too close to walls on its sides. To fix that, we incorporated more sensor readings into the steer estimate: if the left sensors read closer than the right ones, we would steer right, and vice versa. To impose a threshold beyond which the side sensors have no impact, we used a sigmoid function that smoothly weights a reading down to 0 as it increases past the threshold.
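The sigmoid weighting can be sketched as follows; the threshold and slope values are illustrative:

    #include <math.h>

    #define SIDE_THRESHOLD 30.0  /* beyond this, side readings stop mattering */
    #define SLOPE           0.5  /* how sharply the weight falls off */

    /* Weight near 1 for readings well below the threshold, falling
     * smoothly to 0 as the reading increases past it. */
    double side_weight(double reading)
    {
        return 1.0 / (1.0 + exp(SLOPE * (reading - SIDE_THRESHOLD)));
    }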

3.7. Header files and API

Each of our functions for the tasks described above uses functions previously defined for the Nomad and Magellan robots in the Nclient and Mage libraries. To make the transition between simulation on a Nomad and testing on a Magellan simple, we wrote two header files that declare identical macros (e.g. UNI_VM()) but define them with parallel functions according to the robot (e.g. scout_vm() for the Nomad and vm() for the Magellan). As such, switching between robots is accomplished by switching the header file included in the main file being run. In addition, we wrote another header file that declares the functions for our basic motion tasks.

4. tests

4.1. Nomad Nserver Simulations:

The Nserver 2D simulator allowed us to evaluate each motion function’s performance and to determine appropriate relationships between the two proportionality constants and the control value limits for each task.

4.1.a. Rotation:

The function was tested for positive and negative input angles with desired orientations in each quadrant. The current orientation was printed so that the rotation proportionality constant could be increased until overshoot was observed. The getDesiredAngle() function was adjusted until very large input angles performed correctly as well.

4.1.b. Moving forward:

Positive and negative distances were tested.

4.1.c. Arbitrary:

We first tested pure rotation and x-direction translation cases. Then pure y-direction inputs and goal points in each of the quadrants were simulated, and finally the robot was commanded to return to its starting point by feeding it a series of opposite translation and rotation values. During these tests the algorithm from Siegwart and Nourbakhsh was modified until we redefined the θ values as those given in [1].

4.1.d. Waypoint Following

4.1.e. Wall-following:

We originally designed this function to control the right and left wheels individually rather than the translation and rotation speeds. After building a map in Nserver and running the simulation, we realized that this method requires many proportionality constants, and the relationships between them, to be determined precisely. In order to achieve reliable performance, we switched to the translation/rotation method.

The robot’s tendency to turn circles when a wall was not near enough to its right side prompted us to add a FIND_WALL case to the program. This allows the robot to travel forward until it reaches a wall. Then the simulation was run with the robot starting both near and far from a wall. In addition, we had the robot circle a block to test extended turning, and a maze was created to test navigation around corners and the ability to correct for traveling too near or far from the right wall.

4.1.f. Free-space:

We made a map that included a hallway, open space, and small obstacles and evaluated visually whether the robot traveled in the direction of the most open space. In addition, we dynamically added obstacles during simulation and were pleased to observe that the robot responded as expected.

We encountered some problems while optimizing this capability. Initially, Sam would move too jerkily, which we thought was due to a poor selection of proportional constants. However, adjusting those did not fix the problem. In the end, we tested our code on Frodo, and the jerkiness was gone, probably due to the better state of Frodo's motors. A second problem arose when the robot would get too close to walls and not register obstacles adequately. To fix that, we fused and filtered the sensor pairs, as described above.

4.2. Magellan/API Testing

Transitioning from the simulator to the Magellan required us to change all the constants used in our algorithms. The constants we started with came from the book; when we moved to the robot, we adjusted them empirically until we reached a good balance between smooth motion and speed. To establish the dynamic constraints on the robot's motion, we ran some trials and came up with the minimum and maximum rotational and translational velocities that resulted in acceptable motion.

5. what we learned

In addition to learning how to apply control theory to robot navigation, we discovered the quirks of Magellan robot odometry. While Frodo is able to travel in straight lines, Sam has a systematic error that causes him to veer to the left.

Also, Sam's motors don't accelerate smoothly, and changing velocities results in jerky motion. Frodo's motors are much better. However, some of Frodo's IRs don't seem to work. It also has a dent in sonar 0, which causes it to act funny sometimes.

If the robot hangs and you can’t shut it off, you have to open it and unplug the battery. Also, the wireless is flaky, so restarting the local wireless card on the robot and/or the wireless router in the robot lab helps when it is stuck.

We learned to use GCM and implemented a multi-layer architecture.

The lab also gave us first-hand exposure to sonar bouncing off oblique walls. We dealt with this by averaging the values of a group of sensors.

Writing the functions served as a review and caused us to share and expand our C programming strategies and skills.