E28 Mobile Robotics: Lab 3
robot localization and mapping
in partnership with david benitez, maila sepri and yavor georgiev
04.19.2005



1. abstract

For this lab we created a localization system that runs sonar and IR data through a particle (Bayesian) filter to deduce the robot's current position from a pre-built map of the environment while the robot is freespace following. Initially we created the filter in the simulator, where it localized and tracked successfully. Transitioning to the robot, however, was difficult, and our final real-world implementation was far from perfect due to the poor quality of the sonar and IR data. To tackle this problem we performed an extensive sensor calibration, and we also pre-filtered the incoming data to eliminate specular bounces and other sensor artifacts. In the end, our implementation was able to track the robot continuously, with intermittent inaccuracies and slight noise.

2. task overview

For a task description, visit the lab page here.

The first task in the lab was to create a simulated environment where we could test the filter. To do this, we started in the Nserver simulator, where we created a test setup by adding obstacles to the map. It was important to saturate the environment with small, uniquely placed obstacles to prevent symmetries in the setup, which would have confused the particle filter. Then we proceeded to code up a particle filter based on sample code provided on the lab website.

The particle filter is a probabilistic localization mechanism that propagates multiple state hypotheses and evaluates them against incoming sensory data. It takes sensor measurements at the robot's current real-world location and uses the pre-built Nserver map of the environment to match each hypothesis against the current sensor reading configuration, selecting out the likeliest hypotheses. Alongside this, each hypothesis is also updated with a motion model that depends only on the location change presumed to result from the motor commands issued (dead reckoning uses this step alone, without the probabilistic sensory update). The benefit of a particle filter over other localization techniques is that it can recover from global mispredictions, since it maintains multiple hypotheses, and that it stores just a set of samples to represent the probability distributions involved, not the distributions themselves.

After the particle filter successfully localized in the simulator, we moved the code over to the Magellan robot and adapted it to the Mage library. We discovered, however, that the simulator sensor model was far from adequate: the actual sensor data we were receiving was much too noisy for the filter to work in its original configuration. To deal with this issue we took three major steps (to be considered as extensions):

  • Fused and Median-filtered sensors: In order to make sure the robot gets adequate sensor information at both close and far range, we fused the data coming in from the IR and sonar sensors into one continuous distance scale, and ran the readings through a median filter to clean out sporadic noise (which was especially relevant for the IRs); see the sketch after this list.
  • Sensor calibration: In order to use sonar readings to match predicted and actual sensor configurations, we needed to make sure the sensor readings return accurate physical distances both in the IR and sonar range.
  • Pre-processing of sensor data: Sonars are susceptible to specular bounces, which cause them to return max readings. To deal with this we attempted two methods: biased median squared error, and omission of bad readings.
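
A minimal sketch of what the fusion and median filtering might look like in C. The crossover threshold, window size, and all function names here are our own illustrative choices, not part of the Mage library:

    #include <string.h>

    #define IR_SONAR_THRESHOLD 24.0   /* inches; assumed IR-to-sonar crossover */
    #define MEDIAN_WINDOW 5           /* assumed filter window length */

    /* Fuse one IR/sonar pair onto a single distance scale: trust the IR
       up close, the sonar beyond the crossover point. */
    double fuse(double ir, double sonar) {
        return (ir < IR_SONAR_THRESHOLD) ? ir : sonar;
    }

    /* Temporal median of the last MEDIAN_WINDOW fused readings for one
       sensor; a single spike cannot move the median, so sporadic noise
       is suppressed. */
    double temporal_median(const double history[MEDIAN_WINDOW]) {
        double s[MEDIAN_WINDOW];
        memcpy(s, history, sizeof s);
        for (int i = 1; i < MEDIAN_WINDOW; i++)          /* insertion sort */
            for (int j = i; j > 0 && s[j - 1] > s[j]; j--) {
                double t = s[j]; s[j] = s[j - 1]; s[j - 1] = t;
            }
        return s[MEDIAN_WINDOW / 2];
    }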

3. theory/equations and algorithms

Our particle filter has four stages: initialization, motion update, sensory update, and resampling; the last three are executed repeatedly to continuously localize the robot. The following parameters were adjusted for best performance: number of particles, noise in the translational component of velocity, noise in the rotational component of velocity, and noise in the sonar measurements. The base units we used for our filter were inches and degrees.
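
In outline, the filter reduces to a loop like the following minimal sketch. The Particle struct and every name here are our own illustrative choices, not Mage or Nserver calls; the three helpers are sketched in the sections below:

    #define NUM_PARTICLES 500        /* tuning parameter */
    #define NUM_SONARS 16

    typedef struct {
        double x, y, theta;          /* pose in inches and degrees */
        double weight;               /* particle weight */
    } Particle;

    Particle particles[NUM_PARTICLES];

    void motion_update(Particle *p, int n, double trans, double steer, double dT);
    void sensory_update(Particle *p, int n, const double actual[]);
    void resample(Particle *p, int n);

    /* One pass of the filter: the three repeated stages, in order. */
    void particle_filter_step(double trans, double steer, double dT,
                              const double actual[NUM_SONARS]) {
        motion_update(particles, NUM_PARTICLES, trans, steer, dT);
        sensory_update(particles, NUM_PARTICLES, actual);
        resample(particles, NUM_PARTICLES);
    }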

During the initialization stage, all particles are initialized to the starting position of the robot in world coordinates, and all particle weights are set equal.
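
In code, the initialization might look like this sketch (using the Particle struct from the loop sketch above; start_x, start_y, and start_theta stand for the robot's known starting pose in world coordinates):

    /* Every particle starts at the robot's known pose, equally weighted. */
    void init_particles(Particle *p, int n,
                        double start_x, double start_y, double start_theta) {
        for (int i = 0; i < n; i++) {
            p[i].x = start_x;
            p[i].y = start_y;
            p[i].theta = start_theta;
            p[i].weight = 1.0 / n;
        }
    }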

At every timestep, our algorithm would run a freespace-following routine (borrowed in part from Dr. Maxwell's code), which would generate a translational and rotational velocity for the robot, so it would freespace follow in the environment without crashing into things. These trans and steer values were fed to the vm() function, which made the robot move physically, and also to the second step of the algorithm: the motion update. The motion update would take each particle and use a simple motion model to predict how far that particle would have moved given the trans and steer commands issued.

x(t) = x_o + dT * trans * cos(theta_o + dT * steer / 2)

y(t) = y_o + dT * trans * sin(theta_o + dT * steer / 2)

theta(t) = theta_o + dT * steer

With this model, the new {x, y, theta} of each particle can be estimated from the particle's location {x_o, y_o, theta_o} at the previous iteration and the trans and steer parameters passed to the controller at this iteration. This is essentially dead reckoning for each particle: you assume that if you tell the robot's motors to go forward at 1 m/s for one second, the robot actually moves a meter. In practice this never holds exactly (due to imperfect odometry, etc.), so the motion update step is only an approximate guess of where each particle ends up. At this step it is important to add some noise to the trans and steer parameters, so that the new generation of particles has some added spread over the old generation; the new particles can then account for global mispredictions and help the robot recover.
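
A sketch of the motion update for one particle, implementing the equations above. Angles are in degrees, so we convert before calling cos/sin; add_trans_noise and add_steer_noise are our own names for the noise model described next:

    #include <math.h>

    #define DEG2RAD (M_PI / 180.0)

    double add_trans_noise(double trans);    /* noise model: sketched below */
    double add_steer_noise(double steer);

    void motion_update_particle(Particle *p, double trans, double steer,
                                double dT) {
        double t = add_trans_noise(trans);
        double s = add_steer_noise(steer);
        /* heading at the midpoint of the timestep, converted to radians */
        double mid = (p->theta + 0.5 * dT * s) * DEG2RAD;
        p->x     += dT * t * cos(mid);
        p->y     += dT * t * sin(mid);
        p->theta += dT * s;
    }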

The noise model for this step looks like this:

trans += trans * gaussian(0, error_trans_percent) + gaussian(0, error_trans_const)
steer += steer * gaussian(0, error_steer_percent) + gaussian(0, error_steer_const)

Thus, we could add both a constant noise term and a term based on a percentage of the current value of trans or steer. Note the use of random sampling from a Gaussian distribution gaussian(μ, σ), where the mean μ = 0 and the standard deviation σ is set either to a percentage of the command value or to a constant.
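
A sketch of how such a sampler might be written, using the Box-Muller transform to draw from a normal distribution. The error constants are placeholders for the tuned values, and all names here are our own:

    #include <math.h>
    #include <stdlib.h>

    #define ERROR_TRANS_PERCENT 0.10  /* assumed: 10% of the command value */
    #define ERROR_TRANS_CONST   0.50  /* assumed: constant term */

    /* Draw one sample from N(mu, sigma^2) via the Box-Muller transform. */
    double gaussian(double mu, double sigma) {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* in (0,1) */
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return mu + sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    }

    double add_trans_noise(double trans) {
        return trans + trans * gaussian(0, ERROR_TRANS_PERCENT)
                     + gaussian(0, ERROR_TRANS_CONST);
    }
    /* add_steer_noise is identical, with the steer error constants. */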

The third step, the sensory update, assigns a weight to each particle according to how likely that particle is to hold the true robot coordinates, given the incoming sensor readings. Because of the random sampling in the motion update stage we end up with a cluster of particles, and in this step we figure out which one most closely resembles the robot's current physical location. To do this, for each particle we need a set of predicted sonar readings (what the 16 sonars would read if the robot were at the location denoted by the particle), which we compare against the actual sensor readings the robot is getting at its current location in the real world. To get the predictions we used Dr. Maxwell's matchSonarReadings() function from kalmanLocalize.c, which works from a pre-constructed, to-scale Nserver map of the environment and uses some trigonometry to find each sensor's distance to the closest wall. The closer the predicted readings are to the actual ones, the likelier it is that the particle is correct about where the robot is situated. Ideally, we compute the sum squared error Σ(predicted − actual)² between the predicted and actual sets and assign each particle a weight so that

weight = 1 / exp(Σ(predicted − actual)² / error_sensor²)

error_sensor is determined empirically. The exponential allows us to get a greater span between highly probable and highly improbable particles, thus accelerating the selection process.
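
In code, the weighting might look like this sketch (getting the predicted readings is delegated to Dr. Maxwell's matchSonarReadings(); ERROR_SENSOR is a placeholder for the empirically tuned constant, and 1/exp(x) is written as exp(-x)):

    #include <math.h>

    #define ERROR_SENSOR 10.0   /* inches; placeholder for the tuned value */

    /* Weight a particle by the SSE between its predicted sonar readings
       and the actual readings, pushed through the exponential. */
    double particle_weight(const double predicted[NUM_SONARS],
                           const double actual[NUM_SONARS]) {
        double sse = 0.0;
        for (int i = 0; i < NUM_SONARS; i++) {
            double d = predicted[i] - actual[i];
            sse += d * d;
        }
        return exp(-sse / (ERROR_SENSOR * ERROR_SENSOR));
    }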

This step was the most difficult to implement on the real robot, although it worked easily in the simulator. The trouble was that the incoming sensory data was far too noisy and inaccurate for our actual values to match the predictions, which were computed geometrically in the simulator and were reasonably accurate. As a result, the actual readings were noisy enough that all particles appeared to have similar errors, which made our resampling essentially useless. To deal with this problem, we tried a few things:

  • Fused and filtered sensors: we fused IR and sonar data to get reliable readings at close and long range, then ran the incoming readings through a temporal median filter to stabilize them.
  • Sensor calibration: since the predicted readings from the simulator were highly accurate, we wanted to get good distance readings from both IRs and sonars, to minimize any error added by improper calibration. We took a series of measurements, starting with a step size of 2 in. at close range, then gradually increasing to 5 in., and then to 10 in. as we got well into the range of the sonars, which resulted in the following calibration curve:

    Figure 1. Calibration curve for combined sensors.

    There is an abrupt jump in the curve at the threshold value, where we switch from using IR to using sonar readings. Then we fitted curves through the IR and sonar graphs independently:

    Figure 2. Calibration curve for sonars.


    Figure 3. Calibration curve for IR sensors.
    Note the exponential error mode of the IRs and the linear error mode of the sonars. Using these fitted curves, we got better accuracy on our sensor readings.
  • Median Squared Error (MSE): even after calibration, our sensor data was a pretty miserable representation of the environment, largely because of specular bounces in the sonars. Initially we were using SSE as shown above, but specularities would insert random max-range readings into our sensor array, and for any particle this error would dwarf the error generated by genuine differences between predicted and actual readings, which is what the particle filter really needs to work. To deal with this, we moved to MSE: we would calculate (predicted − actual)² for each sensor, store the 16 results in an array, and take the median of that array as our final error estimate. This method was somewhat more immune to specularities, especially when we introduced a bias toward returning readings on the bigger side of the median in the array, but the results were still not good enough.
  • Discarding specularities: in a final attempt to deal with specularities, we went back to SSE but entirely discarded max-range readings (specular bounces) from the sensor array for a given particle (see the sketch after this list). Looking at the results, we realized that of the 16 sensor readings we were getting for each particle, only about 5-6 were valid, non-specular readings, so this step discarded the roughly two-thirds of our readings that contributed only noise to the particle weight estimates. Although we still wish we could have used more sensors at a time for more reliable results, this gave us good enough data to work with.
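
A sketch of that final error computation, with suspected specular bounces left out of the sum. MAX_SONAR_RANGE is an assumed saturation value standing in for whatever reading the sonars actually return on a specular bounce; NUM_SONARS comes from the loop sketch above:

    #define MAX_SONAR_RANGE 255.0   /* assumed max-range reading, inches */

    /* SSE over only the sensors whose actual reading is below max range,
       i.e., not a suspected specular bounce. */
    double sse_no_speculars(const double predicted[NUM_SONARS],
                            const double actual[NUM_SONARS]) {
        double sse = 0.0;
        for (int i = 0; i < NUM_SONARS; i++) {
            if (actual[i] >= MAX_SONAR_RANGE)
                continue;                    /* discard specular bounce */
            double d = predicted[i] - actual[i];
            sse += d * d;
        }
        return sse;
    }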

The final step in the process is resampling, where a linear probabilistic resampling method creates a new generation of particles in which the highest-probability particles are represented many more times than the lower-probability ones. Resampling is accomplished by stacking up all particles with respect to their weights and then using a regular offset with wraparound (started at a random point in the weight space) to select the new particles. This way the highest-probability particles, which take up most of the room in weight space, are selected multiple times and are represented strongly in the new generation of particles.
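
A sketch of that selection step, in the form usually called low-variance or systematic resampling (again using the Particle struct from the loop sketch above). This variant starts the regular offset within the first interval of weight space, which avoids the explicit wraparound but selects the same way:

    #include <stdlib.h>
    #include <string.h>

    void resample(Particle *p, int n) {
        static Particle old[NUM_PARTICLES];
        memcpy(old, p, n * sizeof(Particle));

        double total = 0.0;                  /* stack up the weights */
        for (int i = 0; i < n; i++) total += old[i].weight;

        double step = total / n;             /* regular offset */
        double pos = ((double)rand() / RAND_MAX) * step;  /* random start */
        double cum = old[0].weight;
        int i = 0;
        for (int k = 0; k < n; k++) {
            while (cum < pos && i < n - 1)   /* walk the weight stack */
                cum += old[++i].weight;
            p[k] = old[i];                   /* heavy particles repeat */
            p[k].weight = 1.0 / n;           /* reset weights afterwards */
            pos += step;
        }
    }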

The last three steps are then repeated at a given frequency, with the highest-probability particle on each pass through the loop serving as our best guess of the robot's physical location.

4. results

After only minor modifications, our code worked in the simulator. The particle filter was able to track the robot with reasonable accuracy, for long periods of time, around complex spaces. The following video shows tracking around a test map, with the first segment showing the first few seconds, and the second segment showing the robot motion after a significant amount of time.


first few seconds



particle filter after some time

After converting all functions for use on the robot, and making all adjustments described in the previous section, we were finally able to get successful tracking in a small constructed space.


particle filter tracks a real robot as the robot wanders around.


Figure 4. The path of the robot in the video. The pink watermark shows the real-world obstacles.

Tracking worked with reasonable accuracy for runs of up to a minute. Occasionally the best particles would get diverted away from the correct location (due to symmetries and bifurcation behaviors), but the particle filter was able to recover, which is what we expected, since global recovery is one of the reasons to choose a particle filter over a Kalman filter in the first place.

5. what we learned

  • particle filtering with sonars+IR the way we did it isn't reliable enough for practical applications
  • the mechanics of a particle filter: how each step works, and how to make each step better: the linear probabilistic particle selection procedure, using an exponential for the weighting process, etc.
  • what is NOT a particle filter (simply thresholding away unlikely hypotheses, etc.)
  • sensor signals need significant processing before they can be used for any distance-based algorithm: specularities pose a significant problem, even when you know about them
  • a possible way to extend this lab would be to improve the sonar model Dr. Maxwell wrote. The main problem was that our sensor-matching function couldn't tell when a specularity would occur; it seems possible to estimate the real-world critical angle and use trigonometry in the matching function to predict specular bounces. We didn't have enough time to implement this, but it seems entirely possible.