Dynamic Clustering Tutorial

From Robowiki


"Dynamic Clustering" (DC) is a term specific to the Robocode world that combines a k-nearest neighbor algorithm with some kernel density estimation. The term, coined by ABC, is a reference to k-means clustering, but actually has a lot more in common with k-nearest neighbors. The core idea behind a DC system is that for each decision you make, you examine the current battle situation, compare it to a log of previous situations to find those that are most similar, then use the data collected from those previous situations to decide what to do. This allows your bot to adapt to how much data it has collected so far, as well as to change how it classifies that data on-the-fly, because it's always re-examining the original data.

If you're already familiar with k-nearest neighbors, k-means clustering, and/or kernel density, you already understand a lot of what's behind Dynamic Clustering. If not, don't worry...

Logging and identifying similar situations

Translating to data points

A data point is a location in an imaginary data space. It's not unlike a physical point in space, except that in code, we are not limited to three dimensions. The Robocode battlefield is a two-dimensional data space, filled with (x, y) coordinates which could represent a bot, a bullet, a wave, etc. We can calculate the distance between two bots using the Distance formula: <math>\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}</math>. This is called the Euclidean distance between the bots, and it can be expanded to be used in higher-dimensional data spaces.

For instance, in 5 dimensions, we'd use <math>\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2+(p_1-p_2)^2+(q_1-q_2)^2}</math>. This is how we measure the similarity between two situations: by translating them to data points and then measuring the Euclidean distance between them. You can use as many dimensions as you want — 6 or more dimensions is not uncommon — but we'll stick to 3 for a moment, for the sake of simplicity and readability. Once you've got this stuff down, adding more dimensions is trivial.
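In Java, the squared distance generalizes to any number of dimensions with a simple loop. A minimal sketch (the class and method names here are our own, not from any particular bot):

```java
// Distance helpers for arbitrary-dimension data points.
public class DataSpace {
    // Squared Euclidean distance between two points of equal length.
    public static double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Real Euclidean distance; in practice the square root is often
    // skipped, since it does not change the ordering of distances.
    public static double distance(double[] a, double[] b) {
        return Math.sqrt(distanceSq(a, b));
    }
}
```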

To translate a situation into a data point, we start with some of the numbers that describe that situation. Let's take the speed, relative heading, and energy of the enemy bot. We want to normalize each value to cover the same range, between 0 and 1; if we didn't do this, attributes with large numbers would have more weight than attributes with small numbers. So to translate the current state of the enemy bot into a data point, we might have something like this in the onScannedRobot event:

// Absolute angle from our bot to the enemy
double absoluteBearing = e.getBearingRadians() + getHeadingRadians();
double[] dataPoint = new double[3];
dataPoint[0] = Math.abs(e.getVelocity()) / 8;        // speed: 0 to 8
dataPoint[1] = Utils.normalAbsoluteAngle(e.getHeadingRadians() - absoluteBearing)
    / (2 * Math.PI);                                 // relative heading: 0 to 2*PI
dataPoint[2] = e.getEnergy() / 100;                  // energy: 0 to 100

The relative heading should really be reversed if the enemy's going counter-clockwise, and energy is not really a useful attribute, but you get the idea. We can then add this data point to a collection that we will search through later.
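For illustration, the same translation as a standalone helper. This is a hypothetical sketch; the names and the [0, 1] clamping are our own additions (clamping guards against raw values that stray outside their expected range, such as energy above 100):

```java
// Builds the 3-dimensional data point described above.
public class Situations {
    public static double[] toDataPoint(double velocity,        // -8 to 8
                                       double relativeHeading, // radians, 0 to 2*PI
                                       double energy) {        // 0 to 100
        double[] p = new double[3];
        p[0] = clamp01(Math.abs(velocity) / 8);
        p[1] = clamp01(relativeHeading / (2 * Math.PI));
        p[2] = clamp01(energy / 100);
        return p;
    }

    // Keep each normalized attribute inside [0, 1].
    static double clamp01(double v) {
        return Math.max(0, Math.min(1, v));
    }
}
```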

Identifying similar situations

Once we have built up a log of past situations, we need to search the log for situations similar to the current one (i.e., the "nearest neighbors"). This can involve a lot of calculations, so it's important to optimize this code as much as possible. Using a kd-tree can greatly increase the speed of this process, especially in long battles, but that's outside the scope of this tutorial. Given our log of data points that describe past situations, a data point to describe the current situation, and how many situations we want to find, here is the pseudocode for a well optimized "brute force" algorithm for finding the situations most similar to the current one (originally derived from Chalk):

nearestNeighbors(allPoints, currentPoint, numNeighbors, numDimensions)
    define nearestPoints, collection of size <numNeighbors>
    fill nearestPoints with the first <numNeighbors> points from allPoints

    define nearestDistancesSq, double array of size <numNeighbors>
    set nearestDistancesSq[0] to distanceSquared(currentPoint, nearestPoints[0])
    define longestDistanceSq, initialize to nearestDistancesSq[0]
    define longestIndex, initialize to 0

    for x = 1 to numNeighbors-1
        nearestDistancesSq[x] = distanceSquared(currentPoint, nearestPoints[x])
        if (nearestDistancesSq[x] > longestDistanceSq)
            longestDistanceSq = nearestDistancesSq[x]
            longestIndex = x

    for x = numNeighbors to allPoints.length-1
        thisDistanceSq = distanceSquared(currentPoint, allPoints[x])
        if (thisDistanceSq < longestDistanceSq)
            nearestPoints[longestIndex] = allPoints[x]
            nearestDistancesSq[longestIndex] = thisDistanceSq
            find the new maximum value in nearestDistancesSq
                set longestDistanceSq to the maximum value
                set longestIndex to the array index of the maximum value

    return nearestPoints

Note that we do not actually calculate the square root here, as we would to get the real Euclidean distance. Square root calculations are slow, and skipping them does not change which distances are larger than others, so the same neighbors come out on top.


As you can see, the execution speed of the above algorithm will scale linearly with the size of allPoints and the number of dimensions in the data space. If your bot adds a new data point each tick, it will have roughly 25,000 data points in a 35-round match. Even with only 3 dimensions, that would mean over 100,000 basic math operations just to find the similar situations, and you're doing that every time you want to aim. So you will eventually need to start deleting old items from your log when you reach a certain number (maybe 30,000), or you could replace this method with a kd-tree solution.
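The pseudocode translates to Java fairly directly. A sketch (class and method names are our own, and it assumes the log already holds at least numNeighbors points):

```java
import java.util.List;

public class NearestNeighbors {
    // Brute-force k-nearest-neighbors over squared Euclidean distance.
    public static double[][] nearestNeighbors(List<double[]> allPoints,
            double[] currentPoint, int numNeighbors) {
        double[][] nearest = new double[numNeighbors][];
        double[] nearestDistSq = new double[numNeighbors];
        double longestDistSq = -1;
        int longestIndex = 0;

        // Seed with the first numNeighbors points, tracking the worst one.
        for (int i = 0; i < numNeighbors; i++) {
            nearest[i] = allPoints.get(i);
            nearestDistSq[i] = distanceSq(currentPoint, nearest[i]);
            if (nearestDistSq[i] > longestDistSq) {
                longestDistSq = nearestDistSq[i];
                longestIndex = i;
            }
        }

        // Scan the rest, replacing the current worst whenever a point is closer.
        for (int i = numNeighbors; i < allPoints.size(); i++) {
            double distSq = distanceSq(currentPoint, allPoints.get(i));
            if (distSq < longestDistSq) {
                nearest[longestIndex] = allPoints.get(i);
                nearestDistSq[longestIndex] = distSq;
                // Find the new worst of the kept neighbors.
                longestDistSq = -1;
                for (int j = 0; j < numNeighbors; j++) {
                    if (nearestDistSq[j] > longestDistSq) {
                        longestDistSq = nearestDistSq[j];
                        longestIndex = j;
                    }
                }
            }
        }
        return nearest;
    }

    static double distanceSq(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }
}
```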

Analyzing the similar situations

So we've looked through our log and found a bunch of situations very similar to our current situation. What next? Well, there are many things to consider before answering that question. We might be aiming at the enemy, or we might be trying to dodge his bullets. We might use GuessFactors to determine our firing angles, or we might use a Play-It Forward simulation to see how the enemy moved in those previous situations. Once we have the firing angles, there are also a variety of kernel density estimators to choose from to decide which angle is the best.

Storing firing angles

First, when logging data points, we need to also log some additional information about that situation that will let us generate a firing angle. This additional information could be a GuessFactor, a bearing offset, a displacement vector, or a log of the enemy's movements immediately following that point in time (for Play-It Forward simulation). Since it probably gives the most bang for the buck, we will use GuessFactors here.

Hopefully, you already understand how Waves and GuessFactors work; if not, go read up on those pages. All we need to add to our usual wave handling is:

  • Translate the firing situation into a data point and add the data point to our wave.
  • When the wave breaks (in a gun) or if we're hit by a bullet (in a surfing movement), add an entry to a HashMap, associating the data point with the GuessFactor that this wave produced.
  • Later, when we've found our group of similar situations (the nearest neighbor data points), we can look up the GuessFactor that was produced in each situation and easily translate it into a firing angle.
  • Important note: Make sure you use the same object to insert into your collection of data points and the HashMap. If you generate the data point twice, the points you pull from your search will not work as keys into the hash map. If you're not using a kd-tree, you don't even really need a separate collection — you can just use the .keySet() from the HashMap.
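A sketch of that bookkeeping (class and field names are our own). Java arrays hash by identity, which is exactly why the same double[] object must go into both the wave and the HashMap:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GfLog {
    public static class Wave {
        public double[] dataPoint; // situation when the wave was fired
        // ... fire time, source location, bullet power, etc.
    }

    private final List<double[]> points = new ArrayList<double[]>();
    private final Map<double[], Double> guessFactors =
        new HashMap<double[], Double>();

    // Called when a wave breaks (gun) or we're hit by a bullet (surfing).
    // Stores the *same* array object in both collections.
    public void onWaveBroken(Wave wave, double guessFactor) {
        points.add(wave.dataPoint);
        guessFactors.put(wave.dataPoint, guessFactor);
    }

    // Only works for the exact object returned by the neighbor search,
    // since arrays use identity-based hashCode/equals.
    public Double guessFactorFor(double[] dataPoint) {
        return guessFactors.get(dataPoint);
    }
}
```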

Making decisions

Once we have found our group of similar situations, we can look up or generate the firing angle attached to each one of them. In our example, this would mean fetching the GuessFactor from a HashMap (keyed off the situation's data point), calculating the Maximum Escape Angle (MEA) for the bullet power we choose (gun) or detect (surfing), multiplying the GuessFactor by the MEA, and accounting for orbit direction.
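As a sketch, the GuessFactor-to-angle conversion, using the standard Robocode bullet speed formula and the classic asin(8 / bulletVelocity) approximation of the Maximum Escape Angle (the class and method names are our own):

```java
public class FiringAngles {
    // Standard Robocode bullet speed formula.
    public static double bulletVelocity(double power) {
        return 20 - 3 * power;
    }

    // Maximum Escape Angle: the classic asin(8 / bulletVelocity)
    // approximation, with 8 being the maximum bot speed.
    public static double maxEscapeAngle(double power) {
        return Math.asin(8 / bulletVelocity(power));
    }

    // direction is +1 for clockwise orbiting, -1 for counter-clockwise.
    public static double firingAngle(double guessFactor, double power,
                                     int direction) {
        return direction * guessFactor * maxEscapeAngle(power);
    }
}
```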

What we do with these firing angles depends on whether we're aiming or surfing. If we're aiming, we want to choose the firing angle that would have hit the enemy bot in the greatest number of these previous situations. If we're surfing, we instead measure the danger of the firing angle we'd reach with each of our movement options, based on how close it is to each of the similar situations' firing angles.

Aiming example

Let's say we're aiming, we've found our similar situations, we've decided on a bullet power, and the enemy is distanceToEnemy away from us on the battlefield. The pseudocode for determining a firing angle might look like this:

firingAngle(similarPoints, guessFactors, bulletPower, distanceToEnemy)
    define bestAngle
    define bestDensity, initialize to 0
    define botWidthAngle, set to abs(36/distanceToEnemy)
    for each dataPointA in similarPoints
        define density, initialize to 0
        define firingAngleA, set to
            firingAngleFromGuessFactor(guessFactors{dataPointA}, bulletPower)

        for each dataPointB in similarPoints
            if (dataPointA != dataPointB)
                define firingAngleB, set to
                    firingAngleFromGuessFactor(guessFactors{dataPointB}, bulletPower)

                define ux = (firingAngleA - firingAngleB) / botWidthAngle
                if (abs(ux) <= 1)
                    density = density + 1

        if (density > bestDensity)
            bestAngle = firingAngleA
            bestDensity = density

    return bestAngle

First, we calculate the angle that would cover a bot width at this distance, which, after a bit of circle math, is about (36/distance) in radians. For each of the candidate firing angles, we pick the one that is within a bot width of the most other angles. Pretty simple! There are lots of other kernel density estimators to choose from; the above code is a "Uniform" kernel with a "bandwidth" of one bot width. DCBot uses Uniform, too, but with half a bot width. Chalk and Diamond use Gaussian density estimators. Ali, instead of choosing from the firing angles in the similar situations (the outer loop), just tests a bunch of angles at regular intervals.

The firing angle returned would be relative to the angle directly towards the enemy (Head-On Targeting), and would need to be adjusted for the enemy's direction as well (multiplied by -1 if he's going counter-clockwise).
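The aiming pseudocode translates to Java along these lines (a sketch with our own names, taking the candidate firing angles as precomputed inputs). Unlike the pseudocode, it also counts each angle against itself, which adds one to every density and so never changes the winner:

```java
public class UniformKernelAiming {
    // Pick the candidate firing angle that falls within one bot-width
    // of the most other candidate angles (uniform kernel density).
    public static double bestAngle(double[] firingAngles,
                                   double distanceToEnemy) {
        double botWidthAngle = Math.abs(36 / distanceToEnemy);
        double bestAngle = 0;
        int bestDensity = -1;
        for (double angleA : firingAngles) {
            int density = 0;
            for (double angleB : firingAngles) {
                double ux = (angleA - angleB) / botWidthAngle;
                if (Math.abs(ux) <= 1) {
                    density++; // within one bandwidth: count it
                }
            }
            if (density > bestDensity) {
                bestDensity = density;
                bestAngle = angleA;
            }
        }
        return bestAngle;
    }
}
```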

Wave Surfing example

Now, suppose we're Wave Surfing. This means we're choosing among several different movement options each tick (e.g., orbit clockwise / stop / orbit counter-clockwise). What we need in this case is to measure the danger of the (enemy perspective) firing angle we would reach if we were to choose each movement option, so we can choose the safest one.

You could calculate the kernel density for each firing angle similarly to how we did above for aiming (the inner loop), but there are two problems with that. First, we would often have lots of angles with the same danger, so we would likely sit right on the edge of areas considered dangerous. It's much better to stay as far away as possible from the dangerous areas, so we want a smoother danger graph; a Gaussian density estimator works well for this. Second, since surfing data is much sparser than targeting data, the similar situations you find will not always be that similar, so weighting each situation by the inverse of its distance to the one being surfed helps a lot.

checkDanger(similarPoints, surfWaveDataPoint, guessFactors, bulletPower,
    projectedDistanceToEnemy, projectedFiringAngle)
    define density, initialize to 0
    define botWidthAngle, set to abs(36/projectedDistanceToEnemy)
    for each dataPointA in similarPoints
        define firingAngleA, set to
            firingAngleFromGuessFactor(guessFactors{dataPointA}, bulletPower)

        define ux = (firingAngleA - projectedFiringAngle) / botWidthAngle
        density += e^(-0.5 * (ux^2))
            / distance(dataPointA, surfWaveDataPoint)

    return density

The Gaussian density calculation may seem like voodoo, but let's look at some examples. For two angles that are exactly the same, ux will be zero, and <math>e^0 = 1</math>; at a difference of one bandwidth, <math>e^{-0.5} \approx 0.61</math>; at two bandwidths, <math>e^{-2} \approx 0.14</math>, and from there it continues quickly towards zero. Basically, within the bandwidth (in our case, bot width), there is a high amount of danger, but after that it falls off fairly quickly.
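The kernel itself is one line of Java; a tiny sketch to make those numbers concrete:

```java
public class GaussianKernel {
    // Gaussian kernel: ux is the angle difference measured in bandwidths
    // (here, bot widths). Peaks at 1 for identical angles, then decays.
    public static double density(double ux) {
        return Math.exp(-0.5 * ux * ux);
    }
}
```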

Two other things to keep in mind in surfing:

  • You want to surf the situation / data point from when the wave was fired, not the "current" situation.
  • You should not calculate the nearest neighbors every tick; you can cache them for each wave after calculating them once. If you get hit or detect a bullet collision, clear the cache and refresh the nearest neighbors on all your waves.


So, assuming some prerequisite knowledge of managing Waves and working with GuessFactors, that's how Dynamic Clustering works. If you want to examine some good open source DC bots to learn more of the nitty-gritty details, DCBot, Chalk, and Diamond should all serve you pretty well. They each do things a bit differently, which should also give you a wider view of what you can do with Dynamic Clustering.