## CS 436 Fall 2003 - Program Implementation Notes

The assignment was to write a program that attempts to find the maximum value of a user-provided function of three variables, given a maximum time constraint.

I implemented two different algorithms for the heuristic search:

• Hill climbing with random restarts
• Simulated annealing

I started with the hill climbing algorithm because it is relatively simple to implement. It has two primary weaknesses: determining the successors of a point in continuous space, and its tendency to become trapped in local maxima of the function.

To resolve the first issue, for a given point I generated its successors on a cube centered on that point: the 26 points that differ from the center by -delta, 0, or +delta in each of X, Y, and Z (excluding the center itself). I initially set each delta to one twentieth of the range of the corresponding coordinate. Any time a point was found to have a better function value than all of its neighbors, I cut each delta in half and re-evaluated. After a point remained the best for five such iterations, I considered it a local maximum.
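The 26-point neighbor cube can be sketched as follows. This is a minimal illustration of the scheme described above; the function name `neighbors` and its parameters are my own, not identifiers from the original program:

```python
import itertools

def neighbors(x, y, z, dx, dy, dz):
    """Return the 26 neighbors of (x, y, z) on a cube: every combination
    of offsets -delta, 0, +delta per axis, excluding the center itself."""
    points = []
    for ox, oy, oz in itertools.product((-1, 0, 1), repeat=3):
        if (ox, oy, oz) != (0, 0, 0):
            points.append((x + ox * dx, y + oy * dy, z + oz * dz))
    return points
```

When the center beats all 26 neighbors, the deltas are halved and the cube is regenerated, which is what shrinks the search around a suspected local maximum.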

To resolve the second issue, after finding each maximum, I randomly chose a new point and started again. To more thoroughly cover the search space, instead of choosing randomly from anywhere throughout the space, I subdivided it into 64 sub-spaces by quartering each axis. Each starting point was randomly chosen from within the next sub-space; after 64 restarts, I started over again with the first sub-space.
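The restart scheme above might look like this in Python. Using a generator, and the name `restart_points`, are my own choices for the sketch; the quartering-and-cycling behavior follows the description above:

```python
import itertools
import random

def restart_points(bounds):
    """bounds: [(xmin, xmax), (ymin, ymax), (zmin, zmax)].
    Quarter each axis into 4 intervals (4*4*4 = 64 sub-spaces), then yield
    one random starting point per sub-space, cycling through all 64 forever."""
    quarters = []
    for lo, hi in bounds:
        step = (hi - lo) / 4.0
        quarters.append([(lo + i * step, lo + (i + 1) * step) for i in range(4)])
    while True:
        # Each cell is one sub-space: a (lo, hi) interval per axis.
        for cell in itertools.product(*quarters):
            yield tuple(random.uniform(lo, hi) for lo, hi in cell)
```

This guarantees that every region of the search space gets a restart before any region gets a second one, unlike fully uniform random restarts.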

This seemed to perform very well. For all but the most computationally intensive functions (and therefore the slowest to evaluate), I was able to make several passes through all of the sub-spaces within five seconds. The algorithm was able to find the global maximum correctly for nine different search spaces.

One aspect that could still be improved concerns how I move after finding the best neighbor of a point. Instead of recentering the cube on that neighbor (which would mean recalculating values for a number of the same coordinates on the next pass through the hill climbing loop), I step twice as far in the same direction. If I simply centered the cube on the best neighbor, I would recalculate between 4 and 18 points each time; by making successive cubes touch at only one face, edge, or corner, I reduce this to between 1 and 9 points. I attempted to increase the step to 2.5 times the delta, but I had problems getting this working: if the point overstepped off a cliff and the search had to recover, my implementation had trouble resynchronizing. I think this could be fixed with a little more work, but I ran out of time.

The simulated annealing algorithm turned out to be more straightforward to implement in continuous space, but it presented a number of new challenges. I ran into problems with floating point underflow that I managed to resolve. There are still open issues in choosing a good temperature schedule and tuning the parameters of the algorithm. I found that my implementation did not do as well as the hill climbing one, so the main function chooses hill climbing by default.

After a conversation with John, I modified my simulated annealing code so that as it decreases the temperature, it also decreases the distance from the current point at which it will consider a new point. This turned out to work extremely well: the results of the simulated annealing runs were very comparable to those of the random restart hill climbs.
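A minimal sketch of annealing with a temperature-scaled move radius is shown below. The function name, the geometric cooling schedule, and all parameter values are assumptions for illustration, not values from the original program:

```python
import math
import random

def anneal(f, bounds, t0=1.0, cooling=0.99, steps=2000):
    """Maximize f over the box `bounds` with simulated annealing.
    The move radius shrinks in proportion to the temperature, so the
    search takes wide steps while hot and fine steps while cold."""
    point = [random.uniform(lo, hi) for lo, hi in bounds]
    best = point[:]
    t = t0
    for _ in range(steps):
        # Propose a candidate within a radius that scales with t / t0,
        # clamped back into the bounding box.
        cand = [min(hi, max(lo, c + random.uniform(-1, 1) * (hi - lo) * 0.5 * (t / t0)))
                for c, (lo, hi) in zip(point, bounds)]
        delta = f(*cand) - f(*point)
        if delta >= 0 or random.random() < math.exp(delta / t):
            point = cand
            if f(*point) > f(*best):
                best = point[:]
        t *= cooling  # geometric cooling; one of many possible schedules
    return best
```

Tying the step size to the temperature effectively turns the late, cold phase into a fine-grained local search, which matches the observation above that the results became comparable to random restart hill climbing.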

Mail me at: bwall@cs.montana.edu

Last modified: Dec. 10, 2003