# Explore Simple Game Algorithms with Color Walk: Part 12

We've been exploring and discussing game algorithms using the simple game Color Walk for months, over the course of 11 posts. We started out extremely simple with random and round-robin algorithms, advanced to some obvious greedy algorithms, and wound up discussing a number of graph algorithms. We've discovered a ton of stuff along the way, so it's worth stepping back from the experimentation and details of the individual algorithms to review the ground we've covered and see the big picture.


#### Simple Beginnings

The algorithms started out very simple so that some of the tooling could be built for running and evaluating them. It would have been difficult to debug the batch-run code and a complicated algorithm at the same time, so we made sure all of the evaluation tooling was in place first and gradually ramped up the difficulty of the algorithms from there.

**Round-Robin:** The simplest algorithm possible is purely cycling through the colors for each successive move without putting any thought into what would be a good color choice. Predictably, this algorithm does not perform well at all, but that's not the point. It's an easy algorithm to get right when building up the tooling around the game, and it provides a baseline measurement of performance that any algorithm worth its salt should be able to beat.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Round Robin | 37 | 48.3 | 62 | 4.5
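The rotation itself is a one-liner. Here is a minimal sketch with a placeholder five-color palette (the color names and function are illustrative, not the series' actual code):

```python
from itertools import cycle

# Placeholder palette; the real game has five colored control buttons.
PALETTE = ["red", "green", "blue", "yellow", "purple"]

def round_robin_moves(n_moves):
    """Return the first n_moves color choices, cycling in a fixed order
    with no regard for the board state."""
    rotation = cycle(PALETTE)
    return [next(rotation) for _ in range(n_moves)]
```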

**Round-Robin with Skipping:** Once we have an algorithm in place, it's natural to immediately start thinking about how to improve it. One obvious thing to notice about the round-robin algorithm is that sometimes at the beginning or end of the game it picks a color that doesn't remove any blocks. We can fix this imperfection. Instead of blindly picking the next color in the rotation every time, it's easy to check if the next color will remove any blocks, and if it doesn't, to skip that color and move on to the next color in the rotation. This optimization makes an incremental improvement to the basic round-robin algorithm, and it's an idea we can use throughout our algorithm development.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
RR with Skipping | 37 | 46.9 | 59 | 4.1
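The skip is a small guard in front of the rotation. In this sketch, `blocks_removed` is a hypothetical callback standing in for the game's own block-counting code:

```python
PALETTE = ["red", "green", "blue", "yellow", "purple"]  # placeholder colors

def next_skipping_color(blocks_removed, start=0):
    """One round-robin step with skipping: scan at most one full rotation
    starting at index `start`, and return the index of the first color
    that removes any blocks, or None if no color can (end-of-game case)."""
    for offset in range(len(PALETTE)):
        i = (start + offset) % len(PALETTE)
        if blocks_removed(PALETTE[i]) > 0:
            return i
    return None
```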

**Random Choice:** Round-robin is not the only choice for a basic algorithm. Another option for a baseline algorithm is to randomly choose the next color with no thought whatsoever. This strategy may seem useless, but in some cases it's very hard to beat a random algorithm. It's always worth a shot to try it out, and then you at least have a simple baseline measurement of what you want your algorithm to beat. If you can't do any better than random, then you might as well use the random choice with its fast execution. Obviously, in this case random is not a good choice.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Random Choice | 60 | 80.2 | 115 | 10.5

**Random Choice with Skipping:** As with round-robin, we can employ the strategy of avoiding moves that don't remove blocks with random choice. We simply keep picking another random color until we chance upon one that removes blocks before committing to the move. This strategy is a better way to implement random choice, as it's more along the lines of "make a random move, but not a stupid move." With this type of random choice, saying that an algorithm under consideration does no better than random will actually mean something.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Random with Skipping | 43 | 53.1 | 64 | 4.5

#### The Algorithm to Beat

**The Greedy Algorithm:** The first "real" algorithm that actually does calculations to attempt to determine the next best move also turns out to be a fairly powerful algorithm. The greedy algorithm looks at how many blocks are removed with each color, and then picks the color that removes the most blocks. This strategy significantly improves on the basic algorithms in every way while remaining quite fast because it's light on computation. The greedy strategy is great when it works: it's easy to understand, easy to implement, and it executes quickly because it only considers immediately available information.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Greedy | 31 | 39.8 | 48 | 3.5
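A minimal sketch of the greedy step, using a toy stand-in for the game: the board is a grid of color numbers, the cleared region grows from the left edge, and choosing a color absorbs every block of that color connected to the region. This model (and every name in it) is a simplification for illustration, not the series' actual code:

```python
def left_edge(board):
    """Sentinel cells just left of column 0 that seed the cleared region."""
    return {(r, -1) for r in range(len(board))}

def clear_color(board, cleared, color):
    """Return the cleared set after choosing `color`: every block of that
    color reachable from the region through same-colored, 4-adjacent
    neighbors joins the cleared region."""
    rows, cols, grown = len(board), len(board[0]), set(cleared)
    frontier = list(cleared)
    while frontier:
        r, c = frontier.pop()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in grown and board[n[0]][n[1]] == color):
                grown.add(n)
                frontier.append(n)
    return grown

def greedy_move(board, cleared, colors):
    """Pick the color that removes the most blocks right now."""
    return max(colors, key=lambda c: len(clear_color(board, cleared, c)))
```

On a 3x3 board where color 2 can remove four blocks but color 1 only three, the greedy step picks color 2.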

**Greedy Look Ahead (GLA):** The GLA algorithm is an extension of the greedy algorithm. Instead of picking the move that removes the most blocks, why not look ahead and pick the move that will open up the board to remove the most blocks on the next move? This algorithm gets more complicated to implement because now we need to track which move has removed which blocks to figure out how the most blocks can be removed in a two-move sequence. Also, this algorithm is no longer technically a greedy algorithm: by definition a greedy algorithm always chooses the best immediate option using only currently available information, while GLA incorporates some foresight. However, since it's an extension of a greedy algorithm and still tries to maximize block removal, we'll refer to it in the same terms. It does manage to improve upon the base algorithm, too.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Greedy Look Ahead | 28 | 37.0 | 45 | 3.1

**Greedy Look Ahead-N (GLA-N):** Once we've looked ahead one extra move, there's nothing stopping us from looking ahead more moves, except for the fact that as we look more moves ahead, the exponential explosion of moves to be considered will overwhelm us. In a sense, the GLA-N algorithm, with 'N' denoting how many moves we're looking ahead, is just a generalization of the last two algorithms, with GLA-1 being equivalent to the greedy algorithm that only looks at the next move, and GLA-2 being equivalent to GLA that looks at the next two moves. GLA-3 improves upon the previous algorithms, but then the performance plateaus while taking much longer to run for each game.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Greedy Look-Ahead-3 | 25 | 34.2 | 40 | 2.7
Greedy Look-Ahead-4 | 25 | 33.3 | 39 | 2.6
Greedy Look-Ahead-5 | 25 | 33.1 | 41 | 2.8
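The generalization is a short recursion. This sketch reuses the same toy stand-in for the game (grid of color numbers, cleared region seeded from the left edge — a simplification, not the series' code), and shows where the exponential cost comes from: scoring `len(colors)**n` sequences per move:

```python
def left_edge(board):
    """Sentinel cells just left of column 0 that seed the cleared region."""
    return {(r, -1) for r in range(len(board))}

def clear_color(board, cleared, color):
    """Absorb every `color` block connected to the cleared region."""
    rows, cols, grown = len(board), len(board[0]), set(cleared)
    frontier = list(cleared)
    while frontier:
        r, c = frontier.pop()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in grown and board[n[0]][n[1]] == color):
                grown.add(n)
                frontier.append(n)
    return grown

def lookahead_score(board, cleared, colors, depth):
    """Largest cleared-set size reachable within `depth` more moves."""
    if depth == 0:
        return len(cleared)
    return max(lookahead_score(board, clear_color(board, cleared, c),
                               colors, depth - 1) for c in colors)

def gla_move(board, cleared, colors, n):
    """GLA-N: pick the first color of the best n-move sequence."""
    return max(colors, key=lambda c: lookahead_score(
        board, clear_color(board, cleared, c), colors, n - 1))
```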

#### Plugging in Heuristics

**Max Perimeter:** It turns out that the strategy of looking for the move that removes the most blocks is only one of many related strategies that can be plugged into the same basic algorithm structure. The way of deciding which move is the best is called a heuristic, and we have other heuristics to choose from for that purpose. Instead of choosing the move that removes the most blocks, which maximizes the cleared area, we can choose the move that maximizes the perimeter between the cleared space and the leftover blocks. This new algorithm has the same structure as the greedy algorithm, but with a different heuristic for deciding on the optimal color choice. This heuristic manages to outperform the base greedy algorithm.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Max Perimeter | 29 | 37.4 | 44 | 3.2
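Swapping the heuristic only changes the scoring function. In the same toy stand-in for the game (grid of color numbers, cleared region seeded from the left edge; a simplification for illustration), the perimeter is the number of edges between cleared cells and remaining blocks:

```python
def left_edge(board):
    """Sentinel cells just left of column 0 that seed the cleared region."""
    return {(r, -1) for r in range(len(board))}

def clear_color(board, cleared, color):
    """Absorb every `color` block connected to the cleared region."""
    rows, cols, grown = len(board), len(board[0]), set(cleared)
    frontier = list(cleared)
    while frontier:
        r, c = frontier.pop()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in grown and board[n[0]][n[1]] == color):
                grown.add(n)
                frontier.append(n)
    return grown

def perimeter(board, cleared):
    """Count the edges between cleared cells and remaining blocks."""
    rows, cols = len(board), len(board[0])
    return sum(1
               for (r, c) in cleared
               for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
               if 0 <= n[0] < rows and 0 <= n[1] < cols and n not in cleared)

def max_perimeter_move(board, cleared, colors):
    """Pick the color leaving the longest cleared/uncleared boundary."""
    return max(colors, key=lambda c: perimeter(
        board, clear_color(board, cleared, c)))
```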

**Max Perimeter Look Ahead:** Since we're plugging in a different heuristic for the greedy algorithm, we can also do the same thing for the GLA algorithm, and look ahead some number of moves before choosing the move that will maximize the perimeter in the future. The algorithm was structured to make it easy to swap in any heuristic we could think of for assessing the next move, so evaluating this one with look-ahead was easy. We found that the max perimeter heuristic didn't improve as much here, and actually started to backtrack in performance when it looked too far ahead.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Max Perimeter Look-Ahead-2 | 27 | 35.0 | 44 | 2.8
Max Perimeter Look-Ahead-3 | 27 | 35.0 | 41 | 2.9
Max Perimeter Look-Ahead-4 | 26 | 34.8 | 43 | 3.3
Max Perimeter Look-Ahead-5 | 28 | 34.9 | 46 | 2.9

**Perimeter-Area Hybrid:** With two heuristics, we can think about ways of combining them to make a different determination about what the next move is. One way of combining heuristics is to run one for some number of moves first, and then finish off the game with the other one. When we run the max perimeter heuristic for 20 moves and then run the max area heuristic, we get a different performance profile than either one on its own. Unfortunately, it's not better.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Perimeter-Area Hybrid | 31 | 39.0 | 49 | 3.8
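Structurally, this kind of hybrid is just a switch on the move counter. In this sketch, both heuristics are hypothetical callables supplied by the caller, standing in for the real max perimeter and max area implementations:

```python
def hybrid_chooser(first, second, switch_at):
    """Build a move chooser that applies heuristic `first` for the first
    `switch_at` moves and heuristic `second` for the rest of the game."""
    def choose(move_number, state):
        heuristic = first if move_number < switch_at else second
        return heuristic(state)
    return choose
```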

**Perimeter-Area Hybrid Look Ahead:** Even though the first hybrid attempt wasn't better than the perimeter or area heuristics on their own, we still want to see how it does when looking ahead. We see that here it does improve on the max perimeter heuristic a little—likely because maximizing the perimeter at the end of the game doesn't help as much as maximizing the number of blocks removed—but it can't surpass the powerful GLA algorithm.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Perim-Area Hybrid Look-Ahead-2 | 27 | 35.2 | 43 | 3.2
Perim-Area Hybrid Look-Ahead-3 | 27 | 33.5 | 41 | 2.7
Perim-Area Hybrid Look-Ahead-4 | 26 | 33.2 | 41 | 3.1
Perim-Area Hybrid Look-Ahead-5 | 28 | 33.0 | 41 | 2.5

**Deep-Path (and Look Ahead):** Another idea for solving a board efficiently is to quickly clear a path to the middle of the board before spreading out and clearing away more blocks. While this exact strategy would be complicated to implement, we can approximate it by maximizing the perimeter-to-area ratio. If the perimeter of the cleared section is large and the area is small, that would be equivalent to having a deep path cleared into the board. Testing out a pure deep-path heuristic showed some pretty disappointing results, but that makes sense because such a strategy should perform poorly in the second half of the game, even with look-ahead.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Deep-Path | 51 | 74.8 | 104 | 9.4
Deep-Path Look-Ahead-2 | 50 | 74.9 | 112 | 9.8
Deep-Path Look-Ahead-3 | 50 | 75.2 | 112 | 9.8
Deep-Path Look-Ahead-4 | 50 | 75.4 | 112 | 9.5
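The ratio itself is easy to see in the toy stand-in for the game (cleared region seeded by sentinel cells in column -1; a simplification for illustration): a narrow tunnel into the board scores higher than a wide blob of the same area:

```python
def perimeter(board, cleared):
    """Count the edges between cleared cells and remaining blocks."""
    rows, cols = len(board), len(board[0])
    return sum(1
               for (r, c) in cleared
               for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
               if 0 <= n[0] < rows and 0 <= n[1] < cols and n not in cleared)

def area(cleared):
    """Cleared board cells, excluding the (r, -1) sentinels that seed
    the region from the left edge."""
    return sum(1 for (_, c) in cleared if c >= 0)

def deep_path_score(board, cleared):
    """Perimeter-to-area ratio: a long, narrow cleared path scores high,
    a wide blob scores low."""
    a = area(cleared)
    return perimeter(board, cleared) / a if a else 0.0
```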

**Path-Area Hybrid (and Look Ahead):** Combining the deep-path heuristic with a max area heuristic is what we really want to see with this strategy, and we do see significant improvements compared to a pure deep-path heuristic. However, this approach still does not best the GLA algorithm. While it may be easier for a human to play with this strategy, it's not the best one when you have the raw computational power of a computer playing the game.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Path-Area Hybrid | 35 | 44.2 | 54 | 3.5
Path-Area Hybrid Look-Ahead-2 | 34 | 40.8 | 49 | 3.0
Path-Area Hybrid Look-Ahead-3 | 31 | 39.0 | 47 | 3.2
Path-Area Hybrid Look-Ahead-4 | 32 | 38.7 | 45 | 2.7

#### The Graph Algorithms

**Breadth-First Search (BFS):** When you picture the color choices available for each sequential move in the game, the moves can easily be drawn as a graph with start and end vertices. A path from start to end corresponds to a set of moves that makes up a game, and there are a number of algorithms available for searching this graph. One basic algorithm is BFS, which searches each vertex that's one edge away, then two edges away, and so on until the end vertex is found. Because the graph is unmanageably large, we need to give BFS a head start with GLA-5 for the first moves. With this hybrid algorithm, we actually best GLA-5 by itself, on average, and we have a new best algorithm to beat.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
BFS with Greedy Look-Ahead-5 | 26 | 32.7 | 40 | 2.8
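A sketch of the search on the toy stand-in for the game (grid of color numbers, cleared region seeded from the left edge; a simplification, not the series' code). Exhaustive BFS is tractable only on toy boards, which is exactly why the real hybrid seeds it with GLA-5 first:

```python
from collections import deque

def left_edge(board):
    """Sentinel cells just left of column 0 that seed the cleared region."""
    return {(r, -1) for r in range(len(board))}

def clear_color(board, cleared, color):
    """Absorb every `color` block connected to the cleared region."""
    rows, cols, grown = len(board), len(board[0]), set(cleared)
    frontier = list(cleared)
    while frontier:
        r, c = frontier.pop()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in grown and board[n[0]][n[1]] == color):
                grown.add(n)
                frontier.append(n)
    return grown

def bfs_solve(board, colors):
    """Shortest move sequence that clears the whole board: explore all
    one-move states, then two-move states, and so on."""
    rows, total = len(board), len(board) * len(board[0])
    start = frozenset(left_edge(board))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        cleared, moves = queue.popleft()
        if len(cleared) - rows == total:     # every real cell cleared
            return moves
        for color in colors:
            nxt = frozenset(clear_color(board, cleared, color))
            if len(nxt) > len(cleared) and nxt not in seen:  # skip no-ops
                seen.add(nxt)
                queue.append((nxt, moves + [color]))
    return None
```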

**Depth-First Search (DFS):** Instead of searching the move graph in order of how far away the moves are, we can follow each path of moves to its end before backing up and going down the next path. This strategy is depth-first instead of breadth-first, and we need to be careful to limit the search so that it doesn't take forever on this type of graph. We can limit the search to only follow paths that remove blocks on every move, we can stop the search on a particular path if we've already found one that's shorter, and we can run GLA-5 first to shrink the graph that we end up searching with DFS. With these limitations we get pretty good performance, although not quite as good as BFS or GLA.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
DFS with Greedy Look-Ahead-5 | 25 | 34.8 | 43 | 3.9

**Dijkstra's Algorithm:** While both BFS and DFS would search the entire graph if we didn't limit them in some way, it is possible to optimize the search to try to find a nearly shortest path more quickly. Dijkstra's algorithm does just that by assigning a value to each vertex it reaches and putting those vertices in a priority queue. Then it will pull the maximum-valued vertex out of the queue, assess the children of that vertex, and put them in the queue. The algorithm continues until it finds the end vertex, and it does pretty well against the competition. Dijkstra's algorithm is especially consistent at finding a good solution, with a low standard deviation between games.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Dijkstra's Algorithm | 29 | 33.1 | 40 | 1.9
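A sketch of the priority-queue search loop on the toy stand-in for the game. The series' exact vertex valuation lives in the earlier posts; here, purely as an assumed stand-in, a vertex is valued by blocks cleared per move taken, and the highest-valued vertex is pulled first (Python's `heapq` is a min-heap, so values are negated):

```python
import heapq

def left_edge(board):
    """Sentinel cells just left of column 0 that seed the cleared region."""
    return {(r, -1) for r in range(len(board))}

def clear_color(board, cleared, color):
    """Absorb every `color` block connected to the cleared region."""
    rows, cols, grown = len(board), len(board[0]), set(cleared)
    frontier = list(cleared)
    while frontier:
        r, c = frontier.pop()
        for n in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and n not in grown and board[n[0]][n[1]] == color):
                grown.add(n)
                frontier.append(n)
    return grown

def priority_search(board, colors):
    """Pull the highest-valued vertex, assess its children, push them,
    and repeat until the end vertex (a fully cleared board) comes up."""
    rows, total = len(board), len(board) * len(board[0])
    start = frozenset(left_edge(board))
    heap = [(0.0, 0, start, [])]   # (-value, tie-breaker, cleared, moves)
    tie = 0
    best = {start: 0}              # fewest moves seen for each vertex
    while heap:
        _, _, cleared, moves = heapq.heappop(heap)
        if len(cleared) - rows == total:
            return moves
        for color in colors:
            nxt = frozenset(clear_color(board, cleared, color))
            n_moves = len(moves) + 1
            if len(nxt) == len(cleared) or best.get(nxt, 1 << 30) <= n_moves:
                continue
            best[nxt] = n_moves
            value = (len(nxt) - rows) / n_moves   # blocks cleared per move
            tie += 1
            heapq.heappush(heap, (-value, tie, nxt, moves + [color]))
    return None
```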

**GLA-Dijkstra Hybrid:** We can pair Dijkstra's algorithm with other algorithms to make new hybrids and try to improve performance further. Pairing it with the powerful GLA algorithm ends up besting every other algorithm we've explored. The ultimate strategy seems to be to clear about half the board with moves that maximize the number of blocks removed, then carefully select the optimal path to the end of the game with Dijkstra's algorithm. We could further optimize this algorithm by tweaking the switch-over point between the two sub-algorithms.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
GLA-Dijkstra Hybrid | 25 | 31.8 | 37 | 2.2
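The composition itself is simple. In this sketch, every callable is a hypothetical stand-in (a greedy stepper, a switch-point test, and an exact finisher) for the real GLA and Dijkstra implementations, including the demo stub at the bottom:

```python
def two_phase_solver(greedy_step, is_switch_point, search_finish):
    """Run greedy-style moves until the switch-over point, then hand the
    position to the exact search for the endgame."""
    def solve(state):
        moves = []
        while not is_switch_point(state):
            color, state = greedy_step(state)
            moves.append(color)
        return moves + search_finish(state)
    return solve

# Tiny stub game for demonstration: `state` counts blocks cleared out of
# 10; the greedy stub clears 2 per move, the finisher 1 per careful move.
demo = two_phase_solver(lambda s: ("G", s + 2),
                        lambda s: s >= 5,       # switch near half the board
                        lambda s: ["D"] * (10 - s))
```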

**Dijkstra-GLA Hybrid:** Another idea was to try switching the order of GLA and Dijkstra's algorithm, and seeing how that performed. Alas, it did not do as well, so it appears that Dijkstra's algorithm really shines at optimizing moves at the end of the game while GLA is best at clearing the way at the beginning of the game.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Dijkstra-GLA Hybrid | 28 | 36.3 | 44 | 3.1

**Max-Perimeter-Dijkstra Hybrid:** For one last attempt at another hybrid algorithm, we looked at combining the max perimeter heuristic with Dijkstra's algorithm. This strategy makes some sense: it tries to increase the possibilities for clearing blocks in the future by maximizing the perimeter of blocks adjacent to the empty zone, and then searches for that optimal path to the end. It ended up working pretty well, doing better than GLA and BFS in the worst case, but falling slightly short on boards that had potential for fewer moves.

Algorithm | Min | Mean | Max | Stdev
---|---|---|---|---
Max-Perimeter-Dijkstra Hybrid | 27 | 32.8 | 38 | 2.3

#### A Big Solution Space

We've summarized 20 or more algorithms here, depending on how you count them, and that's far from a complete list of possibilities. Other combinations of these algorithms could be explored, we could come up with other heuristics to help decide on move values, and we could mix-and-match more algorithms in complex and intricate ways with varying parameters to try to find that one perfect move sequence for each game. The solution space is incredibly large.

Exploring all of these different algorithms teaches us a lot about what the solution space looks like, and how the problem (the Color Walk game in this case) behaves when attacked in different ways. We've seen that some foresight is important, making good assessments of the progress in the game on any given move is quite helpful, and combining strengths of different algorithms works well. And we can do much better than random or a simple round-robin approach!

We also have developed an understanding of a number of classic algorithms and learned some good problem-solving strategies. The articles behind all of these algorithm summaries go into much more depth and detail on each of them, and they show how to develop an algorithm within the constraints of a specific problem. You can also look at the complete code here, and experiment with it for yourself. Have some fun with it, and see if you can beat the GLA-Dijkstra Hybrid algorithm.

**Article Index**

Part 1: Introduction & Setup

Part 2: Tooling & Round-Robin

Part 3: Random & Skipping

Part 4: The Greedy Algorithm

Part 5: Greedy Look Ahead

Part 6: Heuristics & Hybrids

Part 7: Breadth-First Search

Part 8: Depth-First Search

Part 9: Dijkstra's Algorithm

Part 10: Dijkstra's Hybrids

Part 11: Priority Queues

Part 12: Summary
