In our previous post, we delved into depth-first search and breadth-first search algorithms. One notable drawback of these approaches is their blind exploration until a solution is found. Now, let’s examine the kind of algorithms we can use when we have information about the goal that we can leverage.
I want to gain a better understanding of how AI actually works. As a first step, I’ve enrolled in the HarvardX CS50’s Introduction to Artificial Intelligence with Python course on edX. To learn better, I will be applying the technique of Nobel Prize winner Richard Feynman and try to teach the concepts I’m learning while I’m learning them. That is what these blog posts will be about. You can find the whole CS50 AI course straight from freeCodeCamp on YouTube, or you can do it on edX.
Greedy best-first search
The first informed search strategy we’ll explore is greedy best-first search (GBFS). This strategy relies on something called a heuristic function. The heuristic makes an approximation of a state’s proximity to the goal state. More precisely, it is a function h(n) that takes the state as input and returns an estimate of how near we are to the goal.
So, greedy best-first search expands the node that the heuristic estimates to be closest to the goal. It’s essential to recognize that this is an estimation because if we already knew definitively that it was the closest to the goal, we would have already found the solution.
To illustrate how greedy best-first search works, let’s consider the example of a maze. In a maze, we might know in what direction the goal is, and this is something we can use to help us find the correct path quicker compared to if we would use the depth-first or breadth-first search strategies.
Specifically, in a maze, we can use something called the Manhattan distance as a heuristic. The Manhattan distance assigns every state a distance to the goal equal to the sum of the rows and columns between the two. In other words, it counts how many steps we would need to take vertically and horizontally to get from the state to the goal, ignoring any walls. It could look something like this:
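In code, the Manhattan distance is a one-line calculation. Here is a minimal sketch (not the course's own code), assuming states and the goal are `(row, column)` tuples:

```python
def manhattan_distance(state, goal):
    """h(n): vertical plus horizontal steps from state to goal, ignoring walls."""
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])
```

For example, a state three rows and four columns away from the goal gets a heuristic value of 7, regardless of any walls in between.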
When the GBFS algorithm reaches the first fork, it can make an informed choice about which branch to take. DFS would have explored one of the two at random as deep as possible, while BFS would have explored both branches one level at a time. Greedy best-first search uses the heuristic, in this case the Manhattan distance, to evaluate the nodes. One branch has a distance of 13 and the other 11, so the algorithm chooses the one closer to the goal.
This works well enough for most of the maze, but at one point (see image below), we will reach a crossing where using the Manhattan distance will lead us into a dead end. We must remember that the heuristic is an estimate, and it doesn’t know the shortest route.
After reaching the dead end, the algorithm will once again prioritize the node it perceives to be closest to the goal among the available nodes and ultimately reach the goal.
In this maze, GBFS finds the solution much more efficiently than DFS or BFS would. However, it’s important to recognize that the effectiveness of GBFS depends on the quality of our heuristic; we need to develop a good one.
But is this an optimal maze-solving algorithm? Will it always find the shortest path? Let’s examine another maze:
Applying the GBFS algorithm with the Manhattan distance heuristic, we would go down the right-hand path at the first intersection. The heuristic estimates its distance as 11, which is smaller than 13, and all the subsequent nodes also have distances smaller than 13. However, this path is clearly not the shortest:
This is why GBFS is called a greedy algorithm. While it consistently makes the best local decisions at each stage, it may not always result in an optimal solution.
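The behavior described above can be sketched with a priority queue ordered by h(n) alone. This is an illustrative implementation, not the course's own code; the `neighbors` function (returning the states reachable from a given state) and the heuristic `h` are assumed to be supplied by the caller:

```python
import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    """Always expand the frontier node with the lowest heuristic value h(n)."""
    frontier = [(h(start), start)]   # priority queue keyed on h(n) only
    came_from = {start: None}        # parent links; also marks seen states
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Walk the parent links back to the start to rebuild the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # no path exists
```

Note that the priority is h(n) alone; the cost already paid to reach a node is ignored, which is exactly why the algorithm is greedy.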
A* search
How could we solve this? One approach would be to incorporate the distance traveled. Along the path selected by the GBFS algorithm, we encounter a node with a heuristic value of 12 at the halfway point. While this value is lower than the 13 at the first intersection, we must travel a considerable distance to reach it. Rather than always choosing the node that the heuristic says is closest to the goal, we could prefer a node that looks slightly farther from the goal if we can reach it in fewer steps.
This concept underpins an algorithm known as A* search. In A* search, nodes are expanded based on the lowest value of g(n) + h(n), where g(n) represents the cost to reach the node, and h(n) denotes the estimated cost to reach the goal, as previously discussed.
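A sketch of A* follows the same pattern as greedy best-first search, except the frontier is ordered by g(n) + h(n). As before, `neighbors` and `h` are assumed inputs supplied by the caller, and every step is assumed to cost 1, as in a maze:

```python
import heapq

def a_star_search(start, goal, neighbors, h):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start)]  # (f, g, state)
    came_from = {start: None}
    cost_so_far = {start: 0}           # best known g(n) per state
    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            node = current
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], cost_so_far[goal]
        for nxt in neighbors(current):
            new_g = g + 1  # each maze step is assumed to cost 1
            if nxt not in cost_so_far or new_g < cost_so_far[nxt]:
                # Found a cheaper route to nxt; record it and re-queue.
                cost_so_far[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return None  # no path exists
```

The only structural change from GBFS is the priority key: adding g(n) means a node far along a detour is penalized for the steps already spent reaching it.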
When applying this algorithm, we initially proceed along the same path as the GBFS algorithm. However, as we progress, we reach a point where the cost of continuing along the suboptimal path becomes too expensive:
Continuing along the same path, we would get 15 + 6 for a total cost of 21. Choosing the other path at the intersection instead would yield a total cost of 19 (6 + 13), so opting for the latter is clearly more advantageous. The following nodes on this path all give us a cost of 21, but that is still lower than the ever-increasing cost of the nodes on the original path:
So we did initially explore another path, but in the end, we found the optimal solution to the goal. The A* algorithm will find the optimal solution as long as certain conditions are met. First, the heuristic h(n) must be admissible: it must never overestimate the true cost. It is fine if it underestimates, assuming the goal is closer than it really is, but it must not overestimate. Second, h(n) must be consistent. Mathematically, for every node n and successor n’ with step cost c, the condition h(n) ≤ h(n’) + c must hold. In simpler terms, the heuristic value of a state should never exceed the heuristic value of its successor plus the cost of the step between them.
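For a small, finite state space, the consistency condition can be checked by brute force. The helper below is a hypothetical sketch, not part of the course material; it verifies h(n) ≤ h(n’) + c over every edge, assuming a uniform step cost:

```python
def is_consistent(h, states, neighbors, step_cost=1):
    """True if h(n) <= step_cost + h(n') holds for every node n and successor n'."""
    return all(h(n) <= step_cost + h(n2)
               for n in states
               for n2 in neighbors(n))
```

A consistent heuristic with h(goal) = 0 is also admissible, so passing this check is a good sign the heuristic is safe to use with A*.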
This is the challenge with A* search, finding a good heuristic that helps us find a solution in as few steps as possible but also meets the conditions specified above.
Up to this point, we have only looked at search problems with a single agent. In the next post, we will take a look at adversarial search, where another agent works against us while we are trying to find a solution.