The basics of search - Depth first vs breadth first search


In this post, we will explore two basic search algorithms: depth first search and breadth first search.

I want to gain a better understanding of how AI actually works. As a first step, I’ve enrolled in the HarvardX: CS50’s Introduction to Artificial Intelligence with Python course on edX. To learn better, I will be applying the technique of Nobel Prize winner Richard Feynman and try to teach the concepts I’m learning while I’m learning them. That is what these blog posts will be about. You can find the whole CS50 AI course straight from freeCodeCamp on YouTube, or you can do it on edX.

A common task we might give an AI is to search for a solution to something. We might want to find the quickest route on Google Maps, or play a game where the AI needs to find the path to a winning move. These kinds of problems are called search problems. We are searching for a path that leads to a solution, and we also want that path to be the optimal one.

A simple way to visualize a search problem is a maze. We want to find our way through the maze but at every crossing we have to make a decision on what route to take. There is a correct sequence of actions that will take us from the starting point to the goal and we need to find that sequence.

The following text will explore some approaches to do just that.


Terminology

To get going we will need to define some terminology. First we have the concept of an agent. An agent is defined as an entity that can in some way perceive its environment and act upon that environment. An agent can, of course, be a person, but in this context we are thinking of an AI as an agent.

A state is a configuration of the agent and its environment. For example, in a game of chess, every time a move has been made we get a new state. Every state might make different actions available to the agent. In a search problem we also define an initial state: the state where the agent begins, before any actions have been taken.

We can define the relationship between actions and states as a mathematical function: Actions(s) returns the actions that are available in a given state s.

But we also need to define how states and actions relate to each other. To do this we need a transition model. A transition model describes what state we get as a result of taking an action that is available in some other state. This can also be defined as a function: Result(s, a) returns the state resulting from performing action a in state s.
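These two functions can be sketched in a few lines of Python. This is just a toy example of my own: the states are letters and the state space is a hand-made dictionary, not anything prescribed by the course.

```python
# A tiny hand-made state space: each state maps the actions available
# in it to the state that action leads to.
GRAPH = {
    "A": {"go-B": "B", "go-C": "C"},
    "B": {"go-D": "D"},
    "C": {},
    "D": {},
}

def actions(s):
    """Actions(s): the actions available in state s."""
    return list(GRAPH[s])

def result(s, a):
    """Result(s, a): the state we end up in after taking action a in state s."""
    return GRAPH[s][a]
```

So `actions("A")` lists the two moves out of A, and `result("A", "go-B")` gives back B.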

With the help of our transition model we can define the state space: the set of all the different states we can reach from our initial state. This can be represented as a graph, a structure made up of nodes that are connected by lines. A little like this picture that a friendly AI generated for me:

An illustration of a node network.

Well maybe not that messy. So how do we know when we have found a solution? Our search algorithm will also need a goal test. We need to somehow be able to determine if a state is a goal state, the state that we want to reach.

We are not only interested in how to reach a goal. We want to find the optimal path to the goal. In order to do that we need to keep track of path cost. We need to know the cost of a given path.

In the illustration below, we have different states (nodes) that are connected to each other. The number on the line describes the cost of choosing that action.

An illustration of how node can be connected with each other and have different path costs.

In the illustration there is a difference in the cost of different actions. But this is not true for all problems. In many problems the cost for all the actions is the same. This is the case if the only thing we are concerned about is the number of steps we need to take to reach a solution, but the steps themselves are all of equal value. Then the cost for each line could be represented by a constant such as 1.

A basic search algorithm

We now have the basis for a search problem. I earlier mentioned we could describe the state space as interconnected nodes. A node is also a data structure that we can use to store information. When dealing with a search problem we want the node to store the following values: the state it represents, the parent node that generated it, the action that was applied to the parent to reach it, and the path cost from the initial state to this node.
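As a sketch, such a node could look like this in Python (the field names are my own choices, not prescribed by the course):

```python
class Node:
    """One node in the search tree."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the state this node represents
        self.parent = parent        # the node that generated this one
        self.action = action        # the action applied to the parent
        self.path_cost = path_cost  # total cost from the initial state

    def path(self):
        """Walk the parent links back to recover the path from the start."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))
```

Storing the parent is what lets us reconstruct the whole path once we reach the goal: we just follow the parent links back to the initial state.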

Now it is time to define a search algorithm. The way this will work is that we start from a state and explore from there. As we try the options available from the initial state, we will get more options, and hopefully sooner or later we will have found our path to the end goal. In this approach we will store the options that are available to us in a data structure called the frontier. The frontier stores the nodes that we could explore next, but haven’t yet tried.

At the starting point, the frontier will only contain the initial state. Then we will repeat the following steps:

1. If the frontier is empty, stop: there is no solution.
2. Remove a node from the frontier.
3. If the node contains the goal state, we have found the solution.
4. Otherwise, expand the node: find the nodes we can reach from it and add them to the frontier.

An illustration of the basic search process.

Looks good, but there are problems with this approach. What would happen if B also pointed back to A? We could end up in a loop where we just hop back and forth from A to B. To solve this we can introduce another data structure that keeps track of all the nodes we have already explored. If we have already been there, we shouldn’t need to explore it again. So we add a step before expanding the node, where we add it to the explored set. And when we expand the node, we only add the resulting nodes to the frontier if they aren’t already in the explored set (or in the frontier).
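The whole loop, including the explored set, can be sketched as one short Python function. This is my own generalized version: `expand` is assumed to return the states reachable from a given state, and `pop` decides which node to remove from the frontier, a choice that is deliberately left open here.

```python
def search(initial, is_goal, expand, pop):
    """Generic search sketch. expand(s) returns the states reachable
    from s; pop(frontier) decides which node to explore next."""
    frontier = [initial]
    explored = set()
    while frontier:
        state = pop(frontier)        # remove a node from the frontier
        if is_goal(state):           # goal test
            return state
        explored.add(state)          # mark the node as explored
        for neighbour in expand(state):
            # only add nodes we haven't explored or queued already
            if neighbour not in explored and neighbour not in frontier:
                frontier.append(neighbour)
    return None                      # frontier is empty: no solution
```

Even if B points back to A, the explored set keeps us from bouncing back and forth between them.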

We are also faced with another dilemma: when we have more than one node in the frontier, how do we choose which one to pick? In our example we just randomly chose the nodes that led us straight to the goal. But we need to define how we want to do it. And this turns out to be a crucial choice that affects how our algorithm operates.

We have to choose whether our frontier is going to be a stack or a queue data structure. A stack follows the principle of last in, first out, and a queue follows the principle of first in, first out. But what does that mean for our example? Let’s explore the frontier as a stack first.

With the frontier as a stack we will always remove the node that was last added to the stack. This is how it would play out:

An illustration of the search process with the frontier as a stack.

As we can see, having the frontier as a stack means that we will go deep down into a branch of the tree to search for the solution. Only after making sure the solution isn’t available in branch D will we move back up and try branch C. This is why this kind of search is called depth first search (DFS).
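A minimal depth first search sketch in Python, using a plain list as the stack. The graph here is a made-up example of my own, not the one from the illustration:

```python
def depth_first_search(start, goal, neighbours):
    """DFS: the frontier is a stack, so we always expand the newest node."""
    frontier = [start]
    explored = set()
    while frontier:
        state = frontier.pop()       # last in, first out
        if state == goal:
            return True
        explored.add(state)
        for n in neighbours.get(state, []):
            if n not in explored and n not in frontier:
                frontier.append(n)
    return False
```

Because `frontier.pop()` takes from the end of the list, newly added neighbours are always tried before older ones, which is what sends the search deep down one branch at a time.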

Now, let’s see what happens if we let the frontier be a queue. That means we will remove the node that was added to the frontier first, or in other words the one that has been there the longest. Here is an example of how that works:

An illustration of the search process with the frontier as a queue.

The illustration doesn’t have that many nodes so it is maybe not super clear, but here we will always explore one level at a time. We will not go deep in a branch but we will explore all nodes in one level. This broader approach is called breadth first search (BFS).
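Breadth first search only differs in how we remove nodes from the frontier. A sketch, using Python’s `collections.deque` as the queue (again on a made-up graph of my own):

```python
from collections import deque

def breadth_first_search(start, goal, neighbours):
    """BFS: the frontier is a queue, so we always expand the oldest node."""
    frontier = deque([start])
    explored = set()
    while frontier:
        state = frontier.popleft()   # first in, first out
        if state == goal:
            return True
        explored.add(state)
        for n in neighbours.get(state, []):
            if n not in explored and n not in frontier:
                frontier.append(n)
    return False
```

Compared to a stack-based frontier, the only change is removing from the front of the queue instead of the back; that one line is what makes the search go level by level.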

Pros and cons to DFS and BFS

There are pros and cons to DFS and BFS. If we are lucky, DFS might immediately explore the correct branch and quickly find the solution, which means we only have to expand a small number of nodes. However, if we are unlucky, DFS might need to go deep into many branches before finding the solution. Even then, it will usually need fewer resources than BFS, since it only has to keep track of the branch it is currently exploring.

Similarly, if the solution is close to the initial state, BFS might only need to explore a few levels before finding it. But if the solution is far away, BFS will need to explore many different levels and expand a large number of nodes. This can be very memory intensive, as it might need to store many different paths at once. On the bright side, BFS will always find the optimal solution, at least when every step has the same cost.

Generally, we can say that BFS usually finds a solution near the starting point in less time, while DFS will often find a solution further away in less time.

If there are many different paths to a solution, we might end up in a situation where DFS and BFS return different solutions. DFS could, for example, take a much longer path to the solution because it happened to go down a deep branch that ultimately led to a solution, even though a shorter path was available.

The biggest problem with these strategies is that they lack any intelligence: they just explore without guidance. That is why we call them uninformed search strategies, strategies that don’t use any problem-specific knowledge. In the next post, we will explore ways we could search with informed strategies.