
Inference with Model Checking - How to Give AI Knowledge


In this post, we will explore the theory behind how we can encode knowledge and apply it to AI. As humans, we possess information and facts about the world. We use this information to draw conclusions and figure things out. However, for AI, this process is not as straightforward.

I want to gain a better understanding of how AI actually works. As a first step, I’ve enrolled in the HarvardX: CS50’s Introduction to Artificial Intelligence with Python course on EdX. To enhance my learning, I will be applying the technique of Nobel Prize winner Richard Feynman and trying to teach the concepts I’m learning while I’m learning them. That is what these blog posts will be about. You can find the whole CS50 AI course on FreeCodeCamp’s YouTube channel or on EdX.

An illustration of a detective.


Knowledge-based agents

A machine cannot utilize knowledge right out of the box. We need to construct knowledge-based agents. A knowledge-based agent possesses an understanding of what it means to know something and can employ algorithms that leverage this knowledge to solve problems or deduce information.

As humans, we can use language to reason about logic. We take some information, reason about it, and make deductions based on that information to reach a conclusion. However, using only language will not be sufficient for machines, and we will need to formalize it to encode notions of truth and falsehood in AI. Similar to how we approached search problems, we need to define some terms and concepts. Additionally, we will also need to learn a few new symbols.

Propositional logic

The first term we need to understand is sentence. In the context of AI, a sentence is an assertion about the world expressed in a knowledge representation language. A knowledge representation language is simply a method by which we represent knowledge in a computer or AI.

Our main focus will be on a type of logic called propositional logic. This form of logic is based on propositions, or statements about the world. To represent these propositions, we use something called proposition symbols, often just letters (P, Q, R, etc.) that represent some fact.

For example, P might signify ‘it is sunny’, or a similar statement. Each symbol represents a sentence or a fact about the world. Additionally, we need a way to connect these symbols to each other. To do this, we use something called logical connectives. Here are some of the most commonly used ones:

¬ not
∧ and
∨ or
→ implication
↔︎ biconditional

NOT (¬)
Placing the ¬ symbol in front of a statement reverses its meaning, for example turning ‘it is sunny’ into ‘it is not sunny’.

A truth table is a tool that enables us to visualize how logical statements (or propositions) can be combined and the resulting truth values of those combinations. The ¬ operation can be depicted in a truth table as follows:

P      ¬P
false  true
true   false

This table illustrates how applying the ¬ operator to a proposition inverses its truth value, effectively showcasing the operation of logical negation.
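Since this course uses Python, it is worth noting that Python’s built-in not operator mirrors logical negation directly. A quick sketch that reproduces the table above:

```python
# Reproduce the truth table for negation with Python's built-in `not`.
for P in (False, True):
    print(f"P = {P}, ¬P = {not P}")
```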

AND (∧)
The AND operator, denoted by ∧, combines two statements. For the combined statement to be true, both individual statements must be true:

P      Q      P ∧ Q
false  false  false
false  true   false
true   false  false
true   true   true

This truth table demonstrates that the P ∧ Q statement is only true when both P and Q are true. In all other combinations, where at least one of the statements is false, the result of P ∧ Q is false.

OR (∨)
The OR operator, symbolized by ∨, also combines two statements. The result of an OR operation is true if at least one of the two statements is true, including the case where both are true:

P      Q      P ∨ Q
false  false  false
false  true   true
true   false  true
true   true   true

This truth table shows that P ∨ Q is true in three scenarios: when P is true, when Q is true, or when both P and Q are true. The only time P ∨ Q is false is when both P and Q are false.

Implication (→)
This operation can be read as ‘if P, then Q’. It means that if P is true, then Q must also be true for the statement to hold. The truth table for implication is a bit more nuanced than for other operations. Interestingly, if P is false, the implication makes no claim about Q, and the result is still considered true:

P      Q      P → Q
false  false  true
false  true   true
true   false  false
true   true   true

This table shows that P → Q is true in three cases: when both P and Q are true, when P is false and Q is false, and when P is false but Q is true. The only scenario where P → Q is false is when P is true but Q is false.

Biconditional (↔︎)
The biconditional operation signifies a condition that is mutual or goes in both directions. It is true only when P and Q have the same truth value:

P      Q      P ↔︎ Q
false  false  true
false  true   false
true   false  false
true   true   true

This truth table illustrates that P ↔︎ Q is true in two scenarios: when both P and Q are false, and when both are true. Conversely, the statement is false when P and Q have different truth values. The biconditional effectively states that P and Q are equivalent; they either both hold true or both are false.
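Like negation, the remaining connectives can be expressed with plain Python booleans, which makes it easy to double-check the tables above. Python has no dedicated operators for implication or the biconditional, so this sketch rewrites them as (not p) or q and p == q respectively:

```python
from itertools import product

# The connectives as plain Python functions. Implication P → Q is
# logically equivalent to (not P) or Q, and the biconditional P ↔ Q
# is true exactly when both sides have the same truth value.
connectives = {
    "P ∧ Q": lambda p, q: p and q,
    "P ∨ Q": lambda p, q: p or q,
    "P → Q": lambda p, q: (not p) or q,
    "P ↔ Q": lambda p, q: p == q,
}

# Print a truth table for each connective.
for name, f in connectives.items():
    print(name)
    for p, q in product([False, True], repeat=2):
        print(f"  {str(p):<5}  {str(q):<5}  {f(p, q)}")
```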

Models and knowledge base

Next, we need a method to determine what is actually true in the world. For this purpose, we use a model. In a model, we assign a truth value to every propositional symbol, essentially creating a ‘possible world’. The number of different models depends on the number of symbols: with n symbols, there are 2ⁿ possible models.

Having established the symbols and connectives necessary to construct knowledge, the final step is representing that knowledge. This is achieved through a knowledge base. A knowledge base is essentially a collection of sentences that our AI recognizes as true. We inform the AI about a specific situation it is encountering. The AI then incorporates this information into its knowledge base, using it to draw conclusions about the world.
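One lightweight way to represent a model in code is a dictionary mapping each proposition symbol to a truth value; itertools.product can then enumerate all 2ⁿ possible worlds. A minimal sketch (the symbol names here are just for illustration):

```python
from itertools import product

symbols = ["P", "Q"]

# Each model assigns True or False to every symbol. With n symbols
# there are 2**n assignments, enumerated here as dictionaries.
models = [dict(zip(symbols, values))
          for values in product([False, True], repeat=len(symbols))]

print(len(models))   # 4, i.e. 2**2
print(models[0])     # {'P': False, 'Q': False}
```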

Entailment

To understand the types of conclusions AI can make, we need to grasp the concept of entailment.

α ⊨ β

This is read as ‘α entails β’, meaning that in every model where the sentence α is true, the sentence β is also true. Consider a simple example: if I know it is Sunday in April, I can also conclude it is April.

We aim to encode this notion of entailment into our AI systems. By providing the AI with certain sentences, we expect it to use these sentences stored in its knowledge base to infer and draw conclusions. This capability enables AI to make logical deductions and reason about the world based on the information it has been given.

Inference

Inference is the process of deriving new sentences from old ones. We provide the AI with certain sentences, which it stores in its knowledge base. Then, it utilizes an inference algorithm to deduce that some other sentences must also be true.

Let’s consider an example. We define some symbols:
P: It is Wednesday.
Q: It is raining.
R: Staffan will go for a run.

We then give our AI a knowledge base:
KB: (P ∧ ¬Q) → R
This means P and not Q implies R.

Translated into human language, this reads as: ‘If it is Wednesday and it is not raining, then Staffan will go for a run’.

Imagine we have some additional information in the knowledge base:
(P ∧ ¬Q) → R, P, ¬Q

With this information, we can make some inferences. We know that P is true and that ¬Q is true. Therefore, (P∧¬Q) is true, which implies R must also be true. Thus, we can infer that R is true, meaning Staffan will go for a run.
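We can sanity-check this inference in Python by fixing the known facts and asking which value of R keeps the knowledge base true. A small illustrative sketch:

```python
# Facts: P (it is Wednesday) is true, Q (it is raining) is false.
P, Q = True, False

# Test both values of R against the knowledge base (P ∧ ¬Q) → R, P, ¬Q,
# writing the implication as (not antecedent) or R.
for R in (False, True):
    kb = ((not (P and not Q)) or R) and P and (not Q)
    print(f"R = {R}: knowledge base is {kb}")

# Only R = True makes the knowledge base true, so the KB entails R:
# Staffan will go for a run.
```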

This illustrates what inference algorithms aim to resolve: given a query about the world, denoted as α, does our knowledge base entail α? In other words, can we use the information stored in our knowledge base to conclude that the sentence α is true?

There are various algorithms for achieving this, one of the simplest being model checking.

Model checking

Model checking operates by enumerating all possible models to determine if the knowledge base entails α. It considers all possible values of true and false for the different symbols. If, in every model where the knowledge base is true, α is also true, then we can conclude that the knowledge base entails α.
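The course provides its own logic library with proper sentence classes, but the core idea fits in a few lines. In the minimal sketch below, a sentence is simply a Python function that takes a model (a dictionary of symbol-to-truth-value assignments) and returns its truth value; model_check is an illustrative name, not a reference to any particular library:

```python
from itertools import product

def model_check(kb, query, symbols):
    """Return True if the knowledge base entails the query.

    Enumerates all 2**n models; entailment holds when the query is
    true in every model where the knowledge base is true.
    """
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            # A model where the KB holds but the query fails means
            # the KB does not entail the query.
            return False
    return True
```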

Let’s illustrate this with our previous example:

We use a simple query: R (is it entailed that Staffan will go for a run?). As mentioned earlier, there are 2ⁿ possible worlds; with our three symbols, that gives 2³ = 8 models, and we obtain the following truth table:

P      Q      R      KB
false  false  false  false
false  false  true   false
false  true   false  false
false  true   true   false
true   false  false  false
true   false  true   true
true   true   false  false
true   true   true   false

In this table, there is only one world where the knowledge base is true. In that model, our query R is also true. This demonstrates that the knowledge base indeed entails R.
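Plugging the running example into the model_check sketch from above reproduces this result:

```python
symbols = ["P", "Q", "R"]

# KB: (P ∧ ¬Q) → R, together with the facts P and ¬Q.
def kb(m):
    return ((not (m["P"] and not m["Q"])) or m["R"]) and m["P"] and not m["Q"]

print(model_check(kb, lambda m: m["R"], symbols))  # True: the KB entails R
```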

This logic serves as a tool that can be applied to numerous problems where logical deduction is useful. We need to consider which symbols represent the information we have and how to encode that information. This process is known as knowledge engineering—the process of distilling a problem into knowledge that can be represented by a computer.

Model checking the game of Clue

Let’s explore a practical example of model checking with the board game Clue, where the objective is to deduce the murderer, the murder location, and the weapon used. For simplicity, we’ll consider only three options in each category:

People       Rooms     Weapons
Col Mustard  Ballroom  Knife
Prof Plum    Kitchen   Revolver
Ms. Scarlet  Library   Wrench

Three cards, representing a person, room, and weapon, are placed in a secret envelope. Our goal is to determine the combination within.

First, we decide on the propositional symbols for each card: mustard, plum, scarlet, ballroom, kitchen, library, knife, revolver, wrench.

We then add initial knowledge to our knowledge base. We know that one of the people is the murderer, the crime took place in one of the rooms, and one of the weapons was used:

(mustard ∨ plum ∨ scarlet)
(ballroom ∨ kitchen ∨ library)
(knife ∨ revolver ∨ wrench)

As the game progresses, we gain more information. For instance, if we hold Colonel Mustard, the Kitchen, and the Revolver cards, we know these cannot be in the envelope, so we add:

¬mustard ∧ ¬kitchen ∧ ¬revolver

Our updated knowledge base is:
(mustard ∨ plum ∨ scarlet) ∧ (ballroom ∨ kitchen ∨ library) ∧ (knife ∨ revolver ∨ wrench) ∧ ¬mustard ∧ ¬kitchen ∧ ¬revolver

Suppose a player guesses Ms. Scarlet, in the Library, with the Wrench, and a card is shown (but we don’t see which). We deduce that at least one of these guesses is incorrect:

¬scarlet ∨ ¬library ∨ ¬wrench

With this addition, our knowledge base becomes:
(mustard ∨ plum ∨ scarlet) ∧ (ballroom ∨ kitchen ∨ library) ∧ (knife ∨ revolver ∨ wrench) ∧ ¬mustard ∧ ¬kitchen ∧ ¬revolver ∧ (¬scarlet ∨ ¬library ∨ ¬wrench)

This still doesn’t solve the mystery, but suppose we later acquire the Prof Plum card and add ¬plum. Since the murderer must be Mustard, Plum, or Scarlet, and both Mustard and Plum are now ruled out, the AI can infer that Ms. Scarlet is the murderer. Adding ¬ballroom then pins the room down to the Library, and with Ms. Scarlet and the Library both established, the sentence (¬scarlet ∨ ¬library ∨ ¬wrench) rules out the wrench, leaving the knife as the weapon.
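Pulling the whole game together with the model_check sketch from earlier, we can ask for each symbol whether the knowledge base entails it:

```python
symbols = ["mustard", "plum", "scarlet",
           "ballroom", "kitchen", "library",
           "knife", "revolver", "wrench"]

def kb(m):
    return (
        (m["mustard"] or m["plum"] or m["scarlet"])          # one person
        and (m["ballroom"] or m["kitchen"] or m["library"])  # one room
        and (m["knife"] or m["revolver"] or m["wrench"])     # one weapon
        and not m["mustard"] and not m["kitchen"] and not m["revolver"]
        and (not m["scarlet"] or not m["library"] or not m["wrench"])
        and not m["plum"] and not m["ballroom"]
    )

# Report every card the knowledge base entails is in the envelope.
for s in symbols:
    if model_check(kb, lambda m, s=s: m[s], symbols):
        print(s)   # prints: scarlet, library, knife
```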

While simplistic, this example shows how model checking can solve complex logical deductions. However, model checking is not efficient when the number of variables grows, as it requires examining all 2ⁿ possibilities. In the next post, we’ll explore more efficient inference methods.

Previous posts about CS50AI:

  1. The basics of search - Depth first vs breadth first search
  2. Informed search algorithms - Greedy best-first search and A* search
  3. Adversarial search - Playing Tic Tac Toe with the Minimax algorithm