# 5.7 Abduction

Abduction is a form of reasoning where assumptions are made to explain observations. For example, if an agent observes that some light is not working, it hypothesizes what is happening in the world to explain why the light is not working. An intelligent tutoring system could try to explain why a student gives some answer in terms of what the student understands and does not understand.

The term abduction was coined by Peirce (1839–1914) to differentiate this type of reasoning from deduction, which involves determining what logically follows from a set of axioms, and induction, which involves inferring general relationships from examples.

In abduction, an agent hypothesizes what may be true about an observed case. An agent determines what implies its observations – what could be true to make the observations true. Observations are trivially implied by contradictions (as a contradiction logically implies everything), so we want to exclude contradictions from our explanation of the observations.

To formalize abduction, we use the language of Horn clauses and assumables. The system is given

• a knowledge base, KB, which is a set of Horn clauses, and

• a set $A$ of atoms, called the assumables, which are the building blocks of hypotheses.

Instead of being added to the knowledge base, observations must be explained.

A scenario of $\left<\mbox{KB},A\right>$ is a subset $H$ of $A$ such that $\mbox{KB}\cup H$ is satisfiable. $\mbox{KB}\cup H$ is satisfiable if a model exists in which every element of KB and every element of $H$ is true. This happens if no subset of $H$ is a conflict of KB.

An explanation of proposition $g$ from $\left<\mbox{KB},A\right>$ is a scenario that, together with KB, implies $g$.

That is, an explanation of proposition $g$ is a set $H\subseteq A$ such that

 $\displaystyle{\mbox{KB}\cup H\models g}$ and $\displaystyle{\mbox{KB}\cup H\not\models\mbox{false}.}$

A minimal explanation of $g$ from $\left<\mbox{KB},A\right>$ is an explanation $H$ of $g$ from $\left<\mbox{KB},A\right>$ such that no strict subset of $H$ is also an explanation of $g$ from $\left<\mbox{KB},A\right>$.
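These definitions can be prototyped directly for the propositional Horn-clause case. The following sketch (all names hypothetical, not code from this book) represents a clause as a pair of a head and a body, computes consequences by forward chaining, and enumerates subsets of the assumables from smallest to largest, keeping those that are consistent with the knowledge base, imply the goal, and have no explaining proper subset. It is a brute-force illustration of the definitions, not an efficient algorithm.

```python
from itertools import combinations

# A Horn clause is a pair (head, body): head is an atom, body a list of atoms.
# Integrity constraints are written with the special head "false".

def consequences(clauses, assumed):
    """Forward chaining: the least set of atoms derivable from the
    clauses together with the assumed atoms."""
    derived = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def minimal_explanations(clauses, assumables, goal):
    """All minimal explanations of the set of atoms `goal`: subsets H of
    the assumables such that KB ∪ H is consistent, KB ∪ H entails goal,
    and no proper subset of H does the same."""
    found = []
    for k in range(len(assumables) + 1):          # smallest subsets first
        for H in map(set, combinations(sorted(assumables), k)):
            if any(e <= H for e in found):
                continue                          # a subset already explains goal
            derived = consequences(clauses, H)
            if "false" not in derived and goal <= derived:
                found.append(H)
    return found

# A toy knowledge base (hypothetical, not from the text):
kb = [("wet", ["rain"]), ("wet", ["sprinkler"]),
      ("false", ["rain", "dry_season"])]
print(minimal_explanations(kb, {"rain", "sprinkler", "dry_season"}, {"wet"}))
# two minimal explanations: {rain} and {sprinkler}
```

Enumerating subsets smallest-first means that whenever a candidate has a previously found subset, it can be discarded as non-minimal without any further checking.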

###### Example 5.33.

Consider the following simplistic knowledge base and assumables for a diagnostic assistant:

 $\displaystyle{\mbox{bronchitis}\leftarrow\mbox{influenza}.}$
 $\displaystyle{\mbox{bronchitis}\leftarrow\mbox{smokes}.}$
 $\displaystyle{\mbox{coughing}\leftarrow\mbox{bronchitis}.}$
 $\displaystyle{\mbox{wheezing}\leftarrow\mbox{bronchitis}.}$
 $\displaystyle{\mbox{fever}\leftarrow\mbox{influenza}.}$
 $\displaystyle{\mbox{fever}\leftarrow\mbox{infection}.}$
 $\displaystyle{\mbox{sore\_throat}\leftarrow\mbox{influenza}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{smokes}\wedge\mbox{nonsmoker}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{smokes},\mbox{nonsmoker},\mbox{influenza},\mbox{infection}.}$

If the agent observes wheezing, there are two minimal explanations:

 $\{\mbox{influenza}\}\mbox{ and }\{\mbox{smokes}\}.$

These explanations imply bronchitis and coughing.

If $\mbox{wheezing}\wedge\mbox{fever}$ is observed, the minimal explanations are

 $\{\mbox{influenza}\}\mbox{ and }\{\mbox{smokes},\mbox{infection}\}.$

If $\mbox{wheezing}\wedge\mbox{nonsmoker}$ is observed, there is one minimal explanation:

 $\{\mbox{influenza},\mbox{nonsmoker}\}.$

The other explanation of wheezing is inconsistent with being a non-smoker.

###### Example 5.34.

Consider the knowledge base:

 $\displaystyle{\mbox{alarm}\leftarrow\mbox{tampering}.}$
 $\displaystyle{\mbox{alarm}\leftarrow\mbox{fire}.}$
 $\displaystyle{\mbox{smoke}\leftarrow\mbox{fire}.}$

If alarm is observed, there are two minimal explanations:

 $\{\mbox{tampering}\}\mbox{ and }\{\mbox{fire}\}.$

If $\mbox{alarm}\land\mbox{smoke}$ is observed, there is one minimal explanation:

 $\{\mbox{fire}\}.$

Notice how, when smoke is observed, there is no need to hypothesize tampering to explain alarm; it has been explained away by fire.
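The explaining-away behavior can be checked mechanically. This sketch (hypothetical code) encodes the three clauses of Example 5.34 and finds minimal explanations by trying subsets of the assumables, smallest first; because this knowledge base has no integrity constraints, every subset of the assumables is a scenario, so only entailment of the observation needs checking.

```python
from itertools import combinations

# Example 5.34's knowledge base, as (head, body) pairs.
clauses = [("alarm", ["tampering"]), ("alarm", ["fire"]), ("smoke", ["fire"])]
assumables = ["fire", "tampering"]

def derive(assumed):
    """Forward chaining to a fixed point from the assumed atoms."""
    atoms = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in atoms and all(b in atoms for b in body):
                atoms.add(head)
                changed = True
    return atoms

def explanations(goal):
    """Minimal sets of assumables entailing every atom in goal."""
    expls = []
    for k in range(len(assumables) + 1):      # smallest subsets first
        for H in map(set, combinations(assumables, k)):
            if any(e <= H for e in expls):
                continue                      # a smaller explanation exists
            if set(goal) <= derive(H):
                expls.append(H)
    return expls

print(explanations(["alarm"]))           # {fire} and {tampering}
print(explanations(["alarm", "smoke"]))  # only {fire}: tampering is explained away
```

Once smoke is part of the observation, no explanation containing only tampering entails it, so fire alone remains minimal and tampering is never hypothesized.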

Determining what is going on inside a system from observations of its behavior is the problem of diagnosis or recognition. In abductive diagnosis, the agent hypothesizes diseases or malfunctions, as well as that some parts are working normally, to explain the observed symptoms.

This differs from consistency-based diagnosis (CBD) in the following ways:

• In CBD, only normal behavior needs to be represented, and the hypotheses are assumptions of normal behavior. In abductive diagnosis, faulty as well as normal behavior needs to be represented, and there must be assumables for normal behavior and for each fault (or different behavior).

• In abductive diagnosis, observations need to be explained. In CBD, observations are added to the knowledge base, and $\mbox{false}$ is proved.

Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, because the observations must actually be provable from the knowledge base and the assumptions. Abductive diagnosis is also used to diagnose systems in which there is no normal behavior. For example, in an intelligent tutoring system, by observing what a student does, the tutoring system can hypothesize what the student understands and does not understand, which can guide the actions of the tutoring system.

Abduction can also be used for design, in which the query to be explained is a design goal and the assumables are the building blocks of the designs. The explanation is the design. Consistency means that the design is possible. The fact that the explanation implies the design goal means that the design provably achieves it.

###### Example 5.35.

Consider the electrical domain of Figure 5.2. Similar to the representation of the example for consistency-based diagnosis in Example 5.23, we axiomatize what follows from the assumptions of what may be happening in the system. In abductive diagnosis, we must axiomatize what follows both from faults and from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced.

A user could observe that $l_{1}$ is lit or is dark. We write rules that axiomatize the conditions under which each of these is true. Light $l_{1}$ is lit if it is ok and there is power coming in. The light is dark if it is broken or there is no power. The system can assume $l_{1}$ is ok or broken, but not both:

 $\displaystyle{\mbox{lit\_l}_{1}\leftarrow\mbox{live\_w}_{0}\wedge\mbox{ok\_l}_{1}.}$
 $\displaystyle{\mbox{dark\_l}_{1}\leftarrow\mbox{broken\_l}_{1}.}$
 $\displaystyle{\mbox{dark\_l}_{1}\leftarrow\mbox{dead\_w}_{0}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{ok\_l}_{1}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{broken\_l}_{1}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{ok\_l}_{1}\wedge\mbox{broken\_l}_{1}.}$

Wire $w_{0}$ is live or dead depending on the switch positions and whether the wires coming in are live or dead:

 $\displaystyle{\mbox{live\_w}_{0}\leftarrow\mbox{live\_w}_{1}\wedge\mbox{up\_s}_{2}\wedge\mbox{ok\_s}_{2}.}$
 $\displaystyle{\mbox{live\_w}_{0}\leftarrow\mbox{live\_w}_{2}\wedge\mbox{down\_s}_{2}\wedge\mbox{ok\_s}_{2}.}$
 $\displaystyle{\mbox{dead\_w}_{0}\leftarrow\mbox{broken\_s}_{2}.}$
 $\displaystyle{\mbox{dead\_w}_{0}\leftarrow\mbox{up\_s}_{2}\wedge\mbox{dead\_w}_{1}.}$
 $\displaystyle{\mbox{dead\_w}_{0}\leftarrow\mbox{down\_s}_{2}\wedge\mbox{dead\_w}_{2}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{ok\_s}_{2}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{broken\_s}_{2}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{ok\_s}_{2}\wedge\mbox{broken\_s}_{2}.}$

The other wires are axiomatized similarly. Some of the wires depend on whether the circuit breakers are ok or broken:

 $\displaystyle{\mbox{live\_w}_{3}\leftarrow\mbox{live\_w}_{5}\wedge\mbox{ok\_cb}_{1}.}$
 $\displaystyle{\mbox{dead\_w}_{3}\leftarrow\mbox{broken\_cb}_{1}.}$
 $\displaystyle{\mbox{dead\_w}_{3}\leftarrow\mbox{dead\_w}_{5}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{ok\_cb}_{1}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{broken\_cb}_{1}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{ok\_cb}_{1}\wedge\mbox{broken\_cb}_{1}.}$

For the rest of this example, we assume that the other light and wires are represented analogously.

The outside power can be live or the power can be down:

 $\displaystyle{\mbox{live\_w}_{5}\leftarrow\mbox{live\_outside}.}$
 $\displaystyle{\mbox{dead\_w}_{5}\leftarrow\mbox{outside\_power\_down}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{live\_outside}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{outside\_power\_down}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{live\_outside}\wedge\mbox{outside\_power\_down}.}$

The switches can be assumed to be up or down:

 $\displaystyle{\mbox{{assumable}}~\mbox{up\_s}_{1}.}$
 $\displaystyle{\mbox{{assumable}}~\mbox{down\_s}_{1}.}$
 $\displaystyle{\mbox{false}\leftarrow\mbox{up\_s}_{1}\wedge\mbox{down\_s}_{1}.}$

There are two minimal explanations of $\mbox{lit\_l}_{1}$:

 $\displaystyle{\{\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{1},\mbox{ok\_s}_{1},\mbox{ok\_s}_{2},\mbox{up\_s}_{1},\mbox{up\_s}_{2}\}}$
 $\displaystyle{\{\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{1},\mbox{ok\_s}_{1},\mbox{ok\_s}_{2},\mbox{down\_s}_{1},\mbox{down\_s}_{2}\}.}$

This could be seen in design terms as a way to make sure the light is on: put both switches up or both switches down, and ensure that the switches, the circuit breaker, and the light are all ok. It could also be seen as a way to determine what is going on if the agent observed that $l_{1}$ is lit; one of these two scenarios must hold.
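The two explanations of $\mbox{lit\_l}_{1}$ can be recovered by brute force. The sketch below (hypothetical code) encodes only the live rules and integrity constraints relevant to $\mbox{lit\_l}_{1}$; the rules for $w_{1}$ and $w_{2}$, which the text says are axiomatized analogously, are our assumed reading of the circuit in Figure 5.2.

```python
from itertools import combinations

# Part of the abductive axiomatization needed to explain lit_l1
# (the "live" rules only; the dark rules play no role here).
# The rules for w1 and w2 are an assumption, by analogy with w0.
clauses = [
    ("lit_l1",  ["live_w0", "ok_l1"]),
    ("live_w0", ["live_w1", "up_s2", "ok_s2"]),
    ("live_w0", ["live_w2", "down_s2", "ok_s2"]),
    ("live_w1", ["live_w3", "up_s1", "ok_s1"]),
    ("live_w2", ["live_w3", "down_s1", "ok_s1"]),
    ("live_w3", ["live_w5", "ok_cb1"]),
    ("live_w5", ["live_outside"]),
    ("false",   ["ok_l1", "broken_l1"]),
    ("false",   ["ok_s1", "broken_s1"]),
    ("false",   ["ok_s2", "broken_s2"]),
    ("false",   ["ok_cb1", "broken_cb1"]),
    ("false",   ["up_s1", "down_s1"]),
    ("false",   ["up_s2", "down_s2"]),
    ("false",   ["live_outside", "outside_power_down"]),
]
assumables = ["ok_l1", "broken_l1", "ok_s1", "broken_s1", "ok_s2", "broken_s2",
              "ok_cb1", "broken_cb1", "up_s1", "down_s1", "up_s2", "down_s2",
              "live_outside", "outside_power_down"]

def derive(assumed):
    """Forward chaining to a fixed point from the assumed atoms."""
    atoms = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in atoms and all(b in atoms for b in body):
                atoms.add(head)
                changed = True
    return atoms

def minimal_explanations(goal):
    """Consistent minimal subsets of the assumables entailing goal."""
    found = []
    for k in range(len(assumables) + 1):     # smallest subsets first
        for H in map(set, combinations(assumables, k)):
            if any(e <= H for e in found):
                continue
            atoms = derive(H)
            if "false" not in atoms and goal <= atoms:
                found.append(H)
    return found

for e in minimal_explanations({"lit_l1"}):
    print(sorted(e))   # the two explanations: both switches up, or both down
```

Any candidate mixing one switch up and the other down fails to derive $\mbox{live\_w}_{0}$, and any candidate containing both positions of the same switch derives false, so exactly the two seven-element sets survive.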

There are ten minimal explanations of $\mbox{dark\_l}_{1}$:

 $\displaystyle{\{\mbox{broken\_l}_{1}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{2}\}}$
 $\displaystyle{\{\mbox{down\_s}_{1},\mbox{up\_s}_{2}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{1},\mbox{up\_s}_{2}\}}$
 $\displaystyle{\{\mbox{broken\_cb}_{1},\mbox{up\_s}_{1},\mbox{up\_s}_{2}\}}$
 $\displaystyle{\{\mbox{outside\_power\_down},\mbox{up\_s}_{1},\mbox{up\_s}_{2}\}}$
 $\displaystyle{\{\mbox{down\_s}_{2},\mbox{up\_s}_{1}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{1},\mbox{down\_s}_{2}\}}$
 $\displaystyle{\{\mbox{broken\_cb}_{1},\mbox{down\_s}_{1},\mbox{down\_s}_{2}\}}$
 $\displaystyle{\{\mbox{down\_s}_{1},\mbox{down\_s}_{2},\mbox{outside\_power\_down}\}}$

There are six minimal explanations of $\mbox{dark\_l}_{1}\wedge\mbox{}\mbox{lit\_l}_{2}$:

 $\displaystyle{\{\mbox{broken\_l}_{1},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{3}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{2},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{3}\}}$
 $\displaystyle{\{\mbox{down\_s}_{1},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{2},\mbox{up\_s}_{3}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{1},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{2},\mbox{up\_s}_{3}\}}$
 $\displaystyle{\{\mbox{down\_s}_{2},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{1},\mbox{up\_s}_{3}\}}$
 $\displaystyle{\{\mbox{broken\_s}_{1},\mbox{down\_s}_{2},\mbox{live\_outside},\mbox{ok\_cb}_{1},\mbox{ok\_l}_{2},\mbox{ok\_s}_{3},\mbox{up\_s}_{3}\}}$

Notice how the explanations cannot include $\mbox{outside\_power\_down}$ or $\mbox{broken\_cb}_{1}$ because they are inconsistent with the explanation of $l_{2}$ being lit.

Both the bottom-up and top-down implementations for assumption-based reasoning with Horn clauses can be used for abduction. The bottom-up algorithm of Figure 5.9 computes the minimal explanations for each atom; at the end of the repeat loop, $C$ contains the minimal explanations of each atom (as well as potentially some non-minimal explanations). The refinement of pruning dominated explanations can also be used. The top-down algorithm can be used to find the explanations of any $g$ by first generating the conflicts and, using the same code and knowledge base, proving $g$ instead of $false$. The minimal explanations of $g$ are the minimal sets of assumables collected to prove $g$ such that no subset is a conflict.
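A bottom-up computation in the spirit described above can be sketched as follows (hypothetical code, not the book's Figure 5.9): for each atom, maintain its set of minimal explanations; seed each assumable with the explanation containing just itself; repeatedly combine explanations of rule bodies into explanations of rule heads; and prune dominated (superset) explanations as they arise. The explanations of false are the conflicts, and the minimal explanations of $g$ are its stored explanations that have no conflict as a subset.

```python
from itertools import product

def bottom_up(clauses, assumables):
    """For every atom, compute minimal sets of assumables from which it
    is derivable.  Clauses are (head, body) pairs; integrity constraints
    have head "false", whose explanations are the conflicts."""
    expl = {a: [frozenset([a])] for a in assumables}
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if not all(b in expl for b in body):
                continue                        # some body atom unexplained so far
            # combine one explanation per body atom into one for the head
            for combo in product(*(expl[b] for b in body)):
                e = frozenset(x for s in combo for x in s)
                current = expl.setdefault(head, [])
                if any(old <= e for old in current):
                    continue                    # dominated by a smaller explanation
                expl[head] = [old for old in current if not e <= old] + [e]
                changed = True
    return expl

def minimal_explanations(expl, g):
    """Explanations of g containing no conflict as a subset."""
    conflicts = expl.get("false", [])
    return [e for e in expl.get(g, []) if not any(c <= e for c in conflicts)]

# A toy knowledge base (hypothetical, not from the text):
kb = [("wet", ["rain"]), ("wet", ["sprinkler"]),
      ("false", ["rain", "dry_season"])]
table = bottom_up(kb, {"rain", "sprinkler", "dry_season"})
print(minimal_explanations(table, "wet"))   # {rain} and {sprinkler}
```

Unlike the subset-enumeration approach, this computes explanations for all atoms in one pass over the fixed point, which is how the bottom-up procedure can report the minimal explanations of each atom at once.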