5.4 Knowledge Representation Issues

5.4.3 Knowledge-Level Explanation

The explicit use of semantics allows explanation and debugging at the knowledge level. To make a system usable by people, the system cannot just give an answer and expect the user to believe it. Consider the case of a system advising doctors who are legally responsible for the treatment that they carry out based on the diagnosis. The doctors must be convinced that the diagnosis is appropriate. The system must be able to justify that its answer is correct. The same mechanism can be used to explain how the system found a result and to debug the knowledge base.

Three complementary means of interrogation are used to explain the relevant knowledge: (1) a how question is used to explain how an answer was proved, (2) a why question is used to ask the system why it is asking the user a question, and (3) a whynot question is used to ask why an atom was not proven.

To explain how an answer was proved, a “how” question can be asked by a user when the system has returned the answer. The system provides the definite clause used to deduce the answer. For any atom in the body of the definite clause, the user can ask how the system proved that atom.

The user can ask “why” in response to being asked a question. The system replies by giving the rule that produced the question. The user can then ask why the head of that rule was being proved. Together, these questions allow the user to traverse a proof or a partial proof of the top-level query.

A “whynot” question can be used to ask why a particular atom was not proven.
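One plausible, non-interactive realization of whynot is to find the clauses with the atom as head and report a body atom that could not be proved. The following Python sketch makes that concrete; the names KB, provable, and whynot are illustrative assumptions, not the book's implementation, and the clauses shown are a fragment chosen so that live_w4 has no clause:

```python
# Each head maps to a list of alternative bodies (lists of atoms);
# an atomic clause has the empty body. live_w4 is deliberately
# missing, so live_l2 (and hence lit_l2) cannot be proved.
KB = {
    "lit_l2":   [["light_l2", "live_l2", "ok_l2"]],
    "live_l2":  [["live_w4"]],
    "light_l2": [[]],
    "ok_l2":    [[]],
}

def provable(atom):
    """An atom is provable if some clause for it has a provable body."""
    return any(all(provable(b) for b in body) for body in KB.get(atom, []))

def whynot(atom):
    """Explain, at the knowledge level, why `atom` was not proved."""
    if provable(atom):
        return f"{atom} was proved."
    bodies = KB.get(atom, [])
    if not bodies:
        return f"There is no clause with {atom} in the head."
    for body in bodies:
        for b in body:
            if not provable(b):
                return f"{atom} failed because {b} failed."
```

The user could then follow the chain of failures by asking whynot about the reported body atom, mirroring how how descends a successful proof.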

How Did the System Prove an Atom?

The first explanation procedure allows the user to ask “how” an atom was proved. If there is a proof for g, either g must be an atomic clause or there must be a rule

g ← a1 ∧ … ∧ ak

such that each ai has been proved.

If the system has proved g, and the user asks how in response, the system can display the clause that was used to prove g. If this clause was a rule, the user could then ask

how i.

which will give the rule that was used to prove ai. The user can continue using the how command to explore how g was proved.
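The how mechanism can be sketched as a proof tree that is recorded during proving and then traversed on demand. The following Python sketch assumes a propositional knowledge base stored as a dict; KB, prove, show, and how are illustrative names, not the book's implementation (a method is presented in Section 14.4.5). The clauses are those of Example 5.7:

```python
# Each head maps to a list of alternative bodies (lists of atoms);
# an atomic clause has the empty body.
KB = {
    "lit_l2":   [["light_l2", "live_l2", "ok_l2"]],
    "live_l2":  [["live_w4"]],
    "live_w4":  [["live_w3", "up_s3"]],
    "live_w3":  [["live_w5", "ok_cb1"]],
    "light_l2": [[]], "ok_l2": [[]], "up_s3": [[]],
    "live_w5":  [[]], "ok_cb1": [[]],
}

def prove(atom):
    """Return a proof tree (atom, [subproofs]), or None if unprovable."""
    for body in KB.get(atom, []):
        subproofs = []
        for b in body:
            sub = prove(b)
            if sub is None:
                break          # this body fails; try the next clause
            subproofs.append(sub)
        else:
            return (atom, subproofs)
    return None

def show(proof):
    """Answer "how": display the clause used to prove the root atom."""
    head, subproofs = proof
    if subproofs:
        print(head, "<-", " & ".join(s[0] for s in subproofs))
    else:
        print(head, "is explicitly given.")

def how(proof, i):
    """Answer "how i": descend to the proof of the i-th body atom."""
    return proof[1][i - 1]
```

Starting from p = prove("lit_l2"), a sequence of calls such as how(p, 2), then how of that result, retraces the dialog of Example 5.14; show on a leaf reports that the atom is explicitly given.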

Example 5.14.

In the axiomatization of Example 5.7, the user can ask the query 𝖺𝗌𝗄 lit_l2. In response to the system proving this query, the user can ask how. The system would reply:

lit_l2 ←
    light_l2 ∧
    live_l2 ∧
    ok_l2.

This is the top-level rule used to prove lit_l2. To find out how live_l2 was proved, the user can ask

how 2.

The system can return the rule used to prove live_l2, namely,

live_l2 ←
    live_w4.

To find how live_w4 was proved, the user can ask

how 1.

The system presents the rule

live_w4 ←
    live_w3 ∧
    up_s3.

To find how the first atom in the body was proved, the user can ask

how 1.

The first atom, live_w3, was proved using the following rule:

live_w3 ←
    live_w5 ∧
    ok_cb1.

To find how the second atom in the body was proved, the user can ask

how 2.

The system will report that ok_cb1 is explicitly given.

Notice that the explanation here was only in terms of the knowledge level; it gave only the relevant definite clauses the system has been told. The user does not need to know anything about the proof procedure or the actual computation.

A method to implement how questions is presented in Section 14.4.5.

Why Did the System Ask a Question?

Another useful explanation is for why a question was asked. This is useful for a number of reasons:

  • We want the system to appear intelligent, transparent and trustworthy. Knowing why a question was asked will increase a user’s confidence that the system is working sensibly.

  • One of the main measures of complexity of an interactive system is the number of questions asked of a user, which should be kept to a minimum. Knowing why a question was asked will help the knowledge designer reduce this complexity.

  • An irrelevant question is usually a symptom of a deeper problem.

  • The user may learn something from the system by knowing why the system is doing something. This learning is much like an apprentice asking a master why the master is doing something.

When the system asks the user a question q, there must be a rule used by the system that contains q in the body. The user can ask

𝗐𝗁𝗒.

This is read as “Why did you ask me that question?” The answer can be the rule that contains q in the body. If the user asks why again, the system should explain why the atom at the head of the rule was asked, and so forth. Repeatedly asking why will eventually give the path of subgoals to the top-level query. If all of these rules are reasonable, this justifies why the system’s question to the user is reasonable.
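This path of subgoals is exactly the stack of rules the prover is currently using. The following Python sketch records that stack during a top-down proof and hands the full why-chain to the code that asks the user; RULES, FACTS, ASKABLE, and the callback ask are illustrative names, not the book's implementation, and the facts are simply the atoms assumed given for this sketch:

```python
# Rules from the dialog of Example 5.15: each head maps to its body.
RULES = {
    "lit_l1":  ["light_l1", "live_l1", "ok_l1"],
    "live_l1": ["live_w0"],
    "live_w0": ["live_w1", "up_s2"],
    "live_w1": ["live_w3", "up_s1"],
}
FACTS = {"light_l1", "ok_l1", "live_w3", "up_s2"}  # assumed given
ASKABLE = {"up_s1"}                                # asked of the user

def prove(atom, stack, ask):
    """Top-down proof. `stack` holds the heads of the rules currently
    in use. When an askable atom is reached, `ask` receives the
    why-chain: (head, body) pairs answering repeated "why" questions,
    from the immediate rule up to the top-level query."""
    if atom in FACTS:
        return True
    if atom in ASKABLE:
        chain = [(h, RULES[h]) for h in reversed(stack)]
        return ask(atom, chain)
    body = RULES.get(atom)
    if body is None:
        return False
    stack.append(atom)
    result = all(prove(b, stack, ask) for b in body)
    stack.pop()
    return result
```

For the query lit_l1, the chain passed to ask for up_s1 consists of the rules headed by live_w1, live_w0, live_l1, and lit_l1 in turn, matching the four why answers in the dialog of Example 5.15.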

Example 5.15.

Consider the dialog of Example 5.13. The following dialog, for the query 𝖺𝗌𝗄 lit_l1, shows how repeated use of why finds successively higher-level subgoals; the user asks the initial query and responds to each question with “why”.

ailog: ask lit_l1.
Is up_s1 true? why.
up_s1 is used in the rule live_w1 ← live_w3 ∧ up_s1: why.
live_w1 is used in the rule live_w0 ← live_w1 ∧ up_s2: why.
live_w0 is used in the rule live_l1 ← live_w0: why.
live_l1 is used in the rule lit_l1 ← light_l1 ∧ live_l1 ∧ ok_l1: why.
Because that is what you asked me!

Typically, how and why are used together; how moves from higher-level to lower-level subgoals, and why moves from lower-level to higher-level subgoals. Together they let the user traverse a proof tree, where nodes are atoms, and a node together with its children corresponds to a clause in the knowledge base.

Example 5.16.

As an example of the need to combine how and why, consider the previous example where the user asked why up_s1. The system gave the following rule:

live_w1 ← live_w3 ∧ up_s1.

This means that up_s1 was asked because the system wants to know live_w1 and is using this rule to try to prove live_w1. The user may think it is reasonable that the system wants to know live_w1 but may think it is inappropriate that up_s1 be asked, because the user may doubt that live_w3 should have succeeded. In this case it is useful for the user to ask how live_w3 was derived.