CPSC 322 - Lecture 3 - September 13, 2004

The Reasoning and Representation System


I.  The Diagnostic Assistant

Our textbook offers three different examples of intelligent agents,
and these three examples are referred to in chapters to come.  At this
time, we'll focus on just one of the three examples:  the diagnostic
assistant in the domain of electrical wiring.  In this domain, a
diagnostic assistant should be able to do the following things:

  Derive the effects of faults

  Search through the space of possible faults

  Explain its reasoning to its human users

  Derive possible causes for symptoms and rule out other causes 
  based on the symptoms

  Plan courses of tests and repairs to address the problems

  Learn about what symptoms are associated with the faults, the
  effects of repairs, and the accuracy of tests

Each of these skills corresponds to a chapter in our textbook; the
authors list some other skills as well, but those skills correspond
to chapters we won't be able to cover this term.  If you're 
interested, though, feel free to read those chapters too.

By the end of this lecture, we'll have made our first attempt at
building a diagnostic assistant, but our assistant will be somewhat
underpowered when compared to the list of abilities above.  In fact,
about the only thing our assistant will be able to do today is tell
us whether one particular light bulb is on.  If our assistant did 
exhibit the skills shown above, we could argue that the assistant is an
intelligent agent.  Recall from the previous lecture that an intelligent
agent is a system that (1) acts appropriately for its circumstances and 
its goal, (2) is flexible to changing environments and changing goals, 
(3) learns from experience, and (4) makes appropriate choices given 
perceptual limitations and finite computation.  That agent exists in some 
environment, and takes as input prior knowledge about the world, past 
experience from which it can learn, goals that it must try to achieve or 
values about what is important, and observations about the current 
environment.  The output comes in the form of actions that are reasonable 
given the inputs.

Inside the intelligent agent is what's called a Reasoning and 
Representation System, or RRS.  The RRS includes a language for
communicating with the computer, a way of assigning meaning to the
"sentences" in the language, and procedures or algorithms for 
computing answers based on the input presented in the aforementioned
language.  The language is provided by someone like yourself--a young 
AI student--based on the task domain.  The way of giving meaning to 
expressions in that language comes from the same source.  In this course,
the mechanism for computing answers has been provided for you.  And if
this paragraph and the previous one seem familiar, it's because they've
been taken from the notes for the previous lecture.


II.  How to Make a Reasoning and Representation System

The process of building the RRS can be decomposed into five steps, 
summarized as follows:

1.  Begin with a task domain that you want to characterize.  This is
    a domain in which you want the computer to be able to answer
    questions.

In the case of our diagnostic assistant, we simplified the problem by
concentrating on just one part of the wiring diagram.  That part 
is the circuit that starts with the outside power, runs through
circuit breaker cb1 and then through wire w3 to switch s3 (which 
was not labeled on the diagram used in the PowerPoint slides).  From
s3 the circuit follows wire w4 to light bulb l2.
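
Before we do any logic programming, it's worth seeing just how simple
this slice of the domain is.  Here's a rough sketch in plain Python
(an illustration only, not part of the course software): current
reaches l2 only if the outside power is live and both cb1 and s3 are
closed.

```python
# Hypothetical direct simulation of the simplified series circuit:
# power runs through cb1, along w3 to s3, then along w4 to bulb l2.
def bulb_on(cb1_closed, s3_closed, outside_power_hot=True):
    """l2 is on only if power is present and both cb1 and s3 are closed."""
    return outside_power_hot and cb1_closed and s3_closed

print(bulb_on(True, True))   # True: the bulb lights
print(bulb_on(True, False))  # False: s3 is open, so no current on w4
```

Of course, hard-coding the circuit like this is exactly what an RRS
lets us avoid; the steps below rebuild the same knowledge declaratively.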

2.  Distinguish the things you want to talk about in the domain -- the
    ontology.  This includes individuals, properties or attributes
    of individuals, and relationships among individuals.

The individuals we've chosen to talk about are the outside power line,
the circuit breaker, the two wires, the switch, and the light bulb.
Circuit breakers and switches have the property of being open or closed
(despite what the legend on the diagram says), light bulbs can be on
or off, and wires can be hot (i.e., current is flowing) or not_hot.
For reasons of sheer simplicity, we didn't bother with relationships
between individuals like connectedness.

3.  Use symbols in the computer to represent the objects and relations
    in the domain.

We came up with outside_power, cb1, w3, s3, w4, and l2 as the symbols
to represent the individuals.  The properties were represented as on,
off, hot, not_hot, open, and closed.
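
To make the distinction between the ontology and the symbols concrete,
here's a hypothetical sketch in Python (not CILOG) of the symbols
above.  An atom such as on(l2) is just a property symbol applied to an
individual symbol.

```python
# Hypothetical sketch: the symbols from steps 2 and 3 as Python data.
individuals = {"outside_power", "cb1", "w3", "s3", "w4", "l2"}
properties = {"on", "off", "hot", "not_hot", "open", "closed"}

def atom(prop, ind):
    """Form an atom such as on(l2) from a property and an individual."""
    assert prop in properties and ind in individuals
    return f"{prop}({ind})"

print(atom("on", "l2"))  # prints on(l2)
```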

4.  Tell the computer the knowledge about that domain.

That knowledge looked like this:

  on(l2) <- hot(w4).
  hot(w4) <- closed(s3) & hot(w3).
  hot(w3) <- closed(cb1) & hot(outside_power).
  hot(outside_power).
  closed(cb1).
  closed(s3).

5.  Finally, ask the RRS a question which initiates reasoning to solve
    problems, produce answers, or generate actions.

At first glance, the fifth and final step might come as a bit of a 
surprise as we haven't addressed the issue of how the procedures for
reasoning work or how they're created.  It's almost as if we're to 
assume that the reasoning mechanism has already been provided and it's
just sitting there waiting for us.  It just so happens that the reasoning
mechanism HAS been provided, and it's called CILOG.  When we fired up
the CILOG interpreter and gave it the knowledge base above, we were
ready to ASK it the pressing question, "Is light bulb l2 on?"

cilog: ask on(l2).
Answer: on(l2).
 Runtime since last report: 0 secs.
  [ok,more,how,help]: how.
   on(l2) <-
      1: hot(w4)
   How? [Number,up,retry,ok,prompt,help]: 1.
   hot(w4) <-
      1: closed(s3)
      2: hot(w3)
   How? [Number,up,retry,ok,prompt,help]: 1.
   closed(s3) is a fact
   hot(w4) <-
      1: closed(s3)
      2: hot(w3)
   How? [Number,up,retry,ok,prompt,help]: 2.
   hot(w3) <-
      1: closed(cb1)
      2: hot(outside_power)
   How? [Number,up,retry,ok,prompt,help]: ok.
Answer: on(l2).
 Runtime since last report: 0 secs.
  [ok,more,how,help]: ok.
cilog: 
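
What CILOG is doing when we ASK is backward chaining: to prove a goal,
it finds a clause whose head matches the goal and then tries to prove
each atom in that clause's body.  Here's a minimal sketch of that idea
in Python (an illustration only, not how CILOG is actually
implemented), with the knowledge base above encoded as a dictionary
mapping each atom to the bodies of the clauses that can establish it.

```python
# Each atom maps to a list of clause bodies; a fact has an empty body.
kb = {
    "on(l2)": [["hot(w4)"]],
    "hot(w4)": [["closed(s3)", "hot(w3)"]],
    "hot(w3)": [["closed(cb1)", "hot(outside_power)"]],
    "hot(outside_power)": [[]],  # fact
    "closed(cb1)": [[]],         # fact
    "closed(s3)": [[]],          # fact
}

def prove(goal):
    """Backward chaining: goal holds if some clause body is fully provable."""
    return any(all(prove(g) for g in body) for body in kb.get(goal, []))

print(prove("on(l2)"))       # True, matching CILOG's answer above
print(prove("closed(cb2)"))  # False: nothing in the KB establishes it
```

Note how the nesting of prove calls mirrors the tree of clauses that the
HOW command walked through in the transcript above.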

It's not the most intelligent agent ever produced, but it's a beginning.
Note especially, though, that we've ignored lots of possibly useful stuff
that could be gleaned from the domain, such as the aforementioned notion
of connectedness.  That's probably a very useful concept in the domain
of electrical wiring, and a full-blown diagnostic assistant would
probably use that concept extensively.

There's plenty of fun ahead.  See you next time.

Last revised: September 17, 2004