CPSC 322 - Lecture 4 - September 15, 2004

More About the RRS


I.  Reasoning and representation system details

As we noted last time, a Reasoning and Representation System has three
components:  a language for communication with the computer, a way to
assign meaning to the language, and a set of procedures to compute 
answers to problems represented in the language.  The language itself, 
a formal language, is defined by a grammar which specifies the symbols
in the language and the rules for putting the symbols together to make
sentences to express knowledge about the chosen domain.  The language
defined by the grammar is the set of all sentences that can be generated
from the grammar.  The knowledge base for any given implementation is a
subset of sentences from the language.

The second component of the RRS is semantics: a specification of the
meaning of sentences in the language.  Semantics is your commitment to
how the symbols in the language correspond to the chosen task domain.  
I say it's your commitment because it remains in your head--it's not
in the computer.

The final component is the set of procedures to compute answers, or what's
called the reasoning theory or proof procedure.  Why proof procedure?
Because when you ask your RRS a question, you're really saying "Here's
my theorem, can you prove that it's true?"  The proof procedure is a
(possibly nondeterministic) specification of how an answer can be
derived from the knowledge base.  It often appears as a set of 
inference rules.
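
To make this concrete, here's a minimal sketch in CILOG-style
syntax (the propositions are invented for illustration):  a tiny
knowledge base of one rule and two facts, followed by a query.
The query is the "theorem" we're asking the proof procedure to
establish.

  % a tiny knowledge base:  one rule, two facts
  light_is_on <- switch_is_up & power_is_available.
  switch_is_up.
  power_is_available.

  % the query:  "here's my theorem, can you prove it?"
  ask light_is_on.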

Along the way here, we've introduced two new words, one of which many
of you are familiar with, and the other you may not be so familiar with,
at least in the computing context.  The one you may have encountered
if you've taken a computing theory course is nondeterminism.  
Nondeterminism is the theoretical notion that an algorithm, at any
choice point, can always make the right choice.  Just imagine that 
the algorithm can call upon an oracle every time it has to make a 
decision.  Of course, real computers don't come with oracles, so 
nondeterminism is implemented (perhaps simulated is the right word
here) as exhaustive depth-first search with backtracking.  Another 
way to implement nondeterminism would be to explore all possible 
solutions in parallel, but you'd better have a good-sized supply
of processors handy.
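
You can see this simulated nondeterminism at work in a definite
clause system.  In the made-up CILOG-style knowledge base below,
there are two rules for the same predicate, which gives the
search a choice point.  Depth-first search tries the first rule,
fails to prove link(a, b), backtracks, and then succeeds with
the second rule.

  % two rules for the same predicate create a choice point
  reachable(a, c) <- link(a, b) & link(b, c).
  reachable(a, c) <- link(a, d) & link(d, c).
  link(a, d).
  link(d, c).

  % the first rule fails (there is no link(a, b) fact), so the
  % search backtracks and proves the goal with the second rule
  ask reachable(a, c).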


II.  Inference

The other new word is inference.  Informally, inference is the
act of generating new knowledge from old knowledge.  When I was a 
somewhat younger AI student, I learned about three general classes 
of inference.  I suppose one could carve the world of inference 
into more and smaller categories, but let's go with these for now.

The category of inference that we'll be employing in this class
is called deductive inference.  Deductive inference is based on
a principle called modus ponens which says, in essence, that if
you have two propositions, P and Q, and you absolutely know 
for sure that P is true if Q is true (i.e., P <- Q), and you
also know that Q is true, then you can infer that P is true
with absolutely no concern that the inference might be incorrect.
With deductive inference, if you start out with truth (and nothing
but the truth), then any inferences you can make using the 
principle of modus ponens will be true.  This guarantee of 
truth preservation doesn't hold up for the other two types of
inference.
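
In the notation we'll be using, a single application of modus
ponens looks like this (the propositions are again invented):

  % P <- Q:  the ground is wet if it's raining
  wet_ground <- raining.

  % Q:  it's raining
  raining.

  % modus ponens licenses the answer "yes"
  ask wet_ground.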

One of those other types is what's called inductive inference.  
Say for example that you wake up in the morning, it's not raining, 
so you head to work without your umbrella.  As you're walking to
work, it begins to rain, and you're soaked by the time you get
to work.  The next day it is once again not raining, so you
head out without your umbrella, but it begins to rain again as
you're halfway to work.  The third day, the same thing happens.
Sooner or later, you'll start bringing your umbrella with you to 
work every day, regardless of what the weather looks like when
you begin your day.  Why?  Because you've made a generalization
about what's likely to happen to you in the future based on a 
series of events that have happened to you in the past...you've
induced a "rule" about what to bring with you when you go to 
work.  Is it always correct?  No, of course not.  Some days
you'll go to work, umbrella in hand, but it doesn't rain.  But
you'll be prepared if it does.  This sort of inference may 
seem like it could lead one to be overly cautious, but being
overly cautious was probably a pretty good thing back thousands
of years ago when early man was trying to compete with big 
hungry predatory animals.
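
If you wrote your induced rule down in our notation, it might
look something like the made-up clause below.  Note that nothing
in three soggy walks to work guarantees that the rule is true;
that's the difference between induction and deduction.

  % induced from a few wet mornings -- plausible, not guaranteed
  bring_umbrella <- going_to_work.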

The final category is abductive inference.  Say you know that 
cigarette smoking causes lung cancer.  Then one day, you find
out that someone you know has lung cancer.  What caused it?  Well,
in class you were reluctant to admit that you jumped to the 
conclusion that smoking was the cause, but you did.  You had
something in your brain that says "smoking implies cancer", or
gets_lung_cancer(X) <- smokes_cigarettes(X) in more formal terms.
Abductive reasoning turns the implication arrow around and says,
"If smoking causes lung cancer, then if someone has lung cancer
it's probably true that the person is a smoker."  In this specific
case, it's true more often than not, but it's not always true.
Asbestos, coal dust, air pollution, and who knows what other
airborne carcinogens might have been the cause.  Abduction is
in some sense the inverse of deduction.  And while it seems sort
of dangerous to be making abductive inferences, people do it
all the time.  Just listen to any radio talk show (or read 
explanations of why a certain nation to the south decided it
was appropriate to invade a nation in the Middle East without
provocation) to catch abductive reasoning in action.  We humans 
seem to be wired to find causal connections between events in our
world, even where none exist.  It helps us explain the world
around us and build models that let us figure out what to do
next.  Those models might not be right -- abductive reasoning
is the underpinning of folklore and mythology -- but they might
inspire neat things like scientific inquiry.  Again, it's the kind 
of thing that was probably very helpful thousands of years ago.
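
Schematically, abduction runs that smoking rule backwards, which
is something a sound deductive system like CILOG won't do for
you.  (The individual fred below is invented.)

  % the rule:  from smokes_cigarettes(X), deduce gets_lung_cancer(X)
  gets_lung_cancer(X) <- smokes_cigarettes(X).

  % the observation
  gets_lung_cancer(fred).

  % deduction cannot conclude smokes_cigarettes(fred) from these
  % two sentences; abduction guesses it anyway, because it would
  % explain the observation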

The world of artificial intelligence is fraught with rifts over
things like what are the right problems to work on and how best
to work on them.  It's the sort of thing that makes AI so much fun.
One of the big rifts is between the "neats" and the "scruffies".
The neats are the formal reasoning adherents -- they embrace the
deductive inference paradigm.  The scruffies, as the name suggests,
aren't so formal, and promote the use of "commonsense" reasoning
approaches like abductive and inductive inference.  The neats 
argue that the scruffy approach results in vaguely-defined 
principles of intelligence that are not easily verified, and that
the inferences generated by scruffy methods are often false.
The scruffies respond that those comments describe human intelligence
much more accurately than deductive inference does.  They're
both right.


III.  Implementing the RRS

Implementing a given reasoning and representation system
requires putting two components on the computer.  In order to
do something useful with the knowledge base you create, written
in the formal language you invent, your computer
should be home to both (1) a language parser that maps legal
sentences of the formal language to an internal form stored as
data structures in the computer and (2) a reasoning procedure
that combines a reasoning theory with a search strategy.  For 
this class, the CILOG system provides both the language parser
and the reasoning procedures.  And of course all of this is
independent of the semantics that you carry in your head.
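
A working session, then, amounts to handing CILOG sentences to
parse and questions to answer.  Here's a rough sketch of such a
session; the file name is hypothetical, and the exact commands
and prompts may differ from what you'll see.

  % a rough sketch of an interactive CILOG session
  cilog: load 'royal.cil'.
  cilog: tell parent(elizabeth, charles).
  cilog: ask parent(elizabeth, X).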


IV.  Simplifying assumptions for the RRS

We want to start out small and work our way up to bigger,
more complicated problems, so we'll work under some simplifying
assumptions at first to make our lives easier:

  An agent's knowledge can be usefully described in terms
  of individuals and relations among individuals.

  An agent's knowledge base consists of definite and positive
  statements.  (i.e., nothing vague, no negation)

  The environment is static.  (i.e., nothing changes)

  There are only a finite number of individuals of interest
  in the domain.

Be aware that some of these assumptions will be relaxed as we 
go on.


V.  CILOG syntax

We ended the lecture with a discussion of CILOG syntax...that
is, the rules that constrain what the formal language you
invent must look like.  Rather than reproduce all the details
here, I'll just refer you to the accompanying PowerPoint slides.

Actually, that wasn't the end of the lecture.  We really finished
with a quick look at the royal family domain, which included
a lovely picture showing how what's in your head is different
from what's in the computer.
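
For a taste of how that domain might be written down, here's a
guess at a fragment of the royal family knowledge base in CILOG
syntax (the particular facts and rules are mine, not necessarily
the ones from class):

  % individuals and relations among individuals
  parent(elizabeth, charles).
  parent(charles, william).
  male(charles).
  male(william).

  % rules built from those relations
  father(X, Y) <- parent(X, Y) & male(X).
  grandparent(X, Z) <- parent(X, Y) & parent(Y, Z).

  % sample queries
  ask father(charles, william).
  ask grandparent(elizabeth, X).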

Last revised: October 3, 2004