CPSC 322 - Lecture 2 - September 10, 2004

CPSC 322 - Lecture 2

Meet the Intelligent Agent


I.  Basic assumptions of AI

We began by revisiting Charniak and McDermott's definition
of artificial intelligence, which included this juicy tidbit:

  The fundamental working assumption, or "central dogma" of AI
  is this:  What the brain does may be thought of at some level
  as a kind of computation.

It's an essential assumption, for without it there would seem
to be little reason to try to extract intelligent behavior from
our computers.  But comparatively speaking, it's a weak assumption.
While it may be wrong, it's probably not.  At the very least, human
language processing suggests that a significant portion of human
intelligence revolves around the manipulation of a finite set of 
symbols to produce or understand an infinite number of utterances.

A stronger claim or assumption comes from Allen Newell and Herb Simon,
two more of the founding fathers of artificial intelligence.  They put
forth what is called the Physical Symbol System Hypothesis in their
1975 ACM Turing Award Lecture (sort of the Nobel Prize for computer 
science):

  A physical symbol system has the necessary and sufficient means for 
  general intelligent action.

They then explain that:

  By "necessary" we mean that any system that exhibits general intelligence
  will prove upon analysis to be a physical symbol system.  By "sufficient"
  we mean that any physical symbol system of sufficient size can be 
  organized further to exhibit general intelligence.  By "general
  intelligent action" we wish to indicate the same scope of intelligence
  as we see in human action:  that in any real situation behavior 
  appropriate to the ends of the system and adaptive to the demands of
  the environment can occur, within some limits of speed and complexity.

(As an aside that we didn't talk about in class, note that Newell and
Simon in the same talk posit the Heuristic Search Hypothesis, which
will be significant when we start reading Chapter 4 in our textbook.
Remember, you read it here first:

  The solutions to problems are represented as symbol structures.  A 
  physical symbol system exercises its intelligence in problem solving 
  by search -- that is, by generating and progressively modifying symbol 
  structures until it produces a solution structure.

If you want to read the whole Turing Award Lecture, you'll find it at
http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/PSS/PSSH1.html.)
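Just as a preview of what "generating and progressively modifying
symbol structures" might look like in code, here's a minimal sketch
in Python.  It's my own illustration, not anything from the lecture
or the textbook, and the names start, successors, and is_solution
stand in for whatever a particular problem supplies:

  # A rough illustration of search as generate-and-modify (not from
  # the textbook).  A "symbol structure" here is whatever Python value
  # the caller chooses; successors() proposes modified structures.

  def search(start, successors, is_solution):
      frontier = [start]                    # structures awaiting attention
      while frontier:
          structure = frontier.pop()        # pick one structure
          if is_solution(structure):
              return structure              # a solution structure
          frontier.extend(successors(structure))  # generate modified ones
      return None                           # nothing left to try

  # Toy usage: "solve" the problem of reaching 5 by repeatedly adding 1.
  print(search(0, lambda n: [n + 1] if n < 5 else [], lambda n: n == 5))

Chapter 4 will have much more to say about how that frontier of
candidate structures gets managed.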

If you combine the Physical Symbol System Hypothesis with the 
Church-Turing Thesis (that's Alonzo Church and Alan Turing, who, by
the way, happen to be hugely important people in computing), which
is simplified as:

  Any symbol manipulation can be carried out on a Turing machine.

you end up with the fairly strong position that any equivalent of
a Turing machine (for example, your computer) is capable of
manipulating symbols and, therefore, is capable of intelligent behavior.

Not everyone in artificial intelligence agrees with the Physical
Symbol System Hypothesis.  For example, some researchers in the
world of natural language processing are successfully employing 
purely statistical methods in some of the tasks involved in 
understanding language.  Others are looking at small "processors" of
limited computing power, such as ants, bees, and even neurons.
Individually, these "processors" don't do much, nor is there any
evidence that what they are doing is symbol manipulation.  Yet 
large collections of these itty bitty computers can build intricate
nests, tend to the welfare of larvae in the hive, and give rise
to what we call consciousness in humans.

All of these approaches, symbolic and non- or sub-symbolic alike,
have encountered obstacles that have yet to be surmounted.  So
we pick a path and continue exploring, and in this case we choose
the path of symbol manipulation.


II.  The Intelligent Agent

After some discussion of weighty philosophical issues such as 
intelligence, thought, and the meaning of life, we introduced the
notion of an intelligent agent, which is a system that (1) acts
appropriately for its circumstances and its goal, (2) is flexible
to changing environments and changing goals, (3) learns from experience,
and (4) makes appropriate choices given perceptual limitations and
finite computation.  That agent exists in some environment, and takes
as input prior knowledge about the world, past experience from which
it can learn, goals that it must try to achieve or values about
what is important, and observations about the current environment.
The output comes in the form of actions that are reasonable given
the inputs.
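
If it helps to picture those inputs and outputs as code, here is one
possible sketch in Python.  It's purely illustrative; the class and
method names are invented for these notes and don't come from the
textbook:

  # Illustration only: an agent takes prior knowledge and goals up
  # front, receives observations over time, accumulates experience,
  # and returns actions.

  class Agent:
      def __init__(self, prior_knowledge, goals):
          self.knowledge = prior_knowledge    # what it knows going in
          self.goals = goals                  # what it's trying to achieve
          self.experience = []                # past experience to learn from

      def act(self, observation):
          """Choose a reasonable action given the current observation."""
          self.experience.append(observation) # remember what was observed
          # (the interesting part, reasoning from knowledge, goals, and
          #  experience to a sensible action, is what this course is about)
          return "do_nothing"

  assistant = Agent(prior_knowledge={"switches control lights"},
                    goals={"explain why light1 is off"})
  print(assistant.act("light1 is off"))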

Inside the intelligent agent is what's called a Reasoning and 
Representation System, or RRS.  The RRS includes a language for
communicating with the computer, a means of assigning meaning to the
"sentences" in the language, and procedures or algorithms for 
computing answers based on the input presented in the aforementioned
language.  The language is provided by someone like yourself--a young 
AI student--based on the task domain.  The way of giving meaning to 
expressions in that language comes from the same source.  If you're 
very fortunate, as students in CPSC 322 are, the procedures for computing 
answers have been crafted for you.  On the other hand, if you were
young AI students at some technological institute in the Southeast
United States, you'd be forced to provide those procedures yourself,
and you'd be so mired in programming what are essentially interpreters
for the languages you constructed that you couldn't see the important
principles of artificial intelligence as they flew by.  Some of you
will one day take more AI courses where you'll be writing all that 
additional code, and you'll look back on these days fondly.  Trust me.
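
To give you a rough feel for those three pieces, here's a deliberately
tiny sketch in Python.  It's my own illustration, not the system you'll
actually use in CPSC 322: the "language" is atoms plus if-then rules,
the "meaning" of an atom is whether it can be derived from what's given,
and the "procedure for computing answers" is a loop that applies rules
until nothing new follows:

  # A toy reasoning and representation sketch (illustration only).
  # Atoms are strings; a rule pairs a set of body atoms with a head atom,
  # read as "if every body atom holds, then the head atom holds".

  facts = {"light_switch_up", "power_available"}
  rules = [({"light_switch_up", "power_available"}, "light_on")]

  def derive(facts, rules):
      """Apply rules repeatedly until no new atoms can be concluded."""
      known = set(facts)
      changed = True
      while changed:
          changed = False
          for body, head in rules:
              if body <= known and head not in known:
                  known.add(head)             # the rule's conclusion holds
                  changed = True
      return known

  print("light_on" in derive(facts, rules))   # True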


III.  Representation

As you might guess from the name, a big chunk of a Reasoning and 
Representation System is the part having to do with Representation.  
It's been said that the biggest obstacle to success in crafting AI
systems is creating the right language for representation; get the 
knowledge representation right, they say, and the reasoning part 
will become obvious.  In our case, where the procedures for 
computing answers have already been chosen and provided, finding the
right language is in some sense the whole problem.

To start the process of crafting a language, we need to decide what things
are in the task domain and how they are related.  As your textbook
wisely points out, "A major impediment to a general theory of CI is
that there is no comprehensive theory of how to appropriately conceive
and express task domains."  In other words, at this time there's more
art than science involved in developing representations, and you get
better at it by learning from the successes and failures of those who
have gone before you, not to mention your own.

Of course, we don't let the lack of a comprehensive theory prevent us 
from moving forward...you can't win if you don't play, as they say.  
So the first step is to describe or name those things or individuals 
that exist in our chosen domain.  In addition, we must name the 
properties or attributes of the individuals in the domain, and we must
name the relationships that exist among the individuals.  The catch
is that you could name things, attributes, and relationships forever.
Which ones are important, and which ones can you ignore?  And for the
ones that are important, which of the many ways available will be the
best way to express them?  For example, to denote that a switch is 
allowing current to pass through a circuit, do you say on(switch3) or
status(switch3, on)?  Or instead of "on", would you use the name
"closed", as in closed(switch3) versus open(switch3)?  There are 
different answers depending on what you're trying to accomplish.  Again,
your textbook makes the point nicely:  "Different observers, or even
the same observer with different goals, may divide up the world in
different ways."
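
To make that choice a bit more concrete, here is one hypothetical way
the competing encodings might be written down, using Python tuples to
stand in for the logical sentences (the predicate names are just the
candidates mentioned above):

  # Two ways to encode the same fact about switch3 (illustration only).

  # Option 1: bake the state into the predicate name.
  fact_a = ("on", "switch3")             # on(switch3)
  fact_b = ("closed", "switch3")         # closed(switch3)

  # Option 2: make the state a value of a "status" attribute.  Asking
  # "what is switch3's status?" then means looking in one relation
  # rather than testing every possible predicate name.
  fact_c = ("status", "switch3", "on")   # status(switch3, on)

Neither option is right in the abstract: the second tends to pay off
when you expect to ask what the status is, while the first keeps simple
yes-or-no questions short.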

Speaking of things like switches and chosen task domains, we introduced
the example of the diagnostic assistant in the domain of electrical
wiring, but we ran out of time before we built up any steam.  We'll
return to the world of wires next time.

Last revised: September 17, 2004