CPSC 322 - Lecture 36 - December 1, 2004

Learning as Search for the Best Representation


I.  More arch learning

In this lecture we finished the classic example of learning
the concept of an arch from a series of pre-classified (i.e.,
arch or not arch) examples.  It's important that you not
learn the wrong thing from this: in the example you've seen,
negative examples happened to involve differences in the
semantic net's links, while positive examples happened to
involve differences in its nodes.  That's an accident of this
particular example, not a general rule, so banish that
thought from your mind.
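
To make this concrete, here's a minimal Python sketch of
learning an arch concept from pre-classified examples.  The
dict-of-features representation, the feature names, and the
toy examples are illustrative assumptions (the lecture's
version used semantic nets), but the two moves are the same:
a positive example generalizes the hypothesis, and a
near-miss negative example specializes it by adding a
must-not condition.

    def generalize(hypothesis, positive):
        """Keep only requirements this positive example satisfies."""
        return {f: v for f, v in hypothesis.items()
                if positive.get(f) == v}

    def specialize(hypothesis, near_miss, forbidden):
        """Forbid whatever kept the near miss from being an arch."""
        for f, v in near_miss.items():
            if hypothesis.get(f) != v:
                forbidden[f] = v
        return forbidden

    # The first positive example becomes the initial hypothesis.
    hypothesis = {"supports": 2, "lintel": "brick"}
    forbidden = {}

    # A second positive example with a wedge lintel: the lintel's
    # material evidently isn't salient, so generalize it away.
    hypothesis = generalize(hypothesis,
                            {"supports": 2, "lintel": "wedge"})

    # A near miss whose supports touch: specialize by recording
    # a must-not condition.
    forbidden = specialize(hypothesis,
                           {"supports": 2, "touching": True},
                           forbidden)

    print(hypothesis)   # {'supports': 2}
    print(forbidden)    # {'touching': True}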

This kind of learning is an example of inductive inference,
which we've talked about in the past.  Inductive inference,
also called learning by induction, is about generalizing
some concept from a set of examples.  As we've noted before,
and as your textbook reinforces, it's the kind of learning
that keeps us alive, both as individuals and as a species.
It may make us overly cautious, but in a hostile world that's
probably better than being overly fearless.  In any case,
inductive learning highlights something that humans do
amazingly well and that computers can't do at all without
lots of help.  We humans can figure out from a series of
examples what's salient...that is, we can somehow tell which
features in a set of examples are important and which can be
ignored.  We do it well, we do it quickly, and we don't
really have a good idea of how we do it.


II.  Learning as the search for the best representation

Way back at the beginning of the term, we talked about an
intelligent agent as a Reasoning and Representation System
(RRS).  With formal logic as our constant, unchangeable
reasoner, anything our program learns must show up as changes
to its representation.  That is, a learning program starts
with some initial representation of what it knows, and as it
learns it modifies that representation to reflect new
knowledge.  Thus, learning amounts to searching for a better
representation than the one the program started with...a
representation that includes what's been learned.
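
To make the search framing concrete, here's a hedged sketch
in which a representation is just a set of required features,
a representation's neighbors differ from it by one feature,
and candidates are scored by accuracy on the pre-classified
examples.  Those choices, and the toy data, are assumptions
for illustration only; the point is the shape of the loop:
learning as greedy search through a space of representations.

    def score(required, examples):
        """Fraction of examples this representation gets right."""
        return sum((required <= ex) == label
                   for ex, label in examples) / len(examples)

    def neighbors(required, features):
        """Representations one change away: add or drop a feature."""
        return ([required | {f} for f in features - required] +
                [required - {f} for f in required])

    def learn(examples, features):
        current = frozenset()
        while True:
            best = max(neighbors(current, features),
                       key=lambda r: score(r, examples))
            if score(best, examples) <= score(current, examples):
                return current       # no neighbor improves: stop
            current = best

    examples = [(frozenset({"supports", "lintel"}), True),
                (frozenset({"supports"}), False),
                (frozenset({"supports", "lintel", "wide"}), True)]
    print(learn(examples, frozenset({"supports", "lintel", "wide"})))
    # frozenset({'lintel'})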


III.  Learning with a different representation

For any given learning task, as with any AI task, there may 
be many ways of representing both old and new knowledge.
An arch learner isn't limited to semantic nets for
representing what it knows about arches.  It could, for
example, represent that same knowledge as a decision tree,
in which each internal node tests one feature of an example
and each leaf gives a classification.
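
For instance, the arch concept from the sketch in section I
could be recast as a tiny decision tree.  The particular
tests below are assumptions carried over from that sketch,
not a tree drawn in lecture:

    def classify(example):
        """Walk the tree: each internal node tests one feature."""
        if example.get("supports") != 2:
            return "not an arch"     # wrong number of supports
        if example.get("touching"):
            return "not an arch"     # the supports must not touch
        return "arch"

    print(classify({"supports": 2, "touching": False}))  # arch
    print(classify({"supports": 2, "touching": True}))   # not an arch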


IV.  Other issues in learning

Different types of learning give rise to different views of
the same issues.  Above, we talked about the issue of
salience...how does a learner know what's important?  In a
task where a learner wants to learn a sequence of operations
to perform, or perhaps a sequence of moves or decisions to
win at some game, the salience question is recast as one of
credit assignment or blame assignment.  If a sequence of
moves leads to success, which of those moves should be given
the most credit for that success?  If a sequence of moves
leads to failure, which moves should be given the most blame
for that failure?  These issues come up again and again in
the learning world, and one simple way to attack them is
sketched below.
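
One common answer, sketched here under assumptions the
lecture didn't specify, is to propagate the final outcome
back over the sequence, discounting it so that moves closer
to the outcome receive more of the credit (or, for a negative
outcome, more of the blame).  The discount factor and the
chess-flavored move names are made up for illustration:

    def assign_credit(moves, outcome, discount=0.9):
        """Later moves sit closer to the outcome, so earn more."""
        n = len(moves)
        return [(move, outcome * discount ** (n - 1 - i))
                for i, move in enumerate(moves)]

    moves = ["open", "develop", "trade", "checkmate"]
    for move, credit in assign_credit(moves, 1.0):
        print(f"{move:10s} {credit:.3f}")
    # open       0.729
    # develop    0.810
    # trade      0.900
    # checkmate  1.000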

Last revised: December 7, 2004