#### 5.1.2.2 The Computer's View of Semantics

The knowledge base designer who provides information to the system has an
intended interpretation and interprets symbols according to that
intended interpretation. The designer states knowledge, in terms of propositions,
about what is *true* in the intended interpretation. The computer does
not have access to the intended interpretation - only to the
propositions in the knowledge base. As will be shown, the computer is able to tell if some
statement is a logical consequence of a knowledge base. The intended
interpretation is a model of the axioms if the knowledge base designer
has been truthful according to the meaning assigned to the symbols. Assuming the intended
interpretation is a model of the knowledge base, if a
proposition is a logical consequence of the knowledge base, it is *true* in
the intended interpretation because it is *true* in all models of
the knowledge base.

The concept of logical consequence seems like exactly the right tool to derive
implicit information from an axiomatization of a world. Suppose *KB*
represents the knowledge about the intended interpretation; that is,
the intended interpretation is a model of the knowledge base, and that
is all the system knows about the intended interpretation. If
*KB ⊧ g*, then *g* must be *true* in the intended
interpretation, because it is true in all models of the knowledge base.
If *KB ⊭ g* - that is, if *g* is not a logical consequence of *KB* -
a model of *KB* exists in which *g* is *false*. As far as the
computer is concerned, the intended interpretation may be the model
of *KB* in which *g* is *false*, and so it does not know whether *g* is
*true* in the intended interpretation.
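The test for logical consequence described above can be sketched directly in code: a proposition *g* is a logical consequence of *KB* exactly when *g* is true in every model of *KB*. The sketch below, a minimal illustration and not the book's own code, represents a knowledge base and a query as Boolean functions over truth assignments and enumerates all interpretations; the toy KB with atoms `p` and `q` and single axiom *p → q* is invented for the example.

```python
from itertools import product

def models(atoms, kb):
    """Yield every interpretation (truth assignment to the atoms)
    in which all axioms of kb hold."""
    for values in product([True, False], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        if kb(interp):
            yield interp

def entails(atoms, kb, g):
    """KB entails g iff g is true in every model of KB."""
    return all(g(m) for m in models(atoms, kb))

# Toy knowledge base over two atoms, with the single axiom  p -> q.
atoms = ["p", "q"]
kb = lambda i: (not i["p"]) or i["q"]

print(entails(atoms, kb, lambda i: i["p"] <= i["q"]))  # True: the axiom itself
print(entails(atoms, kb, lambda i: i["q"]))            # False: q fails in the model {p: False, q: False}
```

Enumerating all 2^n interpretations is exponential in the number of atoms, so this is only practical for tiny knowledge bases; it is the semantic definition made executable, not an efficient proof procedure.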

Given a knowledge base, the models of the knowledge base correspond to all of the ways that the world could be, given that the knowledge base is true.

**Example 5.3:** Consider the knowledge base of Example 5.2. The user could interpret these symbols as having some meaning. The computer does not know the meaning of the symbols, but it can still make conclusions based on what it has been told. It can conclude that *apple_is_eaten* is true in the intended interpretation. It cannot conclude *switch_1_is_up* because it does not know if *sam_is_in_room* is true or false in the intended interpretation.
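The conclusions of Example 5.3 can be checked mechanically. Example 5.2's axioms are not reproduced in this section, so the sketch below uses a hypothetical stand-in knowledge base chosen only to exhibit the same behavior: *apple_is_eaten* is asserted, *switch_1_is_up* follows from *sam_is_in_room*, and nothing constrains *sam_is_in_room* itself.

```python
from itertools import product

# Hypothetical stand-in for Example 5.2 (the original axioms are not
# shown here): apple_is_eaten holds, and sam_is_in_room -> switch_1_is_up.
atoms = ["apple_is_eaten", "switch_1_is_up", "sam_is_in_room"]

def kb(i):
    return i["apple_is_eaten"] and ((not i["sam_is_in_room"]) or i["switch_1_is_up"])

def entails(g):
    """g is a logical consequence iff it is true in every model of kb."""
    interps = (dict(zip(atoms, v))
               for v in product([True, False], repeat=len(atoms)))
    return all(g(i) for i in interps if kb(i))

print(entails(lambda i: i["apple_is_eaten"]))  # True: true in every model
print(entails(lambda i: i["switch_1_is_up"]))  # False: false in the model where
                                               # sam_is_in_room and switch_1_is_up are both false
```

As far as the computer is concerned, the model in which *sam_is_in_room* and *switch_1_is_up* are both false might be the intended interpretation, which is exactly why *switch_1_is_up* cannot be concluded.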

If the knowledge base designer tells lies - some axioms are false in the intended interpretation - the computer's answers are not guaranteed to be true in the intended interpretation.

It is very important to understand that, until we consider computers with perception and the ability to act in the world, the computer does not know the meaning of the symbols. It is the human that gives the symbols meaning. All the computer knows about the world is what it is told about the world. However, because the computer can provide logical consequences of the knowledge base, it can make conclusions that are true in the intended interpretation.