It would help to find an example where these knowledge-based agents are particularly useful.
“Wumpus World” is a cave, with rooms connected by passageways.
Somewhere in this cave… is the terrible Wumpus, a CR 25 monster that instantly eats anyone who comes into its room.
The Wumpus can be slain by an enchanted arrow, of which our agent only has one…
Some rooms contain pits that trap anything that enters the room (except the Wumpus).
Why would we be in this cave at all? There’s said to be a heap of gold in these caves!
So, let’s describe our task environment (in terms of PEAS):
A typical Wumpus World
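The PEAS details aren’t spelled out above, so here is a minimal sketch following the usual textbook convention. The specific reward values (+1000, -1000, -1, -10) are the conventional ones and an assumption here, as is the `WUMPUS_PEAS` name:

```python
# A minimal sketch of a PEAS description for Wumpus World. The reward
# values follow the usual textbook convention and are assumptions here,
# not something stated in these notes.
WUMPUS_PEAS = {
    "performance": {
        "grab_gold_and_escape": +1000,
        "death_by_wumpus_or_pit": -1000,
        "per_action": -1,
        "use_arrow": -10,
    },
    "environment": "4x4 grid of rooms; agent starts in [1,1]",
    "actuators": ["Forward", "TurnLeft", "TurnRight", "Grab", "Shoot", "Climb"],
    "sensors": ["Stench", "Breeze", "Glitter", "Bump", "Scream"],
}
```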
Let’s talk about the type of environment this is.
We could say that the locations of the pits and the Wumpus are unobserved parts of the world which, once observed, can be combined with a transition model to complete our knowledge.
Or, we could say the transition model is unknown (we don’t know if an action will kill us or not), and finding out completes the knowledge of the transition model.
The primary difficulty is dealing with the uncertain environment using logical reasoning.
Usually, it’s possible to navigate the Wumpus World safely.
Sometimes, there’s serious risk involved. We could either risk death or just go home empty-handed.
Roughly \(21\%\) of environments are utterly unfair: the gold is in a pit or surrounded by pits, and therefore unreachable.
Now… let’s see it in action!
- \([None, None, None, None, None]\)
- \([None, Breeze, None, None, None]\)
- \([Stench, None, None, None, None]\)
- \([Stench, Breeze, Glitter, None, None]\)
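These percepts follow the usual five-slot format \([Stench, Breeze, Glitter, Bump, Scream]\). As a small illustrative sketch (the `Percept` type and the trace comments are my own, not from the notes):

```python
from collections import namedtuple

# Each percept is a 5-tuple; a None entry means that sense reports nothing.
Percept = namedtuple("Percept", ["stench", "breeze", "glitter", "bump", "scream"])

trace = [
    Percept(None, None, None, None, None),               # nothing sensed
    Percept(None, "Breeze", None, None, None),           # a pit is nearby
    Percept("Stench", None, None, None, None),           # the Wumpus is nearby
    Percept("Stench", "Breeze", "Glitter", None, None),  # gold in this square!
]
```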
Note that, as long as our inputs (percepts) are accurate, our reasoning is guaranteed to be correct.
We’ll talk about the fundamental aspects of logic, independent of the particular form of logic we’re dealing with (we’ll get into the technical details later).
We’ve discussed how a logic consists of sentences, expressed according to the syntax of the representation language, which determines what counts as a well-formed sentence.
e.g. \(x+y=4\) is well formed, \(x4y+=\) is not…
A logic must also have semantics, the meaning of sentences, which define truth with respect to each possible world.
Give me an algebraic expression.
When is this true?
Must it always be either true or false?
When we need to be precise, we use the term model rather than “possible world”.
Even though possible worlds might be real environments, models are mathematical abstractions, each of which fixes the truth value of every relevant™ sentence.
One may think of a possible world via the sentence \(f+s+j+S=15\), where there are \(f\) freshmen, \(s\) sophomores, \(j\) juniors, and \(S\) seniors taking this class; the sentence is true when the total number of people is \(15\).
Formally, the set of possible models is the set of all possible assignments of nonnegative integers to our variables.
Each assignment determines the truth of any sentence whose variables are \(f,s,j,S\).
If a sentence \(\alpha\) is true in model \(m\), we say \(m\) satisfies \(\alpha\) or \(m\) is a model of \(\alpha\).
\(M(\alpha)\) is the set of all models of \(\alpha\).
Okay, now that we have truth figured out (lol), we can talk about logical reasoning.
Entailment is the idea that a sentence follows logically from another sentence. \[ \alpha \models \beta \] is read “\(\alpha\) entails \(\beta\)”. \(\alpha\) entails \(\beta\) iff, in every model in which \(\alpha\) is true, \(\beta\) is also true: \[ \alpha \models \beta\ \mathbf{iff}\ M(\alpha) \subseteq M(\beta) \]
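To make the subset definition concrete, here is a toy model checker for the class-size sentence above. The real model space (all nonnegative-integer assignments) is infinite, so this sketch assumes a bounded domain of 0..15 per variable; `M`, `entails`, and the example sentences are illustrative names, not anything fixed by the notes:

```python
from itertools import product

# Bounded model space: every assignment of 0..15 to each of f, s, j, S.
VARS = ("f", "s", "j", "S")
MODELS = [dict(zip(VARS, vals)) for vals in product(range(16), repeat=4)]

def M(sentence):
    """All models (in our bounded space) that satisfy `sentence`."""
    return [m for m in MODELS if sentence(m)]

def entails(alpha, beta):
    """alpha |= beta iff every model of alpha is also a model of beta."""
    return all(beta(m) for m in M(alpha))

alpha = lambda m: m["f"] + m["s"] + m["j"] + m["S"] == 15
beta = lambda m: m["f"] + m["s"] + m["j"] + m["S"] <= 20

print(entails(alpha, beta))  # True: a total of exactly 15 is always <= 20
print(entails(beta, alpha))  # False: e.g. a total of 0 satisfies beta only
```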
Consider the second part of our first picture of the Wumpus exploration:
The agent detected nothing in \([1,1]\) and a breeze in \([2,1]\). These percepts, combined with knowledge of the rules of the Wumpus World, constitute the KB.
The agent wants to know which of the adjacent squares (\([1,2],[2,2],[3,1]\)) contain pits.
Each may or may not contain a pit, so (taken by themselves) there are \(2^3=8\) possible models.
Possible models
We can think of the KB as a set of sentences or as a single sentence that asserts all of the individual sentences.
The KB is false in models that contradict what the agent knows. e.g. in a model in which \([1,2]\) has a pit, KB is false (why?).
Consider two possible conclusions, \[ \alpha_1=\textrm{There is no pit in [1,2]} \] and \[ \alpha_2=\textrm{There is no pit in [2,2]} \]
We can introspect and discover some things…
In every model in which KB is true, \(\alpha_1\) is also true. Therefore \(KB \models \alpha_1\).
Also, there are some models in which KB is true but \(\alpha_2\) is false, so the agent cannot conclude that \(KB \models \alpha_2\).
In this way, an agent can derive conclusions (perform logical inference) via model checking: enumerating all possible models to check that \(\alpha\) is true in every model in which KB is true, i.e. \(M(KB)\subseteq M(\alpha)\).
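As a concrete sketch of that procedure for the pit example (the helper names are my own):

```python
from itertools import product

# A model assigns pit / no-pit to each of the three frontier squares,
# giving the 2**3 = 8 possible models.
SQUARES = ("[1,2]", "[2,2]", "[3,1]")
MODELS = [dict(zip(SQUARES, vals)) for vals in product([False, True], repeat=3)]

def kb(m):
    # What the agent knows:
    #  - no breeze in [1,1]  =>  no pit in [1,2]
    #  - breeze in [2,1]     =>  a pit in [2,2] or [3,1] ([1,1] is known safe)
    return (not m["[1,2]"]) and (m["[2,2]"] or m["[3,1]"])

alpha1 = lambda m: not m["[1,2]"]  # "there is no pit in [1,2]"
alpha2 = lambda m: not m["[2,2]"]  # "there is no pit in [2,2]"

def entailed(alpha):
    """KB |= alpha iff alpha holds in every model in which KB holds."""
    return all(alpha(m) for m in MODELS if kb(m))

print(entailed(alpha1))  # True:  KB |= alpha1
print(entailed(alpha2))  # False: KB does not entail alpha2
```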
A good analogy: one can think of the set of all consequences of KB as a haystack and \(\alpha\) as a needle. Entailment is like the needle being in the haystack; inference is like finding it.
If an inference algorithm \(i\) can derive \(\alpha\) from KB, we write: \[ KB \vdash_i \alpha \] “\(\alpha\) is derived from KB via \(i\)”
An inference algorithm that derives only entailed sentences is called sound or truth-preserving; the alternative is called “making things up.” (Model checking is sound.)
Completeness is also an important property: an inference algorithm is complete when it can derive any sentence that is entailed.
Real haystacks are finite (right?), but some KBs have an infinite number of consequences. Fortunately, there are complete inference algorithms for logics expressive enough to cover many interesting KBs…
The key takeaway is this:
If KB is true in the real world, then any sentence \(\alpha\) derived from KB by a sound inference procedure is also true in the real world.
One last issue is grounding, which is the connection between logical reasoning processes and the real environment (where the agent exists).
How do we know that KB is true in the real world?
Sensors connect the agent to the world.
The percept sentences are given their meaning and truth (insofar as they have them) by the sensors and the sentence construction process.
What about the other knowledge? Information like “Wumpuses cause smells in adjacent squares”?
General rules (like the Wumpus-smell rule) are derived via a sentence-construction process called learning.
Is learning always infallible? … So, is the KB always correct? … So what do we do?