Environments and Agent Architecture

On Environments

Now that we have a better definition of rationality, we need to think about environments a bit more before we build “rational agents.” Let’s talk about task environments.

P.E.A.S.

  • Performance

  • Environment

  • Actuators

  • Sensors

Vacuum world was a very simple problem. What about something more complex: “Taxi Driver”?

What are our PEAS?

Performance:

  • Safe, fast, legal, comfortable trip, maximize profits, minimize impact on other road users.

Environment:

  • Roads, other traffic, police, pedestrians, customers, weather

Actuators:

  • Steering, accelerator, brake, signal, horn, display, speech

Sensors:

  • Cameras, radar, speedometer, GPS, engine sensors, accelerometer, microphones, touchscreen

Plus more!
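To make this concrete, here’s a minimal sketch (Python; the `PEAS` class and its field names are just illustrative, not from any standard library) recording the taxi’s task environment as data:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A task-environment description: Performance measure, Environment, Actuators, Sensors."""
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "profitable"],
    environment=["roads", "traffic", "police", "pedestrians", "customers", "weather"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display", "speech"],
    sensors=["cameras", "radar", "speedometer", "GPS", "engine sensors",
             "accelerometer", "microphones", "touchscreen"],
)
```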

PEAS Examples

Properties of Task Environments

Which are the hardest/most complicated environments?

  • Fully Observable vs Partially Observable

  • Single Agent vs Multi Agent

  • Deterministic vs Nondeterministic

  • Episodic vs Sequential

  • Static vs Dynamic

  • Discrete vs Continuous

  • Known vs Unknown

Fully Observable vs Partially Observable

What makes an environment observable?

What makes it fully observable?

When do we have to keep an internal state of our environment?

What could make the environment partially observable?

Single Agent vs Multi-Agent

This seems like a simple distinction, right?

What’s an example of a single/multi-agent system?

Which entities may or must be viewed as an agent?

Would it be better to reason about another agent trying to maximize a performance measure or just obeying physics?

Competitive vs cooperative multi-agent environment (or partially competitive).

In some cases randomized behavior becomes rational!

Deterministic vs Nondeterministic

If the next state is completely determined by the current state and the agent’s action, we call the environment deterministic.

In general, we don’t worry about uncertainty in a fully observable, deterministic environment. But a partially observable, deterministic environment might appear nondeterministic.

Most situations are so complex that we treat them as being nondeterministic.

Even deterministic environments (like Vacuum World) can easily be made nondeterministic.

(Contrast with stochastic, which usually implies the uncertainty is quantified with probabilities.)
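To make the vacuum-world point concrete, here’s a minimal sketch (Python; the 25% failure rate is an assumption for illustration) where one tweak to the transition turns a deterministic environment nondeterministic:

```python
import random

# Deterministic vacuum world: the outcome of each action is fully determined
# by the current state (location plus the set of dirty squares).
def step_deterministic(location, dirty, action):
    if action == "Suck":
        dirty = dirty - {location}
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return location, dirty

# One small tweak makes it nondeterministic: Suck fails 25% of the time,
# so the same state and action no longer determine a unique next state.
def step_nondeterministic(location, dirty, action):
    if action == "Suck" and random.random() < 0.25:
        return location, dirty  # suction failed; the square stays dirty
    return step_deterministic(location, dirty, action)
```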

Episodic vs Sequential

Example of episodic tasks?

Example of Sequential tasks?

Which is easier and what does the agent have to do differently?

Static vs Dynamic

If an environment can change while the agent is deliberating, it’s dynamic.

Example of a static task? Dynamic?

Semidynamic?

Discrete vs Continuous

This refers not only to the state of the environment, but also to time, our percepts, and our actions.

Known vs Unknown

This isn’t about observability of the environment, but rather the agent’s (or our, the designer’s) knowledge of its “physics” (i.e., not strictly a property of the environment itself).

It’s absolutely possible for a known environment to be partially observable, and for an unknown environment to be fully observable.

What case is the Taxi Driver Agent?

  • Partially observable

  • Multiagent

  • Nondeterministic

  • Sequential

  • Dynamic

  • Continuous

  • Unknown

  • Our Taxi Driver is in the hardest case!

  • Also our performance measure might not be fully known!

Environment Characteristics

As an aside, we often care about environment classes, not just particular environments (like different traffic conditions).

Back to Agents

We’ve so far talked about agents in terms of behavior; now we need to talk about their actual innards.

The agent program is the software implementation, while the agent architecture is the physical computing device, together with its sensors and actuators, that the program runs on.

\[\text{agent} = \text{architecture} + \text{program}\]

Of course, the actions the program recommends should match the capabilities of the architecture it’s running on!

Table Driven Agent

Let me tell you why this is a bad idea…

Let \(P\) be the set of possible percepts, and let \(T\) be the lifetime of the agent (the total number of percepts it will receive).

The lookup table will contain:

\[ \sum^T_{t=1}|P|^t \]

entries…

Consider a single camera on our taxi (which might typically have eight of them): it produces roughly 70 MB/s (30 fps, 1080×720 pixels, 24 bits of color). An hour of driving would need a table with \(10^{600,000,000,000}\) entries!

Even chess, which is much smaller, has a table with at least \(10^{150}\) entries!

(The observable universe contains only about \(10^{80}\) atoms.)
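We can sanity-check the camera bandwidth and watch the table formula blow up even for tiny vacuum world (Python; the lifetimes chosen here are arbitrary):

```python
# Number of lookup-table entries: sum_{t=1}^{T} |P|^t
def table_entries(num_percepts: int, lifetime: int) -> int:
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Vacuum world has |P| = 4 percepts (2 locations x clean/dirty):
print(table_entries(4, 10))           # 1,398,100 entries after just 10 time steps
print(f"{table_entries(4, 50):.2e}")  # ~1.69e+30 entries after 50 steps

# The taxi camera: 1080 x 720 pixels, 24 bits (3 bytes) of color, 30 fps:
print(1080 * 720 * 3 * 30)            # 69,984,000 bytes/s, i.e. roughly 70 MB/s
```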

The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table.

Four basic types of agents:

  • Simple reflex

  • Model-based reflex

  • Goal-based

  • Utility-based

Simple Reflex Agents

Simple Reflex Agent

For vacuum world, this reduces the relevant percept sequences from \(4^T\) (every possible history) to just the 4 possible current percepts(!)

Reflex behaviors are still relevant even in complex situations!

Simple Reflex Agent Pseudocode
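Here is the idea rendered as a Python sketch, instantiated for vacuum world (the percept format, a `(location, status)` pair, is an assumption):

```python
# Simple reflex agent for vacuum world: the action depends only on the
# current percept, with the condition-action rules hardcoded as branches.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```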

These agents are only as good as the observability of their environments.

Model-based Reflex Agents

The best way to deal with partial observability is for the agent to keep track of the parts of the world it can’t currently see (an internal state).

A transition model helps us keep track of how the world changes, both on its own and in response to our actions.

A sensor model helps us account for how our percepts change as the world changes (e.g., brake lights on the car ahead showing up in our camera image).

Model-based Reflex Agent

Model-based Reflex Agent Pseudocode
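A Python sketch of the same idea (the `transition_model`, `sensor_model`, and `rules` arguments are assumed hooks the designer supplies; this mirrors the structure of the pseudocode, not any library API):

```python
# Model-based reflex agent: keep an internal state, update it from the last
# action and the new percept, then match a condition-action rule as before.
class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules, initial_state):
        self.transition_model = transition_model  # how the world evolves
        self.sensor_model = sensor_model          # how percepts reflect the world
        self.rules = rules                        # list of (condition, action) pairs
        self.state = initial_state                # best guess at the current world
        self.last_action = None                   # None until the first action

    def __call__(self, percept):
        # Predict how the world changed, then correct the guess with the percept.
        predicted = self.transition_model(self.state, self.last_action)
        self.state = self.sensor_model(predicted, percept)
        # The first rule whose condition matches the estimated state fires.
        self.last_action = next(act for cond, act in self.rules if cond(self.state))
        return self.last_action
```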

It is rarely possible for the model to exactly match reality.

Goal-based Agents

Sometimes just reacting isn’t enough… we actually want to change the world in a particular way, which might take multiple steps.

A goal-based agent needs both a goal and a means of search to find out how its actions will lead to that goal.
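As a minimal sketch of that means of search (breadth-first search over an assumed deterministic `result` function; `actions`, `result`, and `is_goal` are hypothetical problem-specific hooks, and states must be hashable):

```python
from collections import deque

# Breadth-first search for an action sequence that reaches a goal state.
def plan_to_goal(start, is_goal, actions, result):
    frontier = deque([(start, [])])  # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action in actions(state):
            nxt = result(state, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # no action sequence reaches the goal
```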

Utility-based Agents

Goals are often not enough; we also care about how things are done. There are other things we consider in our performance measure.

Utility is the general measure of this

A utility function is the agent’s internalization of its performance measure (as opposed to an external performance measure).

This is not the only way to be rational, but it adds lots of flexibility
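As a minimal sketch of maximizing expected utility (Python; `outcomes` and `utility` are assumed problem-specific hooks, with `outcomes` yielding `(probability, next_state)` pairs):

```python
# Pick the action whose expected outcome has the highest utility.
def choose_action(state, actions, outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions(state), key=expected_utility)
```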

Model-based, Utility-based Agent

As you might imagine, this is difficult to do

Learning Agents

Turing talked about programming these types of systems… and he suggested that it’d be easier in the long run to just build machines that could learn.

A General Learning Agent
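The standard four components of the learning-agent design (performance element, learning element, critic, problem generator) can be sketched as a skeleton (Python; all four components here are placeholder callables supplied by the designer, not a working learner):

```python
# Skeleton of a general learning agent: the critic scores behavior against a
# performance standard, the learning element uses that feedback to improve the
# performance element, and the problem generator suggests exploratory actions.
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves the agent
        self.critic = critic                            # feedback on how we're doing
        self.problem_generator = problem_generator      # proposes experiments

    def __call__(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback, self.performance_element)
        # Occasionally the problem generator overrides with an exploratory action.
        exploratory = self.problem_generator(percept)
        return exploratory if exploratory is not None else self.performance_element(percept)
```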

How do the Agent Components Work?

States and Transitions

More expressive representations are more concise, but they require more complex reasoning to use.

Chess rules, in pages (first-order logic vs. propositional logic vs. atomic representation):

\[ \sim 2 \;<\; \sim 1000 \;<\; \sim 10^{38} \]