Solving problems by searching
Outline
  Problem-solving agents
  Problem types
  Problem formulation
  Example problems
  Basic search algorithms
Problem-solving agents
Example: Romania
  On holiday in Romania; currently in Arad.
  Flight leaves tomorrow from Bucharest
  Formulate goal:
    be in Bucharest
  Formulate problem:
    states: various cities
    actions: drive between cities
  Find solution:
    sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
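This formulate-search-execute cycle can be sketched as code. A minimal Python sketch, where update_state, formulate_goal, formulate_problem, and search are hypothetical helpers supplied by the caller (they are not defined on the slides):

```python
def simple_problem_solving_agent(percept, state, plan,
                                 update_state, formulate_goal,
                                 formulate_problem, search):
    """One step of a problem-solving agent: keep executing the current plan,
    and formulate a new goal/problem and search when the plan runs out.
    All helper functions are hypothetical placeholders supplied by the caller."""
    state = update_state(state, percept)              # e.g., "currently in Arad"
    if not plan:                                      # no actions left to execute
        goal = formulate_goal(state)                  # e.g., be in Bucharest
        problem = formulate_problem(state, goal)      # states: cities, actions: drives
        plan = search(problem) or []                  # e.g., Arad -> Sibiu -> Fagaras -> Bucharest
    action = plan.pop(0) if plan else None            # next action to execute (None = no-op)
    return action, state, plan
```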
Problem types
  Deterministic, fully observable → single-state problem
    Agent knows exactly which state it will be in; solution is a sequence
  Non-observable → sensorless problem (conformant problem)
    Agent may have no idea where it is; solution is a sequence
  Nondeterministic and/or partially observable → contingency problem
    percepts provide new information about current state
    often interleave search, execution
  Unknown state space → exploration problem
Example: vacuum world
  Single-state, start in #5. Solution?
Example: vacuum world
  Single-state, start in #5.
  Solution? [Right, Suck]

  Sensorless, start in {1,2,3,4,5,6,7,8}
  e.g., Right goes to {2,4,6,8}
  Solution?
Example: vacuum world
  Sensorless, start in {1,2,3,4,5,6,7,8}
  e.g., Right goes to {2,4,6,8}
  Solution? [Right, Suck, Left, Suck]

  Contingency
    Nondeterministic: Suck may dirty a clean carpet
    Partially observable: location, dirt at current location.
    Percept: [L, Clean], i.e., start in #5 or #7
    Solution?
Example: vacuum world
  Contingency
    Nondeterministic: Suck may dirty a clean carpet
    Partially observable: location, dirt at current location.
    Percept: [L, Clean], i.e., start in #5 or #7
    Solution? [Right, if dirt then Suck]
Single-state problem formulation
  A problem is defined by four items:
    initial state, e.g., "at Arad"
    actions or successor function S(x) = set of action–state pairs
      e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
    goal test, can be
      explicit, e.g., x = "at Bucharest"
      implicit, e.g., Checkmate(x)
    path cost (additive)
      e.g., sum of distances, number of actions executed, etc.
      c(x,a,y) is the step cost, assumed to be ≥ 0
  A solution is a sequence of actions leading from the initial state to a goal state
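These four items map directly onto a small data structure. A minimal Python sketch; the Problem class and the Romania fragment are illustrative (only the S(Arad) → Zerind entry, the Sibiu step, and the Bucharest goal come from the slides):

```python
class Problem:
    """A single-state problem: initial state, successor function, goal test, step cost."""
    def __init__(self, initial, successors, goal_states):
        self.initial = initial            # e.g., "Arad"
        self.successors = successors      # dict: state -> list of (action, next_state) pairs
        self.goal_states = goal_states    # explicit goal test: membership in this set

    def actions(self, state):
        """Successor function S(x) as a list of (action, next_state) pairs."""
        return self.successors.get(state, [])

    def goal_test(self, state):
        return state in self.goal_states

    def step_cost(self, state, action, next_state):
        return 1                          # c(x, a, y) >= 0; here 1 per action

# Illustrative fragment: S(Arad) = {<Arad -> Zerind, Zerind>, ...}
romania = Problem(
    initial="Arad",
    successors={"Arad": [("Arad -> Zerind", "Zerind"),
                         ("Arad -> Sibiu", "Sibiu")]},   # other neighbours omitted
    goal_states={"Bucharest"},
)
```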
Selecting a state space
  Real world is absurdly complex
    → state space must be abstracted for problem solving
  (Abstract) state = set of real states
  (Abstract) action = complex combination of real actions
    e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
    For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
  (Abstract) solution =
    set of real paths that are solutions in the real world
  Each abstract action should be "easier" than the original problem
Vacuum world state space graph
  states?
  actions?
  goal test?
  path cost?
Vacuum world state space graph
  states? integer dirt and robot location
  actions? Left, Right, Suck
  goal test? no dirt at all locations
  path cost? 1 per action
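The answers above can be written down directly for the two-square world. A minimal sketch; the state encoding (robot location plus the set of dirty squares) is one possible choice, not prescribed by the slides:

```python
# State: (robot_location, frozenset_of_dirty_locations), locations 0 = left, 1 = right.
def vacuum_actions(state):
    """Successor function: every action is applicable in every state."""
    loc, dirt = state
    return [("Left",  (0, dirt)),                 # move to the left square
            ("Right", (1, dirt)),                 # move to the right square
            ("Suck",  (loc, dirt - {loc}))]       # remove dirt at the current square

def vacuum_goal_test(state):
    _, dirt = state
    return not dirt                               # no dirt at all locations

def vacuum_step_cost(state, action, next_state):
    return 1                                      # path cost: 1 per action

initial = (0, frozenset({0, 1}))                  # robot on the left, both squares dirty
```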
Example: The 8-puzzle
  states?
  actions?
  goal test?
  path cost?
Example: The 8-puzzle
  states? locations of tiles
  actions? move blank left, right, up, down
  goal test? = goal state (given)
  path cost? 1 per move

  [Note: optimal solution of n-Puzzle family is NP-hard]
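The four blank-moving actions translate into a short successor function. A minimal sketch, representing a state as a length-9 tuple in row-major order with 0 for the blank (this encoding, and the GOAL constant, are illustrative choices):

```python
def eight_puzzle_actions(state):
    """Return (action, next_state) pairs; each action moves the blank (0) one square."""
    i = state.index(0)                            # index of the blank
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(("Left",  i - 1))    # blank moves left
    if col < 2: moves.append(("Right", i + 1))    # blank moves right
    if row > 0: moves.append(("Up",    i - 3))    # blank moves up
    if row < 2: moves.append(("Down",  i + 3))    # blank moves down
    result = []
    for action, j in moves:
        board = list(state)
        board[i], board[j] = board[j], board[i]   # swap the blank with the adjacent tile
        result.append((action, tuple(board)))
    return result

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)                # goal test: state == GOAL (one common convention)
```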
Example: robotic assembly
  states?: real-valued coordinates of robot joint angles and parts of the object to be assembled
  actions?: continuous motions of robot joints
  goal test?: complete assembly
  path cost?: time to execute
Tree search algorithms
  Basic idea:
    offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)
Tree search example
Implementation: general tree search
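A minimal Python sketch of generic tree search, using the Problem interface sketched earlier. Which node `pop` removes from the fringe is exactly what distinguishes the individual strategies:

```python
def tree_search(problem, pop):
    """Generic tree search: `pop(fringe)` removes and returns the node to expand next;
    choosing it defines the strategy (FIFO -> breadth-first, LIFO -> depth-first,
    minimum path cost -> uniform-cost)."""
    fringe = [(problem.initial, [])]                      # nodes as (state, actions-so-far)
    while fringe:
        state, path = pop(fringe)
        if problem.goal_test(state):
            return path                                   # solution: sequence of actions
        for action, next_state in problem.actions(state): # expand the chosen node
            fringe.append((next_state, path + [action]))
    return None                                           # failure: fringe is empty

# e.g., breadth-first order:
# tree_search(problem, pop=lambda fringe: fringe.pop(0))
```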
Implementation: states vs. nodes
  A state is a (representation of) a physical configuration
  A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth
  The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
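A minimal sketch of such a node data structure together with its Expand function, assuming the Problem interface used in the earlier sketches:

```python
class Node:
    """Search-tree node: a state plus parent, action, path cost g(x), and depth."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost                        # g(x)
        self.depth = 0 if parent is None else parent.depth + 1

    def expand(self, problem):
        """Create the child nodes, filling in parent, action, and path cost."""
        return [Node(s, parent=self, action=a,
                     path_cost=self.path_cost + problem.step_cost(self.state, a, s))
                for a, s in problem.actions(self.state)]

    def solution(self):
        """Sequence of actions from the root down to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return actions[::-1]
```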
Search strategies
  A search strategy is defined by picking the order of node expansion
  Strategies are evaluated along the following dimensions:
    completeness: does it always find a solution if one exists?
    time complexity: number of nodes generated
    space complexity: maximum number of nodes in memory
    optimality: does it always find a least-cost solution?
  Time and space complexity are measured in terms of
    b: maximum branching factor of the search tree
    d: depth of the least-cost solution
    m: maximum depth of the state space (may be ∞)
Uninformed search strategies
  Uninformed search strategies use only the information available in the problem definition
  Breadth-first search
  Uniform-cost search
  Depth-first search
  Depth-limited search
  Iterative deepening search
Breadth-first search
  Expand shallowest unexpanded node
  Implementation:
    fringe is a FIFO queue, i.e., new successors go at end
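A minimal sketch of this FIFO-queue implementation, using the Problem interface sketched earlier:

```python
from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node: the fringe is a FIFO queue."""
    fringe = deque([(problem.initial, [])])               # (state, actions-so-far)
    while fringe:
        state, path = fringe.popleft()                    # shallowest unexpanded node
        if problem.goal_test(state):
            return path
        for action, next_state in problem.actions(state):
            fringe.append((next_state, path + [action]))  # new successors go at the end
    return None
```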
Properties of breadth-first search
  Complete? Yes (if b is finite)
  Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
  Space? O(b^(d+1)) (keeps every node in memory)
  Optimal? Yes (if cost = 1 per step)

  Space is the bigger problem (more than time)
Uniform-cost search
  Expand least-cost unexpanded node
  Implementation:
    fringe = queue ordered by path cost
  Equivalent to breadth-first if step costs all equal
  Complete? Yes, if step cost ≥ ε
  Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) where C* is the cost of the optimal solution
  Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
  Optimal? Yes – nodes expanded in increasing order of g(n)
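A minimal sketch with the fringe kept as a priority queue ordered by path cost g, using the Problem interface sketched earlier; the counter is only a tie-breaker so that equal-cost entries never compare states:

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    """Expand the least-cost unexpanded node: fringe ordered by path cost g."""
    tie = count()                                          # tie-breaker for equal costs
    fringe = [(0.0, next(tie), problem.initial, [])]       # (g, tie, state, actions-so-far)
    while fringe:
        g, _, state, path = heapq.heappop(fringe)          # least-cost unexpanded node
        if problem.goal_test(state):
            return path, g                                 # nodes expanded in increasing g(n)
        for action, next_state in problem.actions(state):
            step = problem.step_cost(state, action, next_state)
            heapq.heappush(fringe, (g + step, next(tie), next_state, path + [action]))
    return None
```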
Depth-first search
  Expand deepest unexpanded node
  Implementation:
    fringe = LIFO queue, i.e., put successors at front
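A minimal sketch with the fringe as a LIFO stack, using the Problem interface sketched earlier; as the properties slide below notes, this version is not complete in infinite-depth or cyclic state spaces:

```python
def depth_first_search(problem):
    """Expand the deepest unexpanded node: the fringe is a LIFO stack."""
    fringe = [(problem.initial, [])]                       # (state, actions-so-far)
    while fringe:
        state, path = fringe.pop()                         # deepest unexpanded node
        if problem.goal_test(state):
            return path
        for action, next_state in problem.actions(state):
            fringe.append((next_state, path + [action]))   # successors go on top of the stack
    return None
```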
Properties of depth-first search
  Complete? No: fails in infinite-depth spaces, spaces with loops
    Modify to avoid repeated states along path
    → complete in finite spaces
  Time? O(b^m): terrible if m is much larger than d
    but if solutions are dense, may be much faster than breadth-first
  Space? O(bm), i.e., linear space!
  Optimal? No
Depth-limited search
  = depth-first search with depth limit l, i.e., nodes at depth l have no successors
  Recursive implementation:
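A minimal recursive Python sketch of depth-limited search, using the Problem interface sketched earlier and distinguishing failure from hitting the depth limit ("cutoff"):

```python
def depth_limited_search(problem, limit):
    """Depth-first search in which nodes at depth `limit` have no successors.
    Returns a list of actions, None (failure), or the string 'cutoff'."""
    def recurse(state, path, depth):
        if problem.goal_test(state):
            return path
        if depth == limit:
            return "cutoff"                                # limit reached on this branch
        cutoff_occurred = False
        for action, next_state in problem.actions(state):
            result = recurse(next_state, path + [action], depth + 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result                              # found a solution below
        return "cutoff" if cutoff_occurred else None
    return recurse(problem.initial, [], 0)
```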
Iterative deepening search
Iterative deepening search, l = 0
Iterative deepening search, l = 1
Iterative deepening search, l = 2
Iterative deepening search, l = 3
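Iterative deepening wraps the depth-limited search sketched above in a loop over increasing limits. A minimal sketch:

```python
from itertools import count

def iterative_deepening_search(problem):
    """Run depth-limited search with limits l = 0, 1, 2, ... until the result
    is either a solution or a definite failure (not just a cutoff)."""
    for limit in count():                                  # l = 0, 1, 2, 3, ...
        result = depth_limited_search(problem, limit)      # sketched under depth-limited search
        if result != "cutoff":
            return result
```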
Iterative deepening search
  Number of nodes generated in a depth-limited search to depth d with branching factor b:
    N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
  Number of nodes generated in an iterative deepening search to depth d with branching factor b:
    N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
  For b = 10, d = 5,
    N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
    N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  Overhead = (123,456 - 111,111)/111,111 = 11%
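The two counts and the 11% overhead can be checked with a few lines of Python:

```python
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                    # 1 + 10 + ... + 100,000 = 111,111
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))      # 6 + 50 + ... + 100,000 = 123,456
overhead = (n_ids - n_dls) / n_dls
print(n_dls, n_ids, f"{overhead:.0%}")                     # 111111 123456 11%
```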
Properties of iterative deepening search
  Complete? Yes
  Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
  Space? O(bd)
  Optimal? Yes, if step cost = 1
Summary of algorithms
Repeated states
  Failure to detect repeated states can turn a linear problem into an exponential one!
Graph search
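Graph search is tree search plus a closed set of already-expanded states, which handles the repeated-state problem above. A minimal sketch with a FIFO fringe, using the Problem interface sketched earlier (states must be hashable):

```python
from collections import deque

def graph_search(problem):
    """Tree search plus a closed set: never expand a state twice."""
    fringe = deque([(problem.initial, [])])                # FIFO here; other orders work too
    closed = set()                                         # states already expanded
    while fringe:
        state, path = fringe.popleft()
        if problem.goal_test(state):
            return path
        if state in closed:
            continue                                       # repeated state: skip it
        closed.add(state)
        for action, next_state in problem.actions(state):
            fringe.append((next_state, path + [action]))
    return None
```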
Summary
  Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
  Variety of uninformed search strategies
  Iterative deepening search uses only linear space and not much more time than other uninformed algorithms