1
|
|
2
|
- Problem-solving agents
- Problem types
- Problem formulation
- Example problems
- Basic search algorithms
|
3
|
|
4
|
- On holiday in Romania; currently in Arad.
- Flight leaves tomorrow from Bucharest.
- Formulate goal: be in Bucharest
- Formulate problem:
- states: various cities
- actions: drive between cities
- Find solution:
- sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
|
5
|
|
6
|
- Deterministic, fully observable → single-state problem
- Agent knows exactly which state it will be in; solution is a sequence
- Non-observable → sensorless problem (conformant problem)
- Agent may have no idea where it is; solution is a sequence
- Nondeterministic and/or partially observable → contingency problem
- percepts provide new information about current state
- often interleave search, execution
- Unknown state space → exploration problem
|
7
|
- Single-state, start in #5. Solution?
|
8
|
- Single-state, start in #5. Solution? [Right, Suck]
- Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution?
|
9
|
- Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
- Contingency
- Nondeterministic: Suck may dirty a clean carpet
- Partially observable: location, dirt at current location
- Percept: [L, Clean], i.e., start in #5 or #7. Solution?
|
10
|
- Sensorless, start in {1,2,3,4,5,6,7,8}, e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
- Contingency
- Nondeterministic: Suck may dirty a clean carpet
- Partially observable: location, dirt at current location
- Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]
|
11
|
- A problem is defined by four items:
- initial state e.g., "at Arad"
- actions or successor function S(x) = set of action–state pairs
- e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
- goal test, can be
- explicit, e.g., x = "at Bucharest"
- implicit, e.g., Checkmate(x)
- path cost (additive)
- e.g., sum of distances, number of actions executed, etc.
- c(x,a,y) is the step cost, assumed to be ≥ 0
- A solution is a sequence of actions leading from the initial state to a goal state (a minimal sketch of such a problem definition follows)
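As a concrete illustration of these four items, here is a minimal Python sketch of a route-finding problem object for the Romania example. The class and method names (RouteProblem, successors, goal_test, step_cost) and the tiny road map are illustrative assumptions, not a fixed API.

```python
class RouteProblem:
    """Hypothetical sketch of the four-item problem definition."""

    def __init__(self, roads, initial, goal):
        self.roads = roads            # city -> list of (action, next_city)
        self.initial_state = initial  # initial state, e.g. "at Arad"
        self.goal = goal

    def successors(self, state):
        # Successor function S(x): set of <action, state> pairs.
        return self.roads.get(state, [])

    def goal_test(self, state):
        # Explicit goal test, e.g. x = "at Bucharest".
        return state == self.goal

    def step_cost(self, state, action, next_state):
        # Step cost c(x, a, y), assumed >= 0; here 1 per action.
        return 1

# Tiny illustrative road map (only a few of the real edges).
roads = {"Arad": [("Arad → Zerind", "Zerind"), ("Arad → Sibiu", "Sibiu")]}
problem = RouteProblem(roads, "Arad", "Bucharest")
print(problem.successors("Arad"))   # [('Arad → Zerind', 'Zerind'), ...]
```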
|
12
|
- Real world is absurdly complex
- → state space must be abstracted for problem solving
- (Abstract) state = set of real states
- (Abstract) action = complex combination of real actions
- e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
- For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
- (Abstract) solution = set of real paths that are solutions in the real world
- Each abstract action should be "easier" than the original problem
|
13
|
- states?
- actions?
- goal test?
- path cost?
|
14
|
- states? integer dirt and robot location
- actions? Left, Right, Suck
- goal test? no dirt at all locations
- path cost? 1 per action
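A hedged sketch of this formulation for a two-square world: a state pairs the robot's location with the set of dirty squares, the three actions generate successors as below, and the goal test checks that no dirt remains. Function names and the "A"/"B" square labels are illustrative.

```python
def vacuum_successors(state):
    # state = (robot location, frozenset of dirty squares); squares are "A", "B".
    loc, dirt = state
    return [
        ("Left",  ("A", dirt)),
        ("Right", ("B", dirt)),
        ("Suck",  (loc, dirt - {loc})),   # Suck removes dirt at the current square
    ]

def vacuum_goal_test(state):
    _, dirt = state
    return not dirt                       # no dirt at all locations

start = ("A", frozenset({"A", "B"}))      # robot on the left, both squares dirty
print(vacuum_successors(start))
```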
|
15
|
- states?
- actions?
- goal test?
- path cost?
|
16
|
- states? locations of tiles
- actions? move blank left, right, up, down
- goal test? = goal state (given)
- path cost? 1 per move
- [Note: optimal solution of n-Puzzle family is NP-hard]
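A hedged sketch of the successor function for this formulation: the state is a 3×3 tuple of tiles with 0 standing for the blank, and each action slides the blank one square. Function and constant names are illustrative.

```python
MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def puzzle_successors(state):
    board = [list(row) for row in state]
    # Locate the blank (the 0 tile).
    r, c = next((i, j) for i in range(3) for j in range(3) if board[i][j] == 0)
    succs = []
    for action, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            new = [row[:] for row in board]
            new[r][c], new[nr][nc] = new[nr][nc], new[r][c]   # slide the blank
            succs.append((action, tuple(map(tuple, new))))
    return succs

goal = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
print(len(puzzle_successors(goal)))   # 2 moves available from the goal state
```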
|
17
|
- states?: real-valued coordinates of robot joint angles and parts of the object to be assembled
- actions?: continuous motions of robot joints
- goal test?: complete assembly
- path cost?: time to execute
|
18
|
- Basic idea:
- offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)
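A minimal sketch of this idea, assuming the small problem interface sketched earlier (initial_state, successors, goal_test): keep a fringe of partially explored nodes and repeatedly expand one, where the search strategy decides which fringe entry to take next.

```python
def tree_search(problem, choose):
    # fringe holds (state, list of actions taken so far)
    fringe = [(problem.initial_state, [])]
    while fringe:
        state, actions = choose(fringe)            # strategy picks a node to expand
        if problem.goal_test(state):
            return actions
        for action, succ in problem.successors(state):   # expand: generate successors
            fringe.append((succ, actions + [action]))
    return None                                    # no solution found

# choose = lambda f: f.pop(0) gives FIFO (breadth-first) expansion,
# choose = lambda f: f.pop()  gives LIFO (depth-first) expansion.
```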
|
19
|
|
20
|
|
21
|
|
22
|
|
23
|
- A state is a (representation of) a physical configuration
- A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth
- The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states (a sketch follows below)
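A minimal sketch of the node data structure and Expand described above, assuming the successors/step_cost interface from the earlier problem sketch; the names are illustrative.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                      # the physical configuration this node represents
    parent: Optional["Node"] = None
    action: Any = None
    path_cost: float = 0.0          # g(x)
    depth: int = 0

def expand(problem, node):
    # Create child nodes, filling in the fields from the successor function.
    children = []
    for action, succ in problem.successors(node.state):
        cost = node.path_cost + problem.step_cost(node.state, action, succ)
        children.append(Node(succ, node, action, cost, node.depth + 1))
    return children
```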
|
24
|
- A search strategy is defined by picking the order of node expansion
- Strategies are evaluated along the following dimensions:
- completeness: does it always find a solution if one exists?
- time complexity: number of nodes generated
- space complexity: maximum number of nodes in memory
- optimality: does it always find a least-cost solution?
- Time and space complexity are measured in terms of
- b: maximum branching factor of the search tree
- d: depth of the least-cost solution
- m: maximum depth of the state space (may be ∞)
|
25
|
- Uninformed search strategies use only the information available in the
problem definition
- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
|
26
|
- Expand shallowest unexpanded node
- Implementation:
- fringe is a FIFO queue, i.e., new successors go at end
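A minimal sketch of this on an explicit graph; the adjacency dict and node names are made up for illustration. The fringe is a FIFO queue of paths, so the shallowest unexpanded node is always expanded next.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    fringe = deque([[start]])          # FIFO queue of paths
    while fringe:
        path = fringe.popleft()        # shallowest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):   # new successors go at the end
            fringe.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search(graph, "A", "D"))   # ['A', 'B', 'D']
```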
|
30
|
- Complete? Yes (if b is finite)
- Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
- Space? O(b^(d+1)) (keeps every node in memory)
- Optimal? Yes (if cost = 1 per step)
- Space is the bigger problem (more than time)
|
31
|
- Expand least-cost unexpanded node
- Implementation:
- fringe = queue ordered by path cost
- Equivalent to breadth-first if step costs all equal
- Complete? Yes, if step cost ≥ ε
- Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
- Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
- Optimal? Yes – nodes expanded in increasing order of g(n)
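A sketch of uniform-cost search with the fringe as a priority queue ordered by g(n). The road distances are the usual values from the Romania running example (a small subset of the map only); the function name is illustrative.

```python
import heapq

romania = {
    "Arad":           [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu":          [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras":        [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti":        [("Bucharest", 101)],
    "Zerind": [], "Timisoara": [], "Bucharest": [],
}

def uniform_cost_search(graph, start, goal):
    fringe = [(0, start, [start])]               # (g, state, path), ordered by g
    while fringe:
        g, state, path = heapq.heappop(fringe)   # least-cost unexpanded node first
        if state == goal:
            return g, path
        for succ, step in graph[state]:
            heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None

print(uniform_cost_search(romania, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```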
|
32
|
- Expand deepest unexpanded node
- Implementation:
- fringe = LIFO queue, i.e., put successors at front
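A sketch of the same loop as breadth-first search with the fringe changed to a LIFO stack, so the most recently generated (deepest) node is expanded next. Graph and names are illustrative; no repeated-state checking is done here.

```python
def depth_first_search(graph, start, goal):
    fringe = [[start]]                  # LIFO stack of paths
    while fringe:
        path = fringe.pop()             # deepest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        for succ in reversed(graph.get(node, [])):   # keep left-to-right order
            fringe.append(path + [succ])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_first_search(graph, "A", "D"))   # ['A', 'B', 'D']
```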
|
44
|
- Complete? No: fails in infinite-depth spaces, spaces with loops
- Modify to avoid repeated states along path
- → complete in finite spaces
- Time? O(b^m): terrible if m is much larger than d
- but if solutions are dense, may be much faster than breadth-first
- Space? O(bm), i.e., linear space!
- Optimal? No
|
45
|
- = depth-first search with depth limit l,
- i.e., nodes at depth l have no successors
- Recursive implementation:
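The recursive implementation can be sketched as follows (an illustrative version for an explicit graph, not the book's exact pseudocode): it returns a path on success, None on definite failure, and the string "cutoff" when the depth limit was hit somewhere, so failure at a given limit is not conclusive.

```python
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                      # depth limit reached
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result           # solution found below this node
    return "cutoff" if cutoff_occurred else None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(depth_limited_search(graph, "A", "D", 1))   # 'cutoff'
print(depth_limited_search(graph, "A", "D", 2))   # ['A', 'B', 'D']
```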
|
46
|
|
47
|
|
48
|
|
49
|
|
50
|
|
51
|
- Number of nodes generated in a depth-limited search to depth d with branching factor b:
- N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
- Number of nodes generated in an iterative deepening search to depth d with branching factor b:
- N_IDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
- For b = 10, d = 5:
- N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
- N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
- Overhead = (123,456 − 111,111)/111,111 = 11%
|
52
|
- Complete? Yes
- Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
- Space? O(bd)
- Optimal? Yes, if step cost = 1
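Iterative deepening is then just a loop over increasing depth limits; this sketch reuses the depth_limited_search sketch shown earlier and stops as soon as a limit yields something other than a cutoff.

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result      # a path, or None meaning the space was exhausted
    return None
```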
|
53
|
|
54
|
- Failure to detect repeated states can turn a linear problem into an
exponential one!
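One standard remedy is graph search: keep an explored set of states already expanded and never expand a state twice. A minimal sketch on an explicit graph (names are illustrative), modifying the breadth-first loop shown earlier:

```python
from collections import deque

def breadth_first_graph_search(graph, start, goal):
    fringe = deque([[start]])
    explored = set()                  # states already expanded
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:          # skip repeated states
            continue
        explored.add(node)
        for succ in graph.get(node, []):
            fringe.append(path + [succ])
    return None
```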
|
55
|
|
56
|
- Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
- Variety of uninformed search strategies
- Iterative deepening search uses only linear space and not much more time than other uninformed algorithms
|