1
Solving problems by searching
  • Chapter 3
2
Outline
  • Problem-solving agents
  • Problem types
  • Problem formulation
  • Example problems
  • Basic search algorithms
3
Problem-solving agents
4
Example: Romania
  • On holiday in Romania; currently in Arad.
  • Flight leaves tomorrow from Bucharest
  • Formulate goal:
    • be in Bucharest
  • Formulate problem:
    • states: various cities
    • actions: drive between cities
  • Find solution:
    • sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
5
Example: Romania
6
Problem types
  • Deterministic, fully observable → single-state problem
    • Agent knows exactly which state it will be in; solution is a sequence
  • Non-observable → sensorless problem (conformant problem)
    • Agent may have no idea where it is; solution is a sequence
  • Nondeterministic and/or partially observable → contingency problem
    • percepts provide new information about current state
    • often interleave search, execution
  • Unknown state space → exploration problem
7
Example: vacuum world
  • Single-state, start in #5. Solution?
8
Example: vacuum world
  • Single-state, start in #5.
    Solution? [Right, Suck]


  • Sensorless, start in
    {1,2,3,4,5,6,7,8} e.g.,
    Right goes to {2,4,6,8}
    Solution?
9
Example: vacuum world
  • Sensorless, start in
    {1,2,3,4,5,6,7,8} e.g.,
    Right goes to {2,4,6,8}
    Solution?
    [Right,Suck,Left,Suck]


  • Contingency
    • Nondeterministic: Suck may
      dirty a clean carpet
    • Partially observable: location, dirt at current location.
    • Percept: [L, Clean], i.e., start in #5 or #7
      Solution?
10
Example: vacuum world
  • Sensorless, start in
    {1,2,3,4,5,6,7,8} e.g.,
    Right goes to {2,4,6,8}
    Solution?
    [Right,Suck,Left,Suck]


  • Contingency
    • Nondeterministic: Suck may
      dirty a clean carpet
    • Partially observable: location, dirt at current location.
    • Percept: [L, Clean], i.e., start in #5 or #7
      Solution? [Right, if dirt then Suck]
11
Single-state problem formulation
  • A problem is defined by four items:


  • initial state e.g., "at Arad"
  • actions or successor function S(x) = set of action–state pairs
    • e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
  • goal test, can be
    • explicit, e.g., x = "at Bucharest"
    • implicit, e.g., Checkmate(x)
  • path cost (additive)
    • e.g., sum of distances, number of actions executed, etc.
    • c(x,a,y) is the step cost, assumed to be ≥ 0

  • A solution is a sequence of actions leading from the initial state to a goal state
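  • A minimal sketch of this four-part formulation in Python for the Romania example (the RomaniaProblem class and ROADS table are illustrative names, not from the slides; the distances follow the textbook map):

```python
# Illustrative sketch: initial state, successor function S(x),
# goal test, and additive step cost for a fragment of the Romania map.
ROADS = {  # step costs = road distances
    "Arad":           {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind":         {"Arad": 75},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

class RomaniaProblem:
    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial = initial                     # e.g., "at Arad"
        self.goal = goal

    def successors(self, state):
        """S(x) = set of action-state pairs."""
        return [("go " + city, city) for city in ROADS[state]]

    def goal_test(self, state):                    # explicit goal test
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return ROADS[state][next_state]            # c(x,a,y) >= 0
```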
12
Selecting a state space
  • Real world is absurdly complex
    • → state space must be abstracted for problem solving
  • (Abstract) state = set of real states
  • (Abstract) action = complex combination of real actions
    • e.g., "Arad à Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  • For guaranteed realizability, any real state "in Arad“ must get to some real state "in Zerind"
  • (Abstract) solution =
    • set of real paths that are solutions in the real world
  • Each abstract action should be "easier" than the original problem
13
Vacuum world state space graph





  • states?
  • actions?
  • goal test?
  • path cost?
14
Vacuum world state space graph





  • states? integer dirt and robot location
  • actions? Left, Right, Suck
  • goal test? no dirt at all locations
  • path cost? 1 per action
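  • A hedged sketch of this formulation in Python for the two-square world, with state = (robot location, dirt at A, dirt at B); the function names are assumptions, not from the slides:

```python
# Two-location vacuum world: state = (loc, dirt_a, dirt_b), loc in {"A", "B"}.
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                  # cleans only the current square
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    raise ValueError(action)

def goal_test(state):                     # no dirt at any location
    return not state[1] and not state[2]

def step_cost(state, action, next_state): # 1 per action
    return 1
```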
15
Example: The 8-puzzle





  • states?
  • actions?
  • goal test?
  • path cost?
16
Example: The 8-puzzle





  • states? locations of tiles
  • actions? move blank left, right, up, down
  • goal test? = goal state (given)
  • path cost? 1 per move


  • [Note: optimal solution of n-Puzzle family is NP-hard]
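  • A sketch of the 8-puzzle formulation in Python, with the state as a 9-tuple in row-major order and 0 marking the blank; the particular GOAL tuple below is an illustrative choice, since the slides take the goal state as given:

```python
# 8-puzzle: state is a 9-tuple in row-major order, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)        # illustrative goal configuration
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}   # blank offsets

def actions(state):
    row, col = divmod(state.index(0), 3)  # position of the blank
    legal = []
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    return legal

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]                 # square the blank moves into
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):                     # path cost: 1 per move
    return state == GOAL
```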


17
Example: robotic assembly





  • states?: real-valued coordinates of robot joint angles and parts of the object to be assembled
  • actions?: continuous motions of robot joints
  • goal test?: complete assembly
  • path cost?: time to execute
18
Tree search algorithms
  • Basic idea:
    • offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states); a sketch follows below
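  • A compact Python sketch of this loop; the fringe container is passed in, and its removal order is what distinguishes the search strategies discussed later (the tree_search name and the (state, path) node encoding are assumptions, not the book's pseudocode):

```python
# Generic tree search: remove a node from the fringe, test it, else expand it.
def tree_search(problem, fringe):
    """fringe: any container with append() and pop(); pop() order = strategy."""
    fringe.append((problem.initial, []))           # node = (state, action path)
    while fringe:
        state, path = fringe.pop()
        if problem.goal_test(state):
            return path                            # sequence of actions
        for action, next_state in problem.successors(state):
            fringe.append((next_state, path + [action]))
    return None                                    # no solution found
```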
19
Tree search example
20
Tree search example
21
Tree search example
22
Implementation: general tree search
23
Implementation: states vs. nodes
  • A state is a (representation of) a physical configuration
  • A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth







  • The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
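  • A sketch of this node structure and of an expand helper in Python; the field names follow the bullet above, while expand and solution are illustrative helper names (the slides call the successor generator SuccessorFn):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the physical configuration
    parent: Optional["Node"] = None  # parent node in the search tree
    action: Any = None               # action that produced this node
    path_cost: float = 0.0           # g(x)
    depth: int = 0

def expand(node, problem):
    """Create child nodes for every successor of node.state."""
    children = []
    for action, next_state in problem.successors(node.state):
        cost = node.path_cost + problem.step_cost(node.state, action, next_state)
        children.append(Node(next_state, node, action, cost, node.depth + 1))
    return children

def solution(node):
    """Read off the action sequence by following parent pointers."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]
```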
24
Search strategies
  • A search strategy is defined by picking the order of node expansion
  • Strategies are evaluated along the following dimensions:
    • completeness: does it always find a solution if one exists?
    • time complexity: number of nodes generated
    • space complexity: maximum number of nodes in memory
    • optimality: does it always find a least-cost solution?
  • Time and space complexity are measured in terms of
    • b: maximum branching factor of the search tree
    • d: depth of the least-cost solution
    • m: maximum depth of the state space (may be ∞)
25
Uninformed search strategies
  • Uninformed search strategies use only the information available in the problem definition
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Depth-limited search
  • Iterative deepening search
26
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation:
    • fringe is a FIFO queue, i.e., new successors go at end
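  • A minimal sketch of breadth-first search in Python, reusing the illustrative Node/expand/solution helpers from the states-vs-nodes slide (a deque serves as the FIFO fringe):

```python
from collections import deque

def breadth_first_search(problem):
    fringe = deque([Node(problem.initial)])   # FIFO queue
    while fringe:
        node = fringe.popleft()               # shallowest unexpanded node
        if problem.goal_test(node.state):
            return solution(node)
        fringe.extend(expand(node, problem))  # new successors go at the end
    return None
```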
27
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation:
    • fringe is a FIFO queue, i.e., new successors go at end
28
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation:
    • fringe is a FIFO queue, i.e., new successors go at end
29
Breadth-first search
  • Expand shallowest unexpanded node
  • Implementation:
    • fringe is a FIFO queue, i.e., new successors go at end
30
Properties of breadth-first search
  • Complete? Yes (if b is finite)
  • Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
  • Space? O(b^(d+1)) (keeps every node in memory)
  • Optimal? Yes (if cost = 1 per step)


  • Space is the bigger problem (more than time)
31
Uniform-cost search
  • Expand least-cost unexpanded node
  • Implementation:
    • fringe = queue ordered by path cost
  • Equivalent to breadth-first if step costs all equal
  • Complete? Yes, if step cost ≥ ε
  • Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) where C* is the cost of the optimal solution
  • Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
  • Optimal? Yes – nodes expanded in increasing order of g(n)
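  • A sketch of uniform-cost search in Python with the fringe as a priority queue ordered by g(n); it reuses the earlier illustrative helpers, and the tie-breaking counter is only there so heapq never compares Node objects:

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()
    fringe = [(0.0, next(tie), Node(problem.initial))]   # ordered by path cost
    while fringe:
        g, _, node = heapq.heappop(fringe)               # least-cost node
        if problem.goal_test(node.state):                # test on expansion
            return solution(node)
        for child in expand(node, problem):
            heapq.heappush(fringe, (child.path_cost, next(tie), child))
    return None
```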
32
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
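  • A sketch of depth-first tree search in Python with the fringe as a LIFO stack (a plain list), again reusing the illustrative helpers; as the properties slide notes, this version can loop forever in cyclic or infinite state spaces:

```python
def depth_first_search(problem):
    fringe = [Node(problem.initial)]          # LIFO stack
    while fringe:
        node = fringe.pop()                   # deepest unexpanded node
        if problem.goal_test(node.state):
            return solution(node)
        fringe.extend(expand(node, problem))  # successors go on top
    return None
```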
33
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
34
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
35
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
36
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
37
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
38
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
39
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
40
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
41
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
42
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
43
Depth-first search
  • Expand deepest unexpanded node
  • Implementation:
    • fringe = LIFO queue, i.e., put successors at front
44
Properties of depth-first search
  • Complete? No: fails in infinite-depth spaces, spaces with loops
    • Modify to avoid repeated states along path
      • → complete in finite spaces
  • Time? O(b^m): terrible if m is much larger than d
    •  but if solutions are dense, may be much faster than breadth-first
  • Space? O(bm), i.e., linear space!
  • Optimal? No
45
Depth-limited search
  • = depth-first search with depth limit l,
  • i.e., nodes at depth l have no successors


  • Recursive implementation (sketched below):
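  • A Python sketch along the lines of that recursive implementation, reusing the earlier illustrative helpers and distinguishing outright failure from hitting the depth limit:

```python
def depth_limited_search(problem, limit):
    return recursive_dls(Node(problem.initial), problem, limit)

def recursive_dls(node, problem, limit):
    if problem.goal_test(node.state):
        return solution(node)
    if node.depth == limit:
        return "cutoff"                       # nodes at depth l have no successors
    cutoff_occurred = False
    for child in expand(node, problem):
        result = recursive_dls(child, problem, limit)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result                     # a solution was found below
    return "cutoff" if cutoff_occurred else None
```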
46
Iterative deepening search
47
Iterative deepening search l =0
48
Iterative deepening search l =1
49
Iterative deepening search l =2
50
Iterative deepening search l =3
51
Iterative deepening search
  • Number of nodes generated in a depth-limited search to depth d with branching factor b:
  • N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d


  • Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  • N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d


  • For b = 10, d = 5,
    • N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
    • N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456

  • Overhead = (123,456 - 111,111)/111,111 = 11%
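  • Iterative deepening simply re-runs depth-limited search with limits 0, 1, 2, …; a sketch building on the depth_limited_search above:

```python
from itertools import count

def iterative_deepening_search(problem):
    for limit in count():                     # limit = 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result                     # a solution, or None on failure
```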
52
Properties of iterative deepening search
  • Complete? Yes
  • Time? (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
  • Space? O(bd)
  • Optimal? Yes, if step cost = 1
53
Summary of algorithms
54
Repeated states
  • Failure to detect repeated states can turn a linear problem into an exponential one!
55
Graph search
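  • Graph search augments tree search with a closed set of already-expanded states, so each repeated state is expanded at most once; a breadth-first sketch in Python, reusing the earlier illustrative helpers:

```python
from collections import deque

def graph_search(problem):
    fringe = deque([Node(problem.initial)])   # breadth-first variant
    closed = set()                            # states already expanded
    while fringe:
        node = fringe.popleft()
        if problem.goal_test(node.state):
            return solution(node)
        if node.state not in closed:          # skip repeated states
            closed.add(node.state)
            fringe.extend(expand(node, problem))
    return None
```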
56
Summary
  • Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored


  • Variety of uninformed search strategies


  • Iterative deepening search uses only linear space and not much more time than other uninformed algorithms