Chapter 4 addendum
(On-line search and Homework #1 discussion)
Many problems are offline: do all of the search for an action sequence first, then perform the actions.
Online search interleaves search & execution (a minimal agent sketch follows below).
It is necessary for exploration problems, where new observations only become available after acting.
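As a hedged illustration of that interleaving, here is a minimal sketch in the spirit of online depth-first search. It assumes hashable states and deterministic, reversible actions; the `actions_fn` and `goal_test` callbacks are placeholder names, not something given in the chapter.

```python
class OnlineDFSAgent:
    """Sketch of an online agent in the spirit of online depth-first search:
    it chooses one action per call, executes it in the world, and only learns
    the result of an action by observing the state it actually ends up in."""

    def __init__(self, actions_fn, goal_test):
        self.actions_fn = actions_fn      # state -> iterable of applicable actions
        self.goal_test = goal_test        # state -> bool
        self.result = {}                  # (state, action) -> observed successor
        self.untried = {}                 # state -> actions not yet tried there
        self.unbacktracked = {}           # state -> predecessors to back up to
        self.s = None                     # previous state
        self.a = None                     # previous action

    def __call__(self, s_prime):
        """Called once per percept with the current state; returns the next
        action to execute, or None to stop."""
        if self.goal_test(s_prime):
            return None
        if s_prime not in self.untried:
            self.untried[s_prime] = list(self.actions_fn(s_prime))
        if self.s is not None:
            # Record what the previously executed action actually did.
            self.result[(self.s, self.a)] = s_prime
            self.unbacktracked.setdefault(s_prime, []).insert(0, self.s)
        if not self.untried[s_prime]:
            if not self.unbacktracked.get(s_prime):
                return None               # nothing left to try, nowhere to back up to
            back_to = self.unbacktracked[s_prime].pop(0)
            # Assumes actions are reversible, so some already-tried action
            # from s_prime is known to lead back to that predecessor.
            self.a = next(b for b in self.actions_fn(s_prime)
                          if self.result.get((s_prime, b)) == back_to)
        else:
            self.a = self.untried[s_prime].pop()
        self.s = s_prime
        return self.a
```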
Evaluate an online agent by comparing the actual cost of its path with the best possible cost (if the agent had known the space in advance), e.g., 30/20 = 1.5. For cost, lower is better.
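This comparison is usually called the competitive ratio; written out with the slide's numbers:

```latex
\text{competitive ratio}
  \;=\; \frac{\text{actual cost of the path traversed}}
             {\text{best possible cost (space known in advance)}}
  \;=\; \frac{30}{20} \;=\; 1.5
```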
Exploration problems: the agent is physically located somewhere in the state space,
e.g., solving a maze using an agent with only local wall sensors.
It is sensible to expand the states that are easily accessible to the agent (i.e., local states).
Local search algorithms apply (e.g., hill-climbing; see the sketch below).
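For concreteness, a rough hill-climbing sketch for the maze example: the agent only ever considers the neighboring cells its local wall sensors report as open, and moves toward the one that looks best under a heuristic. The grid representation, the `open_neighbors` sensor callback, and the Manhattan-distance heuristic are illustrative assumptions, not part of the slide.

```python
import random

def manhattan(cell, goal):
    """Illustrative heuristic: grid distance from a cell to the goal cell."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def hill_climb_maze(start, goal, open_neighbors, max_steps=10_000):
    """Hill-climbing restricted to locally accessible states: at each step the
    agent asks its wall sensors for the open neighboring cells and moves to the
    one with the best heuristic value. Plain hill-climbing can get stuck in
    dead ends, so real solutions need an escape mechanism (random moves here;
    backtracking or LRTA*-style learning would be better)."""
    current = start
    for _ in range(max_steps):
        if current == goal:
            return True, current
        neighbors = open_neighbors(current)        # only locally sensed states
        if not neighbors:
            return False, current                  # completely walled in
        best = min(neighbors, key=lambda c: manhattan(c, goal))
        if manhattan(best, goal) >= manhattan(current, goal):
            best = random.choice(neighbors)        # local minimum: sideways/uphill move
        current = best
    return False, current
```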
What heuristics can be used?
What type of search algorithm makes sense?
There is no explicit goal state, just an implicit one.
There are many levels of sophistication in doing this assignment!
A solution that only tries to extend the pipeline will get a minimal score.
A solution that uses search techniques over the queue will get at least an average score.
A solution that combines this with the use of heuristics will do better (a generic skeleton is sketched after this list).
A small but significant portion of your grade will be competitively assigned.
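Purely as an illustration of the difference between "just extending the pipeline" and "searching over the queue with a heuristic", here is a generic greedy best-first skeleton over a priority queue. The `start`, `is_goal`, `expand`, and `h` names are placeholders; the actual states and data structures for Homework #1 are assignment-specific and not reproduced here. Since the goal is only implicit, `is_goal` would be whatever predicate recognizes an acceptable state rather than one named goal state.

```python
import heapq

def best_first(start, is_goal, expand, h):
    """Greedy best-first search over a priority queue ordered by a heuristic h.
    States must be hashable; expand(state) yields successor states."""
    counter = 0                                    # tie-breaker so states are never compared
    frontier = [(h(start), counter, start, [start])]
    seen = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path                            # sequence of states from start to goal
        if state in seen:
            continue
        seen.add(state)
        for succ in expand(state):
            if succ not in seen:
                counter += 1
                heapq.heappush(frontier, (h(succ), counter, succ, path + [succ]))
    return None                                    # queue exhausted without reaching a goal
```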