1
Foundations of Artificial Intelligence
  • Revision
2
Final Exam
  • Venue: PGP General Purpose Room
  • Date: 23 April (Friday)
  • Time: 2:30 – 4:30 pm
3
Format
  • One A4-sized sheet allowed in the exam
  • Eight questions, emphasizing material covered after the midterm
    • Yes, all material in the course will be covered on the exam
4
No class next week
  • Today is the final lecture for the course


  • Your “extra” lecture was delivered as a webcast: the advanced-topics lecture on vision, NLP, or robotics
5
Outline
  • Agents
  • Search
    • Uninformed Search
    • Informed Search
  • Adversarial Search
  • Constraint Satisfaction
  • Knowledge-Based Agents
  • Uncertainty and Learning
6
Agent types
  • Four basic types in order of increasing generality:
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
7
Simple reflex agents
8
Model-based reflex agents
9
Goal-based agents
10
Utility-based agents
11
Creating agents
  • Where does the intelligence come from?
  • Coded by the designers
    • Knowledge representation – propositional and first-order logic
  • Learned by the machine
    • Machine learning – expose naïve agent to examples to learn useful actions


12
Learning agents
13
Searching for solutions
  • In most agent architectures, deciding what action to take involves considering alternatives


  • Search algorithms are judged on optimality, completeness and complexity (time and space)


  • Do I have a way of gauging how close I am to a goal?
    • No: uninformed search
    • Yes: informed (heuristic) search
14
Uninformed search
  • Formulate the problem, search and then execute actions


  • Apply Tree-Search


  • For environments that are
    • Deterministic
    • Fully observable
    • Static
15
Tree search algorithm
  • Basic idea:
    • offline, simulated exploration of state space by generating successors of already-explored states
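A minimal Python sketch of this idea (not the course's reference code); `is_goal` and `successors` are hypothetical callables supplied by the problem formulation:

```python
from collections import deque

def tree_search(initial_state, is_goal, successors):
    """Generic Tree-Search: offline, simulated exploration of the state space
    by generating successors of already-generated states."""
    frontier = deque([initial_state])       # states generated but not yet expanded
    while frontier:
        state = frontier.popleft()          # the queue discipline defines the strategy
        if is_goal(state):
            return state
        frontier.extend(successors(state))  # expand: generate this state's successors
    return None                             # frontier exhausted without reaching a goal
```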
16
Summary of algorithms
  • Breadth-First – FIFO order
  • Uniform-Cost – in order of cost
  • Depth-First – LIFO order
  • Depth-Limited – DFS to a maximum depth
  • Iterative Deepening – Iterative DLS.


  • Bidirectional – also search from goal towards origin
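Under stated assumptions (a hypothetical toy state space `GRAPH` mapping each state to (successor, step-cost) pairs), the uninformed strategies differ mainly in the frontier's queue discipline:

```python
import heapq
from collections import deque

# Hypothetical toy state space: state -> [(successor, step_cost), ...]
GRAPH = {'A': [('B', 1), ('C', 5)], 'B': [('D', 2)], 'C': [('D', 1)], 'D': []}

def breadth_first(start, goal):
    frontier = deque([start])               # FIFO queue: shallowest states expanded first
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return s
        frontier.extend(n for n, _ in GRAPH[s])

def depth_first(start, goal):
    frontier = [start]                      # LIFO stack: deepest states expanded first
    while frontier:
        s = frontier.pop()
        if s == goal:
            return s
        frontier.extend(n for n, _ in GRAPH[s])

def uniform_cost(start, goal):
    frontier = [(0, start)]                 # priority queue ordered by path cost g(n)
    while frontier:
        g, s = heapq.heappop(frontier)
        if s == goal:
            return g                        # cost of the cheapest path found
        for n, c in GRAPH[s]:
            heapq.heappush(frontier, (g + c, n))
```

Depth-limited search adds a depth cutoff to `depth_first`, and iterative deepening repeats it with increasing limits.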
17
Repeated states: Graph-Search
18
Informed search
  • Heuristic function h(n) = estimated cost of the cheapest path from n to a goal


  • Greedy Best First Search
    • Minimizing estimated cost to the goal, f(n) = h(n)
  • A* Search
    • Minimizing estimated total cost, f(n) = g(n) + h(n) (sketch below)
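A sketch of both strategies in one routine, assuming hypothetical `successors` and `h` callables; the only difference is the priority used to order the frontier:

```python
import heapq

def best_first(start, goal, successors, h, use_path_cost=True):
    """Greedy Best-First (use_path_cost=False) orders the frontier by h(n);
    A* (use_path_cost=True) orders it by f(n) = g(n) + h(n).
    No repeated-state check here (Tree-Search); Graph-Search would add a closed set."""
    frontier = [(h(start), 0, start)]       # (priority, g, state)
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == goal:
            return g                        # cost of the path found
        for nxt, step in successors(state):
            g2 = g + step
            priority = (g2 + h(nxt)) if use_path_cost else h(nxt)
            heapq.heappush(frontier, (priority, g2, nxt))
    return None
```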
19
Properties of heuristic functions
  • Admissible: never overestimates cost
  • Consistent: the estimated cost from node n is no more than the step cost plus the estimated cost from its successor, h(n) ≤ c(n, n′) + h(n′)


  • A* using Tree-Search is optimal if the heuristic used is admissible
    • Graph-Search needs a consistent heuristic. Why?

20
Local search
  • Good for problems where the path to the solution doesn’t matter
    • Often work on complete-state formulations (only the current state is kept)
    • Don’t search systematically
    • Often require very little memory


  • Related to online search
    • Has access only to the local state
21
Local search algorithms
  • Hill climbing search – choose best successor
  • Beam search – keep the best k successors
  • Simulated annealing – allow downhill (“backward”) moves, mostly during the early steps (sketch below)
  • Genetic algorithm – breed k successors using crossover and mutation
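Hedged sketches of the first and third of these, assuming hypothetical `neighbours(state)` (returns a list of successors) and `value(state)` (the objective to maximize):

```python
import math
import random

def hill_climbing(state, neighbours, value):
    """Move to the best successor until no successor improves the value (a local maximum)."""
    while True:
        best = max(neighbours(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state                    # no uphill move left
        state = best

def simulated_annealing(state, neighbours, value, schedule, steps=10_000):
    """Accept some downhill ("backward") moves, mostly early while the temperature is high."""
    for t in range(1, steps + 1):
        T = schedule(t)                     # temperature decreases over time
        if T <= 0:
            break
        nxt = random.choice(neighbours(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = nxt                     # always take uphill moves; downhill with prob. e^(delta/T)
    return state
```

Beam search keeps the k best successors of the current population at each step; genetic algorithms instead recombine and mutate them.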
22
Searching in specialized scenarios
  • Properties of the problem often allow us to formulate
    • Better heuristics
    • Better search strategy and pruning

  • Adversarial search
    • Working against an opponent
  • Constraint satisfaction problem
    • Assigning values to variables
    • Path to solution doesn’t matter
    • View this as an incremental search
23
Adversarial Search
  • Turn-taking, two-player, zero-sum games
  • Minimax algorithm:
    • One ply: one move by a single player
    • Max nodes: agent’s move, maximize utility
    • Min nodes: opponent’s move, minimize utility
    • Alpha-Beta pruning: skips branches that cannot affect the final decision (sketch below)
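A sketch of minimax with alpha-beta pruning, assuming hypothetical `successors`, `utility`, and `is_terminal` callables describing the game:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, successors, utility, is_terminal):
    """Minimax value of a state in a two-player, zero-sum, turn-taking game,
    with alpha-beta pruning of branches that cannot affect the decision."""
    if depth == 0 or is_terminal(state):
        return utility(state)
    if maximizing:                                        # MAX node: agent's move
        best = -math.inf
        for child in successors(state):
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       successors, utility, is_terminal))
            alpha = max(alpha, best)
            if alpha >= beta:                             # prune: MIN will never allow this branch
                break
        return best
    else:                                                 # MIN node: opponent's move
        best = math.inf
        for child in successors(state):
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       successors, utility, is_terminal))
            beta = min(beta, best)
            if alpha >= beta:                             # prune: MAX will never allow this branch
                break
        return best
```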
24
Constraint Satisfaction
  • Discrete or continuous solutions
    • Discretize and limit possible values


  • Modeled as a constraint graph


  • As the path to the solution doesn’t matter, local search can be very useful.
25
Techniques in CSPs
  • Basic: backtracking search
    • DFS for CSP
    • A leaf node (at depth v, one level per variable) is a solution (sketch below)


  • Speed ups
    • Choosing variables
      • Minimum remaining values (most constrained variable)
      • Degree heuristic (most constraining variable)
    • Choosing values
      • Least constraining value
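A minimal backtracking sketch with the MRV variable-ordering heuristic; `constraints` is assumed to be a list of hypothetical predicate functions that return True when a (possibly partial) assignment does not violate them:

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """DFS over partial assignments; a complete assignment (depth = number of
    variables) that violates no constraint is a solution."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    # MRV: choose the unassigned variable with the smallest domain
    # (domains would shrink dynamically if combined with forward checking)
    var = min((v for v in variables if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        assignment[var] = value
        if all(ok(assignment) for ok in constraints):
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                 # undo and try the next value
    return None
```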
26
Pruning CSP search space
  • Before expanding a node, we can prune the search space
  • Forward checking
    • Pruning values from remaining variables
  • Arc consistency
    • Propagating stronger levels of consistency
    • E.g., AC-3 (applicable both before and during search; sketch below)

  • Balance the cost of enforcing arc consistency against the cost of the actual search.
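A sketch of AC-3 under stated assumptions: `domains` maps variables to sets of values, `neighbours` maps each variable to the variables it shares a constraint with, and `satisfies(xi, x, xj, y)` is a hypothetical check of the binary constraint between xi and xj:

```python
from collections import deque

def ac3(domains, neighbours, satisfies):
    """Make every arc consistent by pruning values that have no support;
    usable both before search (preprocessing) and after each assignment."""
    queue = deque((xi, xj) for xi in domains for xj in neighbours[xi])
    while queue:
        xi, xj = queue.popleft()
        revised = False
        for x in list(domains[xi]):
            # remove x if no value of xj is compatible with it
            if not any(satisfies(xi, x, xj, y) for y in domains[xj]):
                domains[xi].remove(x)
                revised = True
        if revised:
            if not domains[xi]:
                return False                # a domain emptied: inconsistency detected
            for xk in neighbours[xi]:
                if xk != xj:
                    queue.append((xk, xi))  # xk's values may have lost their support
    return True
```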
27
Propositional and First Order Logic
  • Propositional Logic
    • Facts are true or false


  • First Order Logic
    • Relationships and properties of objects
    • More expressive and succinct
      • Quantifiers, functions
      • Equality operator
    • Can convert back to propositional logic to do inference (propositionalization)
28
Inference in logic
  • Given a KB, what can be inferred?
    • Query- or goal-driven
      • Backward chaining, model checking (e.g. DPLL), resolution


    • Deducing new facts
      • Forward chaining
        • Efficiency: track the number of unsatisfied premise literals with a count, or use Rete networks (sketch below)
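A sketch of propositional forward chaining with the premise-count trick, assuming definite clauses given as hypothetical `(premises, conclusion)` pairs over symbol strings:

```python
def forward_chaining(clauses, facts, query):
    """Derive new facts until the query appears or nothing new can be concluded.
    count[i] tracks how many premise literals of clause i are still unsatisfied."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = list(facts)                    # known symbols not yet processed
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1               # one fewer premise left to satisfy
                if count[i] == 0:
                    agenda.append(conclusion)
    return False
```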
29
Inference in logic
  • Chaining
    • Requires Horn (definite) clauses
    • Uses Modus Ponens for sound reasoning
    • Forward or Backward types


  • Resolution
    • Requires Conjunctive Normal Form
    • Uses Resolution for sound reasoning
    • Proof by Contradiction
30
Inference in FOL
  • Don’t have to propositionalize
    • Propositionalizing with functions could lead to infinitely many sentences


  • Use unification instead
    • Standardizing apart
    • Dropping quantifiers
      • Skolem constants and functions

  • Inference is semidecidable
    • Can say yes for entailed sentences, but for non-entailed sentences the procedure may never terminate
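A small unification sketch (not the course's reference code), representing variables as strings beginning with '?' and compound terms as tuples; the occur check is omitted for brevity:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def unify(x, y, s=None):
    """Return a substitution (dict) that makes x and y identical, or None."""
    if s is None:
        s = {}
    if x == y:
        return s
    if is_var(x):
        return unify_var(x, y, s)
    if is_var(y):
        return unify_var(y, x, s)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):            # unify argument by argument
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None                             # constants or arities differ

def unify_var(v, t, s):
    if v in s:
        return unify(s[v], t, s)
    if is_var(t) and t in s:
        return unify(v, s[t], s)
    return {**s, v: t}                      # bind the variable

print(unify(('Knows', '?x', 'John'), ('Knows', 'Mary', '?y')))
# {'?x': 'Mary', '?y': 'John'}
```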
31
Connection to knowledge-based agents
  • CSPs can be formulated as logic problems and vice versa
  • CSP search as model checking





  • Local search: WalkSAT with min-conflict heuristic
32
Inference and CSPs
  • Solving a CSP via inference
    • Handles special constraints (e.g., AllDiff)
    • Can learn new constraints not expressed by KB designer
  • Solving inference via CSP
    • Checking whether a query is true in every model of the KB, e.g., by testing whether KB ∧ ¬query is unsatisfiable


  • Melding the two: Constraint Logic Programming (CLP)


33
Uncertainty
  • Leads us to use probabilistic agents
    • Only one of many possible methods!


  • Modeled in terms of random variables
    • Again, we examined only the discrete case


  • Answer questions based on full joint distribution


34
Inference by enumeration
  • Interested in the posterior joint distribution of query variables given specific values for evidence variables
    • Summing over hidden variables
    • Cons: Exponential complexity


  • Look for absolute and conditional independence to reduce complexity
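A sketch of enumeration over a full joint distribution of Boolean variables; the table below is a hypothetical toy distribution, not data from the course:

```python
from itertools import product

def enumerate_query(joint, variables, query_var, evidence):
    """P(query_var | evidence): sum the joint over the hidden variables, then normalize.
    joint maps a tuple of truth values (in `variables` order) to a probability."""
    dist = {}
    for q_val in (True, False):
        hidden = [v for v in variables if v != query_var and v not in evidence]
        total = 0.0
        for values in product((True, False), repeat=len(hidden)):
            world = dict(evidence)
            world[query_var] = q_val
            world.update(zip(hidden, values))
            total += joint[tuple(world[v] for v in variables)]
        dist[q_val] = total
    norm = sum(dist.values())               # normalization constant
    return {v: p / norm for v, p in dist.items()}

variables = ['Cavity', 'Toothache']
joint = {(True, True): 0.12, (True, False): 0.08,
         (False, True): 0.08, (False, False): 0.72}
print(enumerate_query(joint, variables, 'Cavity', {'Toothache': True}))
# approximately {True: 0.6, False: 0.4}
```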
35
Bayesian networks
  • One way to model dependencies
  • A variable’s probability depends only on its parents
  • Use the product rule and conditional independence to calculate joint probabilities
  • Easiest to structure causally
    • From root causes forward
    • Leads to easier modeling and lower complexity
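A sketch of that product, assuming a hypothetical CPT representation where each table stores P(variable = True | parent values):

```python
def joint_probability(assignment, parents, cpt):
    """Full joint entry from a Bayesian network: the product over every variable
    of P(variable | its parents), read from conditional probability tables."""
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[par] for par in parents[var])
        p_true = cpt[var][parent_vals]      # P(var = True | parents take these values)
        p *= p_true if value else 1.0 - p_true
    return p

# Hypothetical two-node network: Rain -> WetGrass (numbers are made up)
parents = {'Rain': (), 'WetGrass': ('Rain',)}
cpt = {'Rain': {(): 0.2},
       'WetGrass': {(True,): 0.9, (False,): 0.1}}
print(joint_probability({'Rain': True, 'WetGrass': True}, parents, cpt))  # 0.2 * 0.9
```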

36
Learning
  • Inductive learning - based on past examples
  • Learn a hypothesis h(x) that approximates the true function f(x) on examples x


  • Balance complexity of the hypothesis with fidelity to the examples
    • Minimize α·E(h, D) + (1 − α)·C(h), where E is the error of h on the examples D and C is the complexity of h



37
Learning Algorithms
  • Many out there but the basics are:
  • K nearest neighbors
    • Instance-based
    • Ignores global information
  • Naïve Bayes
    • Strong independence assumption
    • Scales well due to its independence assumption
    • Needs smoothing (e.g., a Laplace correction) when dealing with unseen feature values
  • Decision Trees
    • Easy to understand its hypothesis
    • Chooses the feature to split on based on information gain
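For concreteness, a k-nearest-neighbours sketch on hypothetical toy data (the other two classifiers would replace the distance vote with probability estimates or tree splits):

```python
import math
from collections import Counter

def knn_classify(query, examples, k=3):
    """Label a query by majority vote of the k closest training examples.
    Purely instance-based: no global model of the data is built."""
    by_distance = sorted(examples, key=lambda ex: math.dist(query, ex[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy data: two numeric features, labels 'a' / 'b'
train = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'),
         ((1.0, 1.0), 'b'), ((0.9, 1.1), 'b')]
print(knn_classify((0.2, 0.1), train))      # expected: 'a'
```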
38
Training and testing
  • Judge induced h()’s quality by using a test set
  • Training and test set must be separate; otherwise peeking occurs
  • Modeling noise or specifics of the training data can lead to overfitting
    • Use pruning to remove parts of the hypothesis that aren’t justifiable
39
Where to go from here?
  • Just the tip of the iceberg


  • Many advanced topics
    • Introduced only a few
    • The textbook can guide further exploration of AI
40
That’s it
  • Thanks for your attention over the semester
  • See you in April!