CS372/PHIL372
Artificial Intelligence
Overview
Lectures
- January 23
- Topics: AI background, The Science of AI, AI Philosophy
- Reading: Chapter 1, Chapter 2 (you are not expected to read this chapter), Chapter 26
- Lecture Notes
- January 30
- Topic: Search
- Reading: Chapter 3, Chapter 4
- Lecture Notes
- Feb 6
- Topic: Adversarial Search (Game Playing), Intro to Prolog (time permitting).
- Reading: Chapter 6, Backgammon
- Lecture Notes
- Feb 13
- Topic: Logic
- Reading: Chapters 7-8
- Lecture Notes; Prolog tutorial
- Feb 20
- Topic: More Logic
- Reading: Chapter 9, Soar (we will discuss this paper, but I do not expect that you will have read it thoroughly)
- Lecture Notes
- Feb 27
- Topic: Natural Language Understanding (and some more Prolog)
- Reading: Chapter 22. The lecture will also draw from the following papers, but I do not really expect you to read them.
- Lecture Notes
- March 6
- Topic: Intro to Learning
- Reading: Chapter 18, sections 1-3; Decision Forests. This paper will motivate part of the discussion, but I will not expect you to have read it.
- Lecture Notes
- March 20
- Topic: More Learning
- Reading: the remainder of Chapter 18. Also, I will discuss some of a paper by Ting et al., although I will not expect you to have read it.
- Lecture Notes
- March 27
- Topic: Background knowledge in learning
- Reading: Chapter 19.1. Also, we will discuss the following papers (but I will not expect you to have read them):
- Rivest's paper on PAC and learning decision lists
- I will entertain any and all questions you have about the material covered so far in class. If this takes the entire class, that is fine. I encourage you to come prepared with questions.
- Lecture Notes
- April 3
- NO CLASS -- do the midterm
- April 10
- Topic: Active example selection; instance-based algorithms and neural networks
- Reading: Chapter 20.4 and 20.5, and Lewis and Catlett's paper on uncertainty sampling. However, I will not expect that you have read this paper.
- Lecture Notes
- April 17
- Topic: Neural networks and support vector machines
- Reading: Chapter 20.5 and 20.6. I also drew information from 4 or 5 other papers, but I do not even list them.
- Lecture Notes
- April 24
- Topic: Bayesian approaches to uncertainty; the EM algorithm
- Lecture Notes
- May 1
- Topic: Game Tournament & course summary
- No lecture, so no notes. However, click here to get my Clobber player (class files only). To run my player, get the jar file and then run "java -cp geoff_clobber.jar Clobber5". Be warned: my game has only a cruddy text interface.
Note: everything in this schedule more than 7 days in the future is subject to change based on my perception of class interest. Also, holidays and vacations may intervene. The midterm will be the week of April 3, when there will be no class because I will be out of town.
Assignments
- Feb 1
- The 8-puzzle.
- A full-credit set of answers to the analysis section.
- Feb 11 -- DATE CHANGE
- The revenge of the 8-puzzle
My solutions for:
All of these programs take three arguments: a target file, a start file, and the matrix size. The matrix size is an integer like 3 or 4.
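For example (the program name here is hypothetical, since the solution links themselves are not reproduced on this page), an invocation on a standard 3x3 puzzle would look like:

```
solver target.txt start.txt 3
```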
- Feb 25 (Note date change)
- Getting started with Prolog
- March 6
- More Prolog
- To help (or hinder), here is a collection of small Prolog programs
- March 30
- A Prolog generator of English sentences. This assignment will be worth 150% of the value of the previous assignments.
- May 1
- The Game Clobber. This assignment will be worth 200% of the value of the first 4 assignments. While the base set of directions will not change, I will be enhancing the directions with more details of the tournament rules, etc. The base task will remain the same: build a good Clobber player.
Written Responses
- For Jan 30
- The "No Free Lunch" Theorem (Wolpert and Macready, 1995) , states roughly that for any learning program there is a large class of problems that it will be unable to solve. Morover, this class of problems can be solved by some other learning program. Hence, NFL seems to imply that AI is impossible? Comment.
- Deep Blue, the program that beat Kasparov at chess, used (roughly) iterative deepening without pruning to search for moves. Some have argued that this program represents a pinnacle for AI, others that it is not AI at all. Choose a position and defend it.
- one pair of responses
- For Feb 6
- Invent a heuristic for the 8-puzzle that is not admissible. Describe why it can lead to a suboptimal solution to a particular 8-puzzle. (Hint: consider an 8-puzzle that is really easy to solve.) A minimal demonstration of why admissibility matters follows this list.
- Describe a relatively common two-player game that is not zero-sum.
- a pair of responses
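The following sketch shows the mechanism behind the hint. It is entirely my own illustration, on a made-up three-node graph rather than an actual 8-puzzle: when the heuristic overestimates the true remaining cost, A* can commit to a suboptimal path before ever trying the optimal one.

```python
import heapq

# Toy graph: the optimal route S -> A -> G costs 2, but the inadmissible
# heuristic below overestimates at A (h = 5 vs. a true remaining cost of 1),
# so A* reaches the goal via the costlier S -> B -> G route first.
GRAPH = {"S": [("A", 1), ("B", 1)], "A": [("G", 1)], "B": [("G", 2)], "G": []}
H_BAD = {"S": 1, "A": 5, "B": 1, "G": 0}

def astar(start, goal, h):
    # Bare-bones tree-search A* (no closed set), enough for this demo.
    frontier = [(h[start], 0, start, [start])]
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in GRAPH[node]:
            heapq.heappush(frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))

print(astar("S", "G", H_BAD))  # (3, ['S', 'B', 'G']) -- misses the cost-2 path
```

With an admissible heuristic (one that never overestimates, e.g. h = 0 everywhere), the same code returns the cost-2 path.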
- For Feb 13
- The game "Connect 4" can be solved exactly. Yet, people still work on developing the "optimal" heuristics. Why would anyone do this?
- Are there problems that canot be solved using propositional logic? If yes, describe one. If no, why not?
- a pair of responses
- For Feb 20
- Is Cyc a dead end for AI? Why or why not?
- Describe the difference between forward and backward chaining, and situations in life where you have used each. (A toy sketch of both appears after this list.)
- A Cyc response; a chaining response
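For the mechanics of the distinction, here is a toy sketch (my own illustration, not course code): forward chaining runs data-driven from facts toward conclusions, while backward chaining runs goal-driven from a query back toward facts.

```python
# Toy Horn-clause rules: (set of premises, conclusion).
RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "maybe_penguin"),
]
FACTS = {"has_feathers", "lays_eggs", "cannot_fly"}

def forward_chain(facts, rules):
    """Data-driven: fire every applicable rule until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: prove the goal by recursively proving some rule's premises.
    (No cycle detection -- fine for an acyclic toy rule base.)"""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain(FACTS, RULES))                    # derives is_bird, then maybe_penguin
print(backward_chain("maybe_penguin", FACTS, RULES))  # True
```

Note the asymmetry: forward chaining does work proportional to everything derivable from the facts, while backward chaining does work proportional to the single query.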
- For Feb 27
- Why might you choose to use a forward rather than a backward chaining logic system (or the converse)?
- The problem of Natural Language Understanding has often been linked to the "Symbol Grounding Problem". (See the Wikipedia article for an overview.) Do you agree that the two are linked? Why?
- Departing from my norm, I give two responses to the second question: one arguing in favor and one arguing against.
- For March 6
- Using only words, describe a penguin so that a person (with a short attention span) who has never seen one could recognize it.
- Draw a decision tree that describes your actions with respect to some mundane daily activity, for instance (and you cannot use my example) picking food in the cafeteria. If you prefer, you may use pen, ink, and (ugh) paper.
- Penguin descriptions. I include two descriptions. However, I defy anyone to draw a penguin-like thing based only on these descriptions.
- For March 20
- One of the things that decision trees are supposed to be good at is making human-comprehensible decision functions. Using one of the web sites from last week's lecture notes, build a decision tree on an included dataset and then comment on the comprehensibility of the tree. (A programmatic sketch follows this list.)
- Define the words "probably" and "approximately" in the phrase "probably approximately correct". (The standard formulation also appears after this list.)
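If the web sites from the lecture notes are unavailable, a minimal scikit-learn sketch (my own choice of library and dataset, not part of the course materials) will also produce a tree you can judge for comprehensibility:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree on the classic iris dataset and print it as nested
# if/else rules -- the "human comprehensible" form the question asks about.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```

For reference on the second question, the standard PAC statement, with accuracy parameter ε (the "approximately") and confidence parameter δ (the "probably"), is:

Pr[ error(h) <= ε ] >= 1 - δ

That is, with probability at least 1 - δ over the random draw of training examples, the learner outputs a hypothesis h whose true error is at most ε.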
- For March 27
- No responses. Come to class with questions about the impending midterm.
- For April 3
- No responses. Do the midterm.
- For April 10
- No responses. Hand in the midterm.
- For April 17
- Draw a neural network with a set of weights such that it can correctly compute the following table:
Input 1 | Input 2 | Output
---|---|---
1 | 1 | 0
1 | 0 | 1
0 | 1 | 1
0 | 0 | 0
Your neural network should have 2 input units, 2 hidden units, and 1 output unit, with weighted links from each input unit to each hidden unit and from each hidden unit to the output unit. So there are 5 units and 6 links. The network should use linear threshold units (step functions). In addition to specifying the weight on each link, you will need to specify the point at which each unit changes from 0 to 1 (the threshold of the unit). (Hint: this can be done with only integer weights on the links; you will need some negatively weighted links. One workable assignment is sketched after this list.)
- Describe the analogy to the brain which results in people calling neural networks "neural networks".
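One workable weight assignment (a sketch of one possible answer, by no means the only one) can be checked mechanically: the hidden units compute OR-like and AND-like functions, and the output fires exactly when the first is on and the second is off.

```python
def step(x, threshold):
    """Linear threshold unit: outputs 1 once its input reaches the threshold."""
    return 1 if x >= threshold else 0

def xor_net(x1, x2):
    h1 = step(1 * x1 + 1 * x2, 1)    # fires if at least one input is on (OR)
    h2 = step(1 * x1 + 1 * x2, 2)    # fires only if both inputs are on (AND)
    return step(1 * h1 - 1 * h2, 1)  # fires if h1 on and h2 off (the negative link)

for x1 in (1, 0):
    for x2 in (1, 0):
        print(x1, x2, "->", xor_net(x1, x2))  # reproduces the table above
```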
- For April 24
- Support Vector Machines work by taking a problem and recasting it into some other space in which it is more easily solved. The No Free Lunch theorem says that this approach cannot always work. Describe a problem on which you would expect an SVM to either fail or at least perform poorly.
- Bayes' Law is simply Pr(A|B) = (Pr(B|A) * Pr(A)) / Pr(B). Why is this even useful? (A worked example follows this list.)
- More generally, problems which do not have a linearly separable projection will prove difficult for SVMs. For example, a checkerboard would be easy for a nearest-neighbor algorithm but should be difficult for an SVM. (A sketch comparing the two follows below.)
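On the usefulness of Bayes' Law, a standard worked example (the numbers are illustrative, not from the course): suppose a disease affects 1% of people, a test detects 90% of true cases, and it false-alarms on 9% of healthy people. With A = "has the disease" and B = "tests positive":

Pr(A|B) = (0.9 * 0.01) / (0.9 * 0.01 + 0.09 * 0.99) = 0.009 / 0.0981 ≈ 0.09

The law converts the quantity you can actually measure (how often sick patients test positive) into the quantity you actually want (how likely a positive patient is to be sick), and the answer of roughly 9% is far from intuitive.

And a minimal sketch of the checkerboard claim, assuming scikit-learn (my own choice of library, labeling scheme, and parameters): a linear-kernel SVM has no useful separating projection to find, while nearest neighbors exploits the purely local structure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Label random points on a 4x4 checkerboard by the parity of their cell.
rng = np.random.default_rng(0)
X = rng.uniform(0, 4, size=(2000, 2))
y = X.astype(int).sum(axis=1) % 2

X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]
for model in (SVC(kernel="linear"), KNeighborsClassifier(n_neighbors=5)):
    model.fit(X_train, y_train)
    # Expect roughly chance (~0.5) for the linear SVM, near 1.0 for k-NN.
    print(type(model).__name__, model.score(X_test, y_test))
```

(An SVM with a kernel matched to the structure, e.g. a well-tuned RBF, would do far better; the recast space has to fit the problem.)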