Nilsson's New Synthesis
Deepak Kumar
Department of Mathematics & Computer Science
Bryn Mawr College
dkumar@brynmawr.edu

Curriculum Descant
From ACM Intelligence Magazine
Volume 9, Number 3&4, Winter 1998
ACM Press


In this installment, I would like to draw your attention to Nils Nilsson's new book, Artificial Intelligence: A New Synthesis (Morgan Kaufmann Publishers, Inc., 1998, ISBN 1-55860-467-7). Here is the blurb describing the book, from its back cover:

Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading researchers. Artificial Intelligence: A New Synthesis takes the reader on a complete tour of this intriguing new world of AI.

While it is not necessarily the intention of this column to provide book reviews, I feel that the new synthesis offered by Nilsson is worth examining, especially for what it lends to a coherent approach to learning the seemingly disparate topics of AI. Several years ago, I remember Nilsson saying,

Just as Los Angeles has been called "twelve suburbs in search of a city," AI might be called "twelve topics in search of a subject."

So, how does one synthesize Search, Representation, Reasoning, Vision, Planning, Learning, Uncertainty, Natural Language, Robotics, Game Playing, Expert Systems, and Lisp/Prolog into a coherent one-semester offering for beginners? This is not the first time a synthesis has been attempted. Russell & Norvig's extensive text takes an agent-oriented approach; it is now considered a classic and is currently evolving into a second edition. In a straw poll of about 80 AI instructors, opinions were gathered about what they considered the "core" fields of AI that should be included in an introductory course. The responses fell into two classes, defined by the nature of the course itself. In programs where the focus was to get students into industry jobs, the faculty suggested the following topics: Knowledge Representation, Search, Rule-Based Systems, Planning/Scheduling, Neural Networks, and Fuzzy Logic. On the other hand, the consensus among faculty at other universities was to include Knowledge Representation, Logic, Search, Game Playing, Reasoning with Uncertainty, Learning, and Natural Language Understanding. Depending upon the focus and the specific topics chosen, one is still confronted with coming up with a coherent story that ties all the topics together. Then there is the issue of the amount of programming, and the choice of programming language.

In his new text, Nilsson takes an "evolutionary" approach. He presents ideas and topics in the context of synthesizing progressively more complex agents: starting with simple stimulus-response agents, moving to agents that plan, then to agents that reason, and finally to agents that communicate. Along the way, one encounters all of the twelve topics mentioned above. Owing to the new synthesis, you will find topics embedded in the most unexpected places. For example, you will learn about production systems in the context of stimulus-response agents, and production systems form a natural lead into threshold logic units, which lead directly into neural networks. Thus, one finds oneself studying neural networks in the third chapter of the book.
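To make that connection concrete, here is a minimal sketch of my own (not taken from the book; the rule, feature names, and weights below are illustrative assumptions) showing how a simple condition-action rule can be recast as a threshold logic unit:

    # A threshold logic unit (TLU): fire if the weighted sum of
    # boolean inputs meets a threshold.
    def tlu(inputs, weights, threshold):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation >= threshold else 0

    # An illustrative production rule, e.g.
    # "IF obstacle-ahead AND NOT obstacle-left THEN turn-left",
    # encoded over the features (obstacle_ahead, obstacle_left):
    weights = [1.0, -1.0]   # evidence for, evidence against
    threshold = 1.0

    print(tlu([1, 0], weights, threshold))  # rule fires -> 1
    print(tlu([1, 1], weights, threshold))  # rule does not fire -> 0

A layer of such units, with weights learned rather than set by hand, is of course a neural network, which is precisely the transition Nilsson makes.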

Similar surprises, for those already familiar with concepts in AI, are to be discovered throughout the book. Another refreshing aspect of the book is its treatment of Learning. Learning is not necessarily treated as a separate topic; rather, it is embedded throughout the book. Hence the discussion of neural networks in the context of reactive agents: agents that learn to respond to stimuli. This approach to presenting learning provides the reader with a good context in which to appreciate the concepts.

The book tries to balance theory with practice, concentrating on the core ideas within each topic. Where necessary, formalisms are presented (with the clarity and crispness we are familiar with from Nilsson's earlier books), supported by proofs as well as algorithms for implementation. One will not find programs or tutorials tied to any specific programming paradigm or language. Depending upon one's own AI background and biases, one may desire more depth on some of the topics. However, the book is designed for a fifteen-week semester. It succeeds in providing a crisp, comprehensive, and up-to-date presentation of the core ideas in AI without swelling into an encyclopedic volume.

There are WWW resources provided by the publisher for instructors wishing to use the book in their courses. These include copies of all figures, solutions to exercises, and an online discussion forum. For specific courses currently using this book, see the following URLs:

at Bryn Mawr College:
http://mainline.brynmawr.edu/Courses/cs372/fall98

at Swarthmore College:
http://www.cs.swarthmore.edu/~meeden/ai/fall98.html

Descants

Fall 1997
Welcome
Inaugural Installment of the new column.
(Deepak Kumar)

Summer 1998
Teaching about Embedded Agents
Using small robots in AI Courses
(Deepak Kumar)

Fall 1998
Robot Competitions as Class Projects
A report of the 1998 AAAI Robot Competition and how robot competitions have been successfully incorporated in the curriculum at Swarthmore College and The University of Arkansas
(Lisa Meeden & Doug Blank)

Winter 1998
Nilsson's New Synthesis
A review of Nils Nilsson's new AI textbook
(Deepak Kumar)

Spring 1999
Pedagogical Dimensions of Game Playing
The role of a game playing programming exercise in an AI course
(Deepak Kumar)

Summer 1999
A New Life for AI Artifacts
A call for the use of AI research software in AI courses
(Deepak Kumar)

Fall 1999
Beyond Introductory AI
The possibility of advanced AI courses in the undergraduate curriculum
(Deepak Kumar)

January 2000
The AI Education Repository
A look back at AAAI's Fall 1994 Symposium on Improving the Instruction of Introductory AI and the resulting educational repository
(Deepak Kumar)

Spring 2000
Interdisciplinary AI
A challenge to AI instructors for designing a truly interdisciplinary AI course
(Richard Wyatt)

Summer 2000
Teaching "New AI"
Authors of a new text (and a new take) on AI present their case
(Rolf Pfeifer)

Fall 2000
Ethical and Social Implications of AI: Stories and Plays
Descriptions of thought-provoking stories and plays that raise ethical and social issues concerning the use of AI
(Richard Epstein)

January 2001
How much programming? What kind?
A discussion on the kinds of programming exercises in AI courses
(Deepak Kumar)

Spring 2001
Predisciplinary AI
A follow-up to Richard Wyatt's column (above) and a proposal for a freshman-level course on AI
(Deepak Kumar)

Spring 2001
Machine Learning for the Masses
Machine Learning comes of age in undergraduate AI courses
(Clare Congdon)


About Curriculum Descant
Curriculum Descant has been a regular column in ACM's Intelligence magazine (formerly published as ACM SIGART's Bulletin). The column is edited by Deepak Kumar and features short essays on any topic relating to the teaching of AI, from anyone willing to contribute. If you would like to contribute an essay, please contact Deepak Kumar.