
Open only for Humans; Droids and Robots should go for CSE 462 next door ;-)

General Information
• Instructor: Subbarao Kambhampati (Rao)
  – Office hours: T/Th 1-2 pm, BY 560
• TA(s):
  – Will Cushing (office hours TBD)
  – Kartik Talamadupula
  – Tuan Nguyen
  – All Ph.D. students in AI; first two ASU undergrads
• Textbook: Russell & Norvig, 3rd edition
  – You won’t be terribly disadvantaged if you get a used second edition instead.
• Course Homepage: http://rakaposhi.eas.asu.edu/cse471

Grading etc. (Subject to (minor) Changes)
• Projects/Homeworks/Participation (~55%)
  – Projects: approximately 4
    » First project already up! Due 1/17
  – Expected background:
    » Competence in Lisp programming
    » Why Lisp? (Because!)
  – Homeworks will be assigned piecemeal (socket system)
  – Participation:
    » Attendance to and attentiveness in classes is mandatory
    » Participation on the class blog is highly encouraged
    » Do ask questions
• Midterm & final (~45%)

Lisp Programming
• Use Lisp-in-a-box (link from the class page)
  – Easy to install and use. Take the clisp version.
• There are links to 2 Lisp refresher lectures by me
  – Also links to free Lisp books
• You are allowed to use other languages such as Java/Python/C etc., but the partial code snippets will only be provided for Lisp
  – If you plan to take this option, please do talk to the instructor

"It has not been the path for the faint-hearted, for those who prefer leisure over work, or seek only the pleasures of riches and fame."
-Obama, inadvertently talking about CSE 471 in his inaugural address

Course demands..
• ..your undivided attention
  – Attendance mandatory; if you have to miss a class, you should let me know beforehand
• Has been repeatedly seen as a 4-5 credit course
  – (while the instructor just thinks your other courses are 1-2 credit ones)
  – No apologies made for setting high expectations

Grade Anxiety
• All letter grades will be awarded: A+, A, B+, B, B-, C+, C, D, etc.
• No pre-set grade thresholds
• CSE 471 and CSE 598 students will have the same assignments/tests etc. During letter grade assignment, however, they will be compared to their own group.
  – The class is currently ~30 CSE 471 and ~15 CSE 598 (grad) students

Honor Code
• Unless explicitly stated otherwise, all assignments are:
  – Strictly individual effort
  – You are forbidden from trawling the web for answers/code etc.
• Any infraction will be dealt with in the severest terms allowed.

Life with a homepage..
• I will not be giving any handouts
  – All class-related material will be accessible from the web-page
• Homeworks may be specified incrementally (one problem at a time)
• The slides used in the lecture will be available on the class page (along with audio of the lecture)
  – I reserve the right to modify slides right up to the time of the class
  – When printing slides, avoid printing the hidden slides



1946: ENIAC heralds the dawn of Computing

1950: Turing asks the question…
"I propose to consider the question: 'Can machines think?'" --Alan Turing, 1950

1956: A new field is born
"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire."
-Dartmouth AI Project Proposal; J. McCarthy et al.; Aug. 31, 1955
1996: EQP proves that Robbins algebras are all Boolean
"[An Argonne lab program] has come up with a major mathematical proof that would have been called creative if a human had thought of it."
-New York Times, December 1996

1997: HAL 9000 becomes operational in fictional Urbana, Illinois …by now, every intelligent person knew that H-A-L is derived from Heuristic ALgorithmic -Dr. Chandra, 2010: Odyssey Two

1997: Deep Blue ends Human Supremacy in Chess
"I could feel human-level intelligence across the room" -Garry Kasparov, World Chess Champion (human)
"In a few years, even a single victory in a long series of games would be the triumph of human genius."

1999: Remote Agent takes Deep Space 1 on a galactic ride
For two days in May 1999, an AI program called Remote Agent autonomously ran Deep Space 1 (some 60,000 miles from Earth)

2002: Computers start passing Advanced Placement Tests
…a project funded by (Microsoft co-founder) Paul Allen attempts to design a "Digital Aristotle". Its first results involve programs that can pass the High School Advanced Placement Exam in Chemistry…

2005: Cars Drive Themselves
Stanley and three other cars drive themselves over a 132-mile mountain road

2005: Robots play soccer (without headbutting!)
2005 Robot Soccer: Humanoid league

2006: AI Celebrates its Golden Jubilee…

2007: Robots Drive on Urban Roads
11 cars drove themselves on urban streets (for the DARPA Urban Challenge)

2010: Watson defeats Puny Humans in Jeopardy!
And Ken Jennings pledges obeisance to the new Computer Overlords…

2012: Robots (instead of them foreigners) Threaten to Take All Your Jobs
..and thankfully You step in to thwart them by taking CSE 471. Welcome to the Holy War!

Course Overview
• What is AI
  – Intelligent Agents
• Search (Problem Solving Agents)
  – Single agent search [Project 1]
  – Markov Decision Processes
  – Constraint Satisfaction Problems
  – Adversarial (multi-agent) search
• Logical Reasoning [Project 2]
• Reasoning with uncertainty
• Planning [Project 3]
• Learning [Project 4]

Although we will see that all four views have motivations…

Do we want a machine that beats humans in chess, or a machine that thinks like humans while beating humans in chess?
• Deep Blue supposedly DOESN'T think like humans..
• (But what if the machine is trying to "tutor" humans about how to do things?)
• (Bi-directional flow between thinking humanly and thinking rationally)


What if we are writing intelligent agents that interact with humans?
• The COG project
• The robotic caregivers
Mechanical flight became possible only when people decided to stop emulating birds…



• Playing an (entertaining) game of Soccer
• Solving NYT crossword puzzles at close to expert level
• Navigating in deep space
• Learning patterns in databases (datamining…)
• Supporting supply-chain management decisions at Fortune-500 companies
• Learning common sense from the web
• Navigating desert roads
• Navigating urban roads
• Bluffing humans in Poker
• Beating them in Jeopardy…

What AI can do is as important as what it can’t yet do. . • Captcha project

Arms race to defeat Captchas… (using unwitting masses)
• Start opening an email account at Yahoo..
• Clip the captcha test
• Show it to a human trying to get into another site
  – Usually a site that has pretty pictures of persons of the apposite* sex
• Transfer their answer to Yahoo
*Note: Apposite, not opposite. This course is nothing if not open-minded.

1/10
• Lisp Recitation Lecture?
• How many of you installed Lisp-in-a-box?
• TA office hours will be sent by email
• Register for blog!
• Thinking Cap to be released today


It can be argued that all the faculties needed to pass the Turing test are also needed to act rationally to improve success ratio…

Architectures for Intelligent Agents
Wherein we discuss why we need representation, reasoning and learning

Environment … What action next? (The $$$$ question)


Rational != Intentionally avoiding sensing
(rationality is judged relative to the agent's percepts and prior knowledge)
"history" = {s0, s1, s2, …, sn, …}
Performance = f(history)
Expected Performance = E(f(history))
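These definitions can be made concrete with a Monte Carlo sketch: average f over many sampled histories of a stochastic world. Everything below (the one-dimensional world, `p_slip`, the "right" action) is an illustrative assumption, not course code.

```python
import random

def run_episode(policy, steps=10, p_slip=0.2, seed=None):
    """Generate one history {s0, s1, ..., sn} in a toy stochastic world:
    the agent sits on an integer line, and each 'right' action slips
    (leaves the state unchanged) with probability p_slip."""
    rng = random.Random(seed)
    state = 0
    history = [state]
    for _ in range(steps):
        if policy(state) == "right" and rng.random() > p_slip:
            state += 1
        history.append(state)
    return history

def f(history):
    """Performance measure defined on the whole history: final position."""
    return history[-1]

def expected_performance(policy, episodes=10_000):
    """Monte Carlo estimate of Expected Performance = E(f(history))."""
    return sum(f(run_episode(policy, seed=i)) for i in range(episodes)) / episodes

always_right = lambda s: "right"
# Analytically, E(f(history)) = 10 steps * 0.8 success probability = 8.
```

Note that performance is a function of the whole history, not of a single state, and that a rational agent maximizes the expectation, since the stochastic environment makes any single history unpredictable.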



• Percepts/Actions: partial contents of sources, as found by Get, Post, Buy, ..
• Goals: cheapest price on specific goods
• Environment: Internet, congestion, traffic, multiple sources

Qn: How do these affect the complexity of the problem the rational agent faces?
• Lack of percepts makes performance harder
• Lack of actions makes performance harder…
• Complex goals make performance harder
• How about the environment?

What action next? (The $$$$ question)
• Environment: Static vs. Dynamic; Observable vs. Partially Observable; Deterministic vs. Stochastic
• Perception: perfect vs. imperfect
• Action: instantaneous vs. durative
• Goals: full vs. partial satisfaction

• Accessible: the agent can "sense" its environment
  – best: fully accessible; worst: inaccessible; typical: partially accessible
• Deterministic: the actions have predictable effects
  – best: deterministic; worst: non-deterministic; typical: stochastic
• Static: the world evolves only because of the agent's actions
  – best: static; worst: dynamic; typical: quasi-static
• Episodic: the performance of the agent is determined episodically
  – best: episodic; worst: non-episodic
• Discrete: the environment evolves through a discrete set of states
  – best: discrete; worst: continuous; typical: hybrid
• #Agents: number of agents in the environment; are they competing or cooperating?
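These dimensions can be captured as a small record type. This is only a sketch: the field names, the `is_best_case` helper, and the two sample classifications are my assumptions, not from the textbook.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    """One point in the space of environment types discussed above."""
    observability: str    # "full" | "partial" | "none"
    deterministic: bool   # actions have predictable effects?
    static: bool          # world evolves only via the agent's actions?
    episodic: bool        # performance judged episode by episode?
    discrete: bool        # discrete set of states?
    num_agents: int

    def is_best_case(self) -> bool:
        """True only when every dimension is at its 'best' setting."""
        return (self.observability == "full" and self.deterministic
                and self.static and self.episodic and self.discrete
                and self.num_agents == 1)

# Two rough classifications (debatable, as such classifications always are):
chess = EnvironmentProfile("full", True, True, False, True, 2)
urban_driving = EnvironmentProfile("partial", False, False, False, False, 2)
```

Even chess, with its fully observable, deterministic, static board, fails the best case because of the second agent; urban driving misses on every dimension at once, which is why it took until 2007.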

Ways to handle:
• Assume that the environment is more benign than it really is (and hope to recover from the inevitable failures…)
  – Assume determinism when it is stochastic; assume static even though it is dynamic
• Bite the bullet and model the complexity

(Model-based reflex agents) How do we write agent programs for these?

Even basic survival needs state information..
This one already assumes that the "sensors → features" mapping has been done!

(aka Model-based Reflex Agents)
State Estimation
EXPLICIT MODELS OF THE ENVIRONMENT
• Blackbox models
• Factored models
  – Logical models
  – Probabilistic models
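A minimal sketch of the agent-program shape this slide describes: keep an internal state estimate, update it from each percept via a model of the environment, then act by condition-action rules. The class name and the thermostat domain below are illustrative assumptions, not code from the course.

```python
class ModelBasedReflexAgent:
    """Agent = internal state + model-driven state update + rules."""

    def __init__(self, update_state, rules):
        self.state = None                  # internal estimate of the world
        self.update_state = update_state   # (old state, percept) -> new state
        self.rules = rules                 # state -> action

    def __call__(self, percept):
        # State estimation first, then a simple condition-action rule.
        self.state = self.update_state(self.state, percept)
        return self.rules(self.state)

# Toy usage: a thermostat whose "model" remembers the last two readings.
def remember_two(state, temp):
    prev = state[1] if state else temp
    return (prev, temp)

thermostat = ModelBasedReflexAgent(
    remember_two,
    rules=lambda s: "heat" if s[1] < 20 else "idle",
)
```

The key difference from a pure reflex agent is that `self.state` persists between percepts, which is exactly the "state information" that even basic survival needs.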

State Estimation; Search/Planning
It is not always obvious what action to do now given a set of goals.
You woke up in the morning. You want to attend a class. What should your action be?
• Search (find a path from the current state to the goal state; execute the first op)
• Planning (does the same for structured, non-blackbox, state models)
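The "find a path from the current state to the goal state; execute the first op" recipe can be sketched with breadth-first search over a blackbox successor function. The morning-routine state graph below is a made-up illustration of the slide's example.

```python
from collections import deque

def bfs_plan(start, goal, successors):
    """Return a list of actions (ops) leading from start to goal,
    or None if the goal is unreachable. successors(state) yields
    (action, next_state) pairs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Hypothetical "woke up, want to attend class" domain:
graph = {
    "in_bed":  [("get_up", "awake")],
    "awake":   [("dress", "dressed"), ("snooze", "in_bed")],
    "dressed": [("bike_to_campus", "in_class")],
}

plan = bfs_plan("in_bed", "in_class", lambda s: graph.get(s, []))
first_op = plan[0]   # the agent executes only this, then may re-plan
```

Here the search treats each state as a blackbox (just a hashable label); planning, as the slide notes, exploits the internal structure of the states instead.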

Course Overview
• What is AI
  – Intelligent Agents
• Search (Problem Solving Agents)
  – Single agent search [Project 1]
  – Markov Decision Processes
  – Constraint Satisfaction Problems
  – Adversarial (multi-agent) search
• Logical Reasoning [Project 2]
• Reasoning with uncertainty
• Planning [Project 3]
• Learning [Project 4]

How the course topics stack up…
• Representation Mechanisms: Logic (propositional; first-order), Probabilistic logic; Learning the models
• Search: Blind, Informed
• Planning
• Inference: Logical resolution, Bayesian inference

..certain inalienable rights—life, liberty and the pursuit of
? Money ? Daytime TV ? Happiness (utility)
• Decision-Theoretic Planning
• Sequential Decision Problems

Learning
Dimensions:
• What can be learned?
  – Any of the boxes representing the agent's knowledge
  – Action descriptions, effect probabilities, causal relations in the world (and the probabilities of causation), utility models (sort of, through credit assignment), sensor-data interpretation models
• What feedback is available?
  – Supervised, unsupervised, "reinforcement" learning
  – Credit assignment problem
• What prior knowledge is available?
  – "Tabula rasa" (agent's head is a blank slate) or pre-existing knowledge