CSC 423 ARTIFICIAL INTELLIGENCE Introduction

Introduction (cont)
• College:
  – e-mail: theo_christopher@hotmail.com
  – web address: www.ctleuro.ac.cy
• Personal:
  – web address: www.theodoroschristophides.yolasite.com

Introduction (cont)
• Syllabus
• Books
• Library
• Lab
• Attendance & admittance to class
• Exams and tests
• Classroom behavior

Definition
Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert systems. The element that the fields of AI have in common is the creation of machines that can "think". In other words, AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

An Introduction to Artificial Intelligence (Cont)
To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and seeing relationships? And what about perception and comprehension? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent machines. One of the most challenging tasks facing experts is building systems that mimic the behavior of the human brain, which is made up of billions of neurons and is arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.

Turing Test
• English mathematician Alan Turing proposed in 1950 the following criterion for the intelligence of a machine: a human interrogator cannot tell whether s/he is communicating with another human or with a computer using text messages (a minimal sketch of this protocol follows below).
• It is an example of a test of acting human-like.
• In the so-called total Turing test the machine also has to be able to observe and manipulate its physical environment.
• Time-limited Turing test competitions are organized annually.
• The best performance against knowledgeable organizers is recorded by programs that try to fool the interrogator.
• Human experts have the highest probability of being judged as non-human.
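As a rough illustration of the protocol (and not of any particular competition's rules), here is a Python sketch of a time-limited, text-only imitation game. The machine_reply and human_reply functions are hypothetical placeholders, not real chatbot code.

```python
import random
import time

def machine_reply(message: str) -> str:
    """Hypothetical placeholder for the program under test."""
    return "That's an interesting question. What do you think?"

def human_reply(message: str) -> str:
    """Placeholder for the hidden human confederate."""
    return input(f"(hidden human) {message}\n> ")

def imitation_game(questions, time_limit_s=300):
    """Time-limited, text-only Turing test: the interrogator sees two channels."""
    # Randomly assign the machine and the human to channels A and B.
    parties = [machine_reply, human_reply]
    random.shuffle(parties)
    channels = dict(zip("AB", parties))

    start = time.time()
    for question in questions:
        if time.time() - start > time_limit_s:
            break
        # Only text passes between the interrogator and the hidden parties.
        for name, reply in channels.items():
            print(f"{name}: {reply(question)}")

    guess = input("Which channel is the machine, A or B? ").strip().upper()
    machine_channel = "A" if channels["A"] is machine_reply else "B"
    return guess != machine_channel   # True = the interrogator was fooled this session
```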

Importance of Artificial Intelligence
• Artificial Intelligence has come a long way from its early roots, driven by dedicated researchers. The beginnings of AI reach back before electronics, to philosophers and mathematicians such as Boole and others theorizing on principles that were used as the foundation of AI logic. AI really began to intrigue researchers with the invention of the computer in 1943. The technology was finally available, or so it seemed, to simulate intelligent behavior. Over the next four decades, despite many stumbling blocks, AI has grown from a dozen researchers to thousands of engineers and specialists, and from programs capable of playing checkers to systems designed to diagnose disease.
• AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well as computer interfaces and word processors, owe their existence to research into artificial intelligence. The theory and insights brought about by AI research will set the trend in the future of computing. The products available today are only bits and pieces of what are soon to follow, but they are a movement towards the future of artificial intelligence. The advancements in the quest for artificial intelligence have affected, and will continue to affect, our jobs, our education, and our lives.

The History of AI
• One can consider McCulloch and Pitts (1943) to be the first AI publication.
• It demonstrates how a network of simple computation units, neurons, can be used to compute the logical connectives (and, or, not, etc.); a minimal sketch follows below.
• It is shown that all computable functions can be computed using a neural network.
• It is suggested that these networks may be able to learn.
• Hebb (1949) gives a simple updating rule for teaching neural networks.
• Turing (1950) introduces his test, machine learning, genetic algorithms, and reinforcement learning.
• In 1956 John McCarthy organized a meeting of researchers interested in the field, at which the name AI was coined.
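To make the 1943 idea concrete, here is a minimal Python sketch of a McCulloch-Pitts style threshold unit computing the logical connectives; the weights and thresholds are hand-picked for illustration.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (output 1) iff the weighted sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# The basic connectives as single threshold units (hand-picked parameters):
AND = lambda x, y: mp_neuron([x, y], weights=[1, 1], threshold=2)
OR  = lambda x, y: mp_neuron([x, y], weights=[1, 1], threshold=1)
NOT = lambda x:    mp_neuron([x],    weights=[-1],   threshold=0)

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}")
print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```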

The History of AI (Cont)
• From the very beginning the central universities have been CMU, MIT, and Stanford, which are top universities in the field of AI even today.
• McCarthy (1958): the programming language Lisp.
• In the 1950's and 1960's huge leaps forward were made in operating within microworlds (e.g., the blocks world).
• Robotics also went forward: e.g. Shakey from SRI (1969).
• So did research on neural networks (Widrow & Hoff, Rosenblatt's perceptron).
• Eventually, however, it became evident that the success within microworlds does not scale up as such.
• It had been obtained without a deeper understanding of the target problem and by using computationally intensive methods.

The History of AI (Cont)
• Neural networks were wiped out of computer science research for over a decade by Minsky and Papert's proof of the poor expressive power of the perceptron (the XOR function; see the sketch below).
• In the 1970's expert systems were being developed; they gather the deep knowledge of one application field.
• Expert systems achieved better expertise than human experts in many fields and became the first commercial success story of AI.
• Developing expert systems, however, turned out to be meticulous work that cannot really be made automatic.
• Logic programming had its brightest time in the mid 1980's.
• The study of neural networks returned to computer science research in the mid 1980's.
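A quick way to see the limitation Minsky and Papert highlighted: no single linear threshold unit can compute XOR, because XOR is not linearly separable, but a two-layer network can. The weights below are illustrative choices, not taken from the source.

```python
def step(weights, bias, inputs):
    """A single linear threshold unit (perceptron)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# No single unit reproduces XOR, but composing units does:
# XOR(x, y) = AND(OR(x, y), NAND(x, y)).
def xor_two_layer(x, y):
    h_or   = step([1, 1],   -0.5, [x, y])        # hidden unit computing OR
    h_nand = step([-1, -1],  1.5, [x, y])        # hidden unit computing NAND
    return step([1, 1], -1.5, [h_or, h_nand])    # output unit computing AND

print([xor_two_layer(x, y) for x in (0, 1) for y in (0, 1)])   # [0, 1, 1, 0]
```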

The History of AI (Cont)
• The rise of machine learning research also dates back to the 1980's.
• The research on Bayesian networks also started at that time.
• It is maybe the second important commercial success of AI, due in part to the heavy influence of Microsoft.
• Later on these topics have been studied under the label of data mining and knowledge discovery.
• Agents are an important technology in many fields of computing.
• A recent trend is also a move towards analytic research instead of just ad hoc techniques:
  – Theoretical models of machine learning
  – Well-founded methods of planning
  – The new rise of game theory

The History of Artificial Intelligence (More Details)
Timeline of major AI events
Evidence of Artificial Intelligence folklore can be traced back to ancient Egypt, but with the development of the electronic computer in 1941, the technology finally became available to create machine intelligence. The term artificial intelligence was first coined in 1956, at the Dartmouth conference, and since then Artificial Intelligence has expanded because of the theories and principles developed by its dedicated researchers. Through its short modern history, advancement in the fields of AI has been slower than first estimated, but progress continues to be made. From its birth four decades ago, there have been a variety of AI programs, and they have impacted other technological advancements.

The History of Artificial Intelligence (More Details) (Cont)
The Era of the Computer:
• In 1941 an invention revolutionized every aspect of the storage and processing of information. That invention, developed in both the US and Germany, was the electronic computer. The first computers required large, separate air-conditioned rooms and were a programmer's nightmare, involving the separate configuration of thousands of wires just to get a program running.
• The 1949 innovation, the stored-program computer, made the job of entering a program easier, and advancements in computer theory led to computer science and, eventually, artificial intelligence. With the invention of an electronic means of processing data came a medium that made AI possible.

The History of Artificial Intelligence (More Details) (Cont)
The Beginnings of AI:
• Although the computer provided the technology necessary for AI, it was not until the early 1950's that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by measuring the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down (a small simulation follows below). What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This discovery influenced much of the early development of AI.
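Wiener's thermostat example can be sketched in a few lines. The set point and the heating and cooling rates below are invented purely to illustrate the feedback idea: measure, compare with the goal, act.

```python
def thermostat_step(actual_temp, desired_temp, heater_on):
    """One feedback cycle: compare the measurement with the goal and switch the heater."""
    if actual_temp < desired_temp - 0.5:     # too cold: turn the heat up
        return True
    if actual_temp > desired_temp + 0.5:     # too warm: turn the heat down
        return False
    return heater_on                         # close enough: leave it as it is

# Simulate a room that cools toward the outside unless heated (illustrative rates).
temp, heater, desired = 15.0, False, 21.0
for minute in range(60):
    heater = thermostat_step(temp, desired, heater)
    temp += 0.3 if heater else -0.1
print(round(temp, 1))   # hovers near the desired 21.0
```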

The History of Artificial Intelligence (More Details) (Cont)
• In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion (a generic sketch of this kind of search follows below). The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in developing the AI field.
• In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw together the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to New Hampshire for "The Dartmouth summer research project on artificial intelligence." From that point on, because of McCarthy, the field would be known as Artificial Intelligence. Although not a huge success (explain), the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.
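The Logic Theorist's strategy of representing a problem as a tree and expanding the branch most likely to lead to the conclusion is, in modern terms, a heuristic best-first search. The sketch below is a generic illustration of that idea, not a reconstruction of Newell and Simon's program; the toy goal, expand and score functions are invented for the example.

```python
import heapq

def best_first_search(start, is_goal, expand, score):
    """Repeatedly expand the most promising node, as judged by score (lower is better)."""
    frontier = [(score(start), start)]
    visited = {start}
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child in expand(node):                    # the branches of the search tree
            if child not in visited:
                visited.add(child)
                heapq.heappush(frontier, (score(child), child))
    return None                                       # no branch reached the goal

# Toy usage: reach 20 from 1 using the operations "double" and "add 3".
goal = 20
found = best_first_search(
    start=1,
    is_goal=lambda n: n == goal,
    expand=lambda n: [c for c in (n * 2, n + 3) if c <= 100],   # bound keeps the tree finite
    score=lambda n: abs(goal - n),                              # greedy distance-to-goal heuristic
)
print(found)   # 20
```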

The History of Artificial Intelligence (More Details) (Cont)
• In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist did; and second, making systems that could learn by themselves.
• In 1957, the first version of a new program, the General Problem Solver (GPS), was tested. The program was developed by the same pair that developed the Logic Theorist. The GPS was an extension of Wiener's feedback principle and was capable of solving a wider range of common-sense problems. A couple of years after the GPS, IBM contracted a team to research artificial intelligence. Herbert Gelernter spent 3 years working on a program for solving geometry theorems.
• While more programs were being produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development, the LISP language, which is still used today. LISP stands for LISt Processing, and it was soon adopted as the language of choice among most AI developers.

The History of Artificial Intelligence (More Details) (Cont)
• In 1963 MIT received a $2.2 million grant from the United States government to be used in researching Machine-Aided Cognition (artificial intelligence). The grant was made by the Department of Defense's Advanced Research Projects Agency (ARPA) to ensure that the US would stay ahead of the Soviet Union in technological advancements. The project served to increase the pace of development in AI research by drawing computer scientists from around the world and through continued funding.

The History of Artificial Intelligence (More Details) (Cont)
The Multitude of Programs
• The next few years saw a multitude of programs. One notable one was SHRDLU, part of the microworlds project, which consisted of research and programming in small worlds (such as a world with a limited number of geometric shapes). The MIT researchers, headed by Marvin Minsky, demonstrated that when confined to a small subject matter, computer programs could solve spatial problems and logic problems. Other programs which appeared during the late 1960's were STUDENT, which could solve algebra story problems, and SIR, which could understand simple English sentences. The result of these programs was a refinement in language comprehension and logic.

The History of Artificial Intelligence (More Details) (Cont)
• Another advancement in the 1970's was the advent of the expert system. Expert systems predict the probability of a solution under set conditions (a minimal rule-based sketch follows below). Because of the large storage capacity of computers at the time, expert systems had the potential to interpret statistics and to formulate rules. The applications in the marketplace were extensive, and over the course of ten years expert systems had been introduced to forecast the stock market, to aid doctors in diagnosing disease, and to direct miners to promising mineral locations. This was made possible by the systems' ability to store conditional rules and a store of information.
• During the 1970's many new methods in the development of AI were tested, notably Minsky's frames theory. David Marr also proposed new theories about machine vision, for example how it would be possible to distinguish an image based on its shading and basic information on shapes, color, edges, and texture. With analysis of this information, frames of what an image might be could then be referenced. Another development during this time was the PROLOG language, proposed in 1972.
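The mechanism described here, conditional if-then rules applied to a store of facts, can be sketched as a tiny forward-chaining engine. The rules and facts below are invented examples, not drawn from any real diagnostic or prospecting system.

```python
# Each rule: if all of the condition facts hold, conclude a new fact.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ({"magnetic_ore_sample", "region_has_iron_history"}, "promising_iron_site"),
]

def forward_chain(facts, rules):
    """Keep firing rules whose conditions are satisfied until nothing new can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, RULES))
# -> includes 'flu_suspected' and 'refer_to_doctor'
```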

The History of Artificial Intelligence (More Details) (Cont)
• During the 1980's AI was moving at a faster pace and further into the corporate sector. In 1986, US sales of AI-related hardware and software surged to $425 million. Expert systems were in particular demand because of their efficiency. Companies such as Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computers. DuPont, General Motors, and Boeing relied heavily on expert systems. Indeed, to keep up with the demand for computer experts, companies such as Teknowledge and Intellicorp, specializing in creating software to aid in producing expert systems, were formed. Other expert systems were designed to find and correct flaws in existing expert systems.

The History of Artificial Intelligence (More Details) (Cont)
The Transition from Lab to Life
• The impact of computer technology, AI included, was being felt. No longer was computer technology just the province of a select few researchers in laboratories. The personal computer made its debut, along with many technological magazines. Such foundations as the American Association for Artificial Intelligence also started. There was also, with the demand for AI development, a push for researchers to join private companies. Some 150 companies, among them DEC with its AI research group of 700 personnel, spent a total of $1 billion on internal AI groups.
• Other fields of AI also made their way into the marketplace during the 1980's. One in particular was the machine vision field. The work by Minsky and Marr was now the foundation for the cameras and computers on assembly lines performing quality control. Although crude, these systems could distinguish differences in the shapes of objects using black-and-white contrast. By 1985 over a hundred companies offered machine vision systems in the US, and sales totaled $80 million.

The History of Artificial Intelligence (More Details) (Cont)
The Transition from Lab to Life (Cont)
• The 1980's were not totally good for the AI industry. In 1986-87 the demand for AI systems decreased, and the industry lost almost half a billion dollars. Companies such as Teknowledge and Intellicorp together lost more than $6 million, about a third of their total earnings. The large losses convinced many research leaders to cut back funding. Another disappointment was the so-called "smart truck" financed by the Defense Advanced Research Projects Agency. The project's goal was to develop a robot that could perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the Pentagon cut funding for the project.
• Despite these discouraging events, AI slowly recovered. New technology was being developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make decisions under uncertain conditions. Neural networks were also being reconsidered as possible ways of achieving Artificial Intelligence. The 1980's introduced AI to its place in the corporate marketplace and showed that the technology had real-life uses, ensuring it would be a key technology in the 21st century.

The History of Artificial Intelligence (More Details) (Cont)
AI Put to the Test
• The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements. AI has also made the transition to the home. With the popularity of AI in computing growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple using fuzzy logic. With a greater demand for AI-related technology, new advancements are becoming available. Inevitably, Artificial Intelligence has affected, and will continue to affect, our lives.

The State of the Art
Different activities in many subfields:
• Robotic vehicles: Driverless robotic cars are being developed, in closed environments and more and more in daily traffic. Modern cars recognize speed limits, adapt to the traffic, take care of pedestrian safety, can park themselves, have intelligent light systems, wake up the driver, …
• Speech recognition: Many devices and services nowadays understand spoken words (even dialogs).
• Autonomous planning and scheduling: E.g. space missions are nowadays planned autonomously.
• Game playing: Computers defeat human world champions in chess systematically and convincingly.
• Spam fighting: Learning algorithms reliably filter away 80% or 90% of all messages, saving us time for more important tasks.

The State of the Art (Cont)
Different activities in many subfields (Cont):
• Logistics planning: E.g. military operations are helped by automated logistics planning and scheduling for transportation.
• Robotics: Autonomous vacuum cleaners, lawn mowers, toys, and special (hazardous) environment robots are common these days.
• Machine translation: Translation programs based on statistics and machine learning are in ever increasing demand (in particular in the EU).

Approaches
In the quest to create intelligent machines, the field of Artificial Intelligence has split into several different approaches based on opinions about the most promising methods and theories. These rivaling theories have led researchers to one of two basic approaches: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behavior with computer programs.

Neural Networks and Parallel Computation
The human brain is made up of a web of billions of cells called neurons, and understanding its complexities is seen as one of the last frontiers in scientific research. It is the aim of AI researchers who prefer the bottom-up approach to construct electronic circuits that act as neurons do in the human brain. Although much of the working of the brain remains unknown, the complex network of neurons is what gives humans intelligent characteristics. By itself, a neuron is not intelligent, but when grouped together, neurons are able to pass electrical signals through networks: a neuron "fires", passing a signal to the next in the chain.

Neural Networks and Parallel Computation (Cont)
• Research has shown that a signal received by a neuron travels through the dendrite region and down the axon. Separating nerve cells is a gap called the synapse. In order for the signal to be transferred to the next neuron, the signal must be converted from electrical to chemical energy. The signal can then be received by the next neuron and processed.
• Warren McCulloch, after completing medical school at Yale, along with Walter Pitts, a mathematician, proposed a hypothesis to explain the fundamentals of how neural networks made the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. An important part of mathematical logic, binary numbers (represented as 1's and 0's, or true and false) were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.

Neural Networks and Parallel Computation (Cont)
• A century earlier, the true/false nature of binary numbers was theorized in 1854 by George Boole in his postulates concerning the Laws of Thought. Boole's principles make up what is known as Boolean algebra, the collection of logic concerning the AND, OR, and NOT operators. For example, according to the Laws of Thought (for this example consider all apples red):
  – Apples are red -- is True
  – Apples are red AND oranges are purple -- is False
  – Apples are red OR oranges are purple -- is True
  – Apples are red AND oranges are NOT purple -- is also True
  (The same statements are checked in code below.)
• Boole also assumed that the human mind works according to these laws: it performs logical operations that can be reasoned about. Ninety years later, Claude Shannon applied Boole's principles to circuits, the blueprint for electronic computers. Boole's contribution to the future of computing and Artificial Intelligence was immeasurable, and his logic is the basis of neural networks.
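The four statements above can be checked mechanically. Here is the same worked example written as Boolean expressions, keeping the slide's assumptions that all apples are red and that oranges are not purple.

```python
apples_are_red = True        # the slide's assumption
oranges_are_purple = False   # oranges are not purple

print(apples_are_red)                              # True
print(apples_are_red and oranges_are_purple)       # False
print(apples_are_red or oranges_are_purple)        # True
print(apples_are_red and not oranges_are_purple)   # True
```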

Neural Networks and Parallel Computation (Cont)
McCulloch and Pitts, using Boole's principles, wrote a paper on neural network theory. The thesis dealt with how the networks of connected neurons could perform logical operations. It also stated that, on the level of a single neuron, the release or failure to release an impulse was the basis by which the brain makes true/false decisions. Using the idea of feedback theory, they described the loop which exists between the senses ---> brain ---> muscles, and likewise concluded that memory could be defined as the signals in a closed loop of neurons. Although we now know that logic in the brain occurs at a level higher than McCulloch and Pitts theorized, their contributions were important to AI because they showed how the firing of signals between connected neurons could cause the brain to make decisions. McCulloch and Pitts's theory is the basis of artificial neural network theory.

Neural Networks and Parallel Computation (Cont)
• Using this theory, McCulloch and Pitts then designed electronic replicas of neural networks to show how electronic networks could generate logical processes. They also stated that neural networks may, in the future, be able to learn and recognize patterns. The results of their research, together with two of Wiener's books, served to increase enthusiasm, and laboratories of computer-simulated neurons were set up across the country.
• Two major factors have inhibited the development of full-scale neural networks. The first is cost: constructing a machine to simulate neurons was expensive even for networks with as many neurons as an ant has. Although the cost of components has decreased, such a computer would have to grow thousands of times larger to be on the scale of the human brain. The second factor is current computer architecture. The standard von Neumann computer, the architecture of nearly all computers, lacks an adequate number of pathways between components. Researchers are now developing alternate architectures for use with neural networks.

Neural Networks and Parallel Computation (Cont)
• Even with these inhibiting factors, artificial neural networks have presented some impressive results. Frank Rosenblatt, experimenting with computer-simulated networks, was able to create a machine that could mimic the human thinking process and recognize letters. But, with new top-down methods becoming popular, parallel computing was put on hold. Now neural networks are making a return, and some researchers believe that with new computer architectures, parallel computing and the bottom-up theory will be a driving factor in creating artificial intelligence.