# Forward and Backward Chaining


## Rule-Based Systems

- Instead of representing knowledge in a relatively declarative, static way (as a collection of things that are true), rule-based systems represent knowledge as a set of rules that tell you what you should do, or what you could conclude, in different situations.
- A rule-based system consists of a set of IF-THEN rules, a set of facts, and an interpreter controlling the application of the rules, given the facts.

## Two Broad Kinds of Rule System

- Forward chaining systems
- Backward chaining systems

Forward chaining is a data-driven method of deriving a particular goal from a given knowledge base and set of inference rules. Inference rules are applied by matching facts to the antecedents of the consequence relations in the knowledge base.

## Forward Chaining

- In a forward chaining system you start with the initial facts and keep using the rules to draw new conclusions (or take certain actions) given those facts.
- Forward chaining systems are primarily data-driven.
- Inference rules are applied successively to elements of the knowledge base until the goal is reached.

- A search control method is needed to select which element(s) of the knowledge base to apply the inference rules to at any point in the deduction.
- The facts in the system are represented in a working memory, which is continually updated.

- Rules in the system represent possible actions to take when specified conditions hold on items in the working memory; they are therefore sometimes called condition-action rules.
- The conditions are usually patterns that must match items in the working memory.

## Forward Chaining Systems

- The actions usually involve adding or deleting items from the working memory.
- The interpreter controls the application of the rules, given the working memory, thus controlling the system's activity.
- It is based on a cycle of activity sometimes known as a recognize-act cycle:
  - The system first checks to find all the rules whose conditions hold.
  - It then selects one and performs the actions in the action part of the rule.
  - The selection of a rule to fire is based on fixed strategies, known as conflict resolution strategies.

- The actions result in a new working memory, and the cycle begins again.
- This cycle is repeated until either no rules fire or some specified goal state is satisfied.
- Rule-based systems vary greatly in their details and syntax, so the following examples are only illustrative.
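The recognize-act cycle described above can be sketched in a few lines of Python. The rule representation (a set of condition facts paired with one conclusion fact) and the first-match conflict resolution strategy are assumptions of this sketch, not the only possibilities:

```python
# Minimal recognize-act cycle for a propositional forward-chaining system.
# A rule is a (conditions, conclusion) pair; conflict resolution here is
# simply "fire the first rule (in listed order) that can add a new fact".

def forward_chain(rules, facts, goal=None):
    facts = set(facts)                      # the working memory
    while True:
        # Recognize: collect all rules whose conditions hold and whose
        # conclusion is not already in working memory (the conflict set).
        conflict_set = [(conds, concl) for conds, concl in rules
                        if conds <= facts and concl not in facts]
        if not conflict_set:
            return facts                    # no rule fires: quiescence
        # Act: fire one rule (first-match conflict resolution strategy).
        _, concl = conflict_set[0]
        facts.add(concl)
        if goal is not None and goal in facts:
            return facts                    # specified goal state satisfied

rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"chirps", "sings"}, "canary"),
    ({"frog"}, "green"),
    ({"canary"}, "yellow"),
]
print(forward_chain(rules, {"croaks", "eats flies"}, goal="green"))
```

A real production system would use more sophisticated conflict resolution (recency, specificity, rule priorities); first-match is the simplest strategy that makes the loop deterministic.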

## Forward Chaining Example

Knowledge base:

- If [X croaks and eats flies] Then [X is a frog]
- If [X chirps and sings] Then [X is a canary]
- If [X is a frog] Then [X is colored green]
- If [X is a canary] Then [X is colored yellow]
- [Fritz croaks and eats flies]

Goal:

- [Fritz is colored Y]?

The derivation proceeds as follows (example from CPSC 433 Artificial Intelligence):

1. The fact [Fritz croaks and eats flies] matches the antecedent of If [X croaks and eats flies] Then [X is a frog], binding X = Fritz. The rule fires, and [Fritz is a frog] is added to the working memory.
2. The new fact [Fritz is a frog] matches the antecedent of If [X is a frog] Then [X is colored green]. The rule fires, and [Fritz is colored green] is added to the working memory.
3. Neither canary rule ever fires, since [Fritz chirps and sings] is not in the working memory. The goal [Fritz is colored Y] now matches [Fritz is colored green], so Y = green.
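The Fritz derivation can be reproduced in code. This sketch represents facts as strings and handles the single placeholder X by suffix matching, an assumption made to keep the example short (a real system would use full pattern matching or unification):

```python
# Forward chaining over the Fritz knowledge base. Each rule is an
# (antecedent, consequent) pair of patterns; "X" is the one variable,
# bound by matching the rest of the pattern as a suffix of a fact.

RULES = [
    ("X croaks and eats flies", "X is a frog"),
    ("X chirps and sings",      "X is a canary"),
    ("X is a frog",             "X is colored green"),
    ("X is a canary",           "X is colored yellow"),
]

def match(pattern, fact):
    """Return the binding for X if `fact` matches `pattern`, else None."""
    suffix = pattern[1:]                  # every pattern starts with "X"
    if fact.endswith(suffix):
        return fact[: len(fact) - len(suffix)]
    return None

def forward_chain(facts):
    facts = set(facts)                    # the working memory
    changed = True
    while changed:                        # recognize-act cycle
        changed = False
        for ante, cons in RULES:
            for fact in list(facts):
                x = match(ante, fact)
                new = cons.replace("X", x, 1) if x else None
                if new and new not in facts:
                    print(f"fired: {ante} -> {cons}  (X = {x})")
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain({"Fritz croaks and eats flies"})
print("Fritz is colored green" in derived)   # goal [Fritz is colored Y]: Y = green
```

Only the frog rules fire: the canary branch is never entered because no chirping fact is ever in the working memory.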

## Backward Chaining

- Backward chaining is a goal-driven method of deriving a particular goal from a given knowledge base and set of inference rules.
- Inference rules are applied by matching the goal of the search to the consequents of the relations stored in the knowledge base.

- When such a relation is found, the antecedent of the relation is added to the list of goals (and not to the knowledge base, as is done in forward chaining).
- The search proceeds in this manner until a goal can be matched against a fact in the knowledge base.

- As with forward chaining, a search control method is needed to select which goals will be matched against which consequence relations from the knowledge base.
- Backward chaining systems are goal-driven.

## Backward Chaining Systems

- In a backward chaining system you start with some hypothesis (or goal) you are trying to prove, and keep looking for rules that would allow you to conclude that hypothesis, perhaps setting new subgoals to prove as you go.

- So far we have looked at how rule-based systems can be used to draw new conclusions from existing data, adding these conclusions to a working memory.
- This approach is most useful when you know all the initial facts but don't have much idea what the conclusion might be.
- If you DO know what the conclusion might be, or have some specific hypothesis to test, forward chaining systems may be inefficient.

- Note that a backward chaining system does NOT need to update a working memory.
- Instead it needs to keep track of what goals it needs to prove its main hypothesis.
- Let's take an example.
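As one possible example, here is backward chaining over the same Fritz knowledge base: start from the goal, match it against rule consequents, and recurse on the antecedents as subgoals until a subgoal matches a known fact. The single-placeholder suffix matching is again an assumption of this sketch, and only ground goals (no variable Y) are handled:

```python
# Backward chaining: prove a goal by matching it against rule consequents
# and recursively proving the antecedents, keeping a goal list implicitly
# on the call stack rather than updating a working memory.

FACTS = {"Fritz croaks and eats flies"}
RULES = [
    ("X croaks and eats flies", "X is a frog"),
    ("X chirps and sings",      "X is a canary"),
    ("X is a frog",             "X is colored green"),
    ("X is a canary",           "X is colored yellow"),
]

def prove(goal, depth=0):
    print("  " * depth + "goal:", goal)
    if goal in FACTS:                     # goal matches a known fact: done
        return True
    for ante, cons in RULES:
        suffix = cons[1:]                 # consequent pattern minus "X"
        if goal.endswith(suffix):
            x = goal[: len(goal) - len(suffix)]
            subgoal = ante.replace("X", x, 1)   # antecedent becomes a subgoal
            if prove(subgoal, depth + 1):
                return True
    return False

print(prove("Fritz is colored green"))
```

The trace visits only goals relevant to the hypothesis: [Fritz is colored green] leads to the subgoal [Fritz is a frog], which leads to the fact [Fritz croaks and eats flies]; the canary rules are considered but never match.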

## Forward Chaining vs. Backward Chaining

- Data-driven, forward chaining:
  - Starts with the initial given data and searches for the goal.
  - At each iteration, the new conclusion (RHS) becomes the pattern to look for next.
  - The working memory contains true sentences (RHSs).
  - Stops when the goal is reached.
- Goal-driven is the reverse:
  - Starts with the goal and searches for the initial given data.
  - At each iteration, the new premises (LHS) become the new subgoals, the patterns to look for next.
  - The working memory contains subgoals (LHSs) to be satisfied.
  - Stops when all the premises (subgoals) of fired productions are reached.
- The sense of the arrow is in reality reversed.
- Both repeatedly pick the next rule to fire.

A rule has the form: condition → action (premise → conclusion).

|                   | Forward chaining | Backward chaining        |
|-------------------|------------------|--------------------------|
| Starts with       | premise          | conclusion               |
| Searches for      | conclusion       | premise                  |
| Working memory    | true statements  | subgoals to be proved    |
| Stopping criteria | goal is reached  | initial data are reached |
| Style             | data-driven      | goal-driven              |

## Combining Forward and Backward Chaining

- Begin with the data and search forward until the number of states becomes unmanageably large.
- Then switch to goal-directed search, using subgoals to guide state selection.
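One way to realize this combination is sketched below: run forward chaining only until the working memory grows past a budget, then switch to backward chaining from the goal against whatever has been derived. The budget value and the toy propositional rule chain are assumptions chosen for illustration:

```python
# Combined chaining: a bounded forward phase followed by a backward phase.
# The forward phase fires one rule per cycle and stops once working
# memory exceeds `budget`; the backward phase proves remaining subgoals.

RULES = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "d"), ({"d"}, "goal")]

def bidirectional(facts, goal, budget=3):
    facts = set(facts)
    # Forward phase: stop when working memory grows unmanageably large.
    while len(facts) <= budget:
        fired = next(((conds, concl) for conds, concl in RULES
                      if conds <= facts and concl not in facts), None)
        if fired is None:
            break                         # quiescence before the budget
        facts.add(fired[1])
    # Backward phase: recurse from the goal down to the derived facts.
    def prove(g):
        if g in facts:
            return True
        return any(all(prove(c) for c in conds)
                   for conds, concl in RULES if concl == g)
    return prove(goal)

print(bidirectional({"a"}, "goal"))
```

With budget=3 the forward phase derives b, c, and d, then stops; the backward phase only has to reduce "goal" to the already-derived "d".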

## Strategies

- Conflict resolution strategies may help in getting reasonable behavior from a forward chaining system, but the most important thing is how we write the rules.
- Rules should be carefully constructed, with the preconditions specifying as precisely as possible when different rules should fire.
- Otherwise we will have little idea or control of what will happen.

## Conclusion

- Forward chaining is data-driven: conditions are matched first, then actions are taken; the working memory contains true statements describing the current environment.
- Backward chaining is goal-driven: actions (conclusions) come first, then conditions become subgoals; the working memory contains subgoals to be shown true.

## Forward vs. Backward Reasoning

- Whether you use forward or backward reasoning to solve a problem depends on the properties of your rule set and initial facts.
- Sometimes, if you have some particular goal (to test some hypothesis), then backward chaining will be much more efficient, as you avoid drawing conclusions from irrelevant facts.

- However, sometimes backward chaining can be very wasteful: there may be many possible ways of trying to prove something, and you may have to try almost all of them before you find one that works.
- Forward chaining may be better if you have lots of things you want to prove, when you have a small set of initial facts, and when there tend to be lots of different rules that allow you to draw the same conclusion.
- Backward chaining may be better if you are trying to prove a single fact, given a large set of initial facts, and where, if you used forward chaining, lots of rules would be eligible to fire in any cycle.
