Lecture 2 Extended

Fuzzy Logic
- Fuzzy Logic was initiated in 1965 by Dr. Lotfi A. Zadeh, professor of computer science at the University of California, Berkeley.
- Basically, Fuzzy Logic is a multivalued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no, high/low, etc.
- Fuzzy Logic starts with and builds on a set of user-supplied human-language rules.
- Fuzzy systems convert these rules to their mathematical equivalents.
- This simplifies the job of the system designer and the computer, and results in much more accurate representations of the way systems behave in the real world.
- Fuzzy Logic provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information.

Fuzzy Logic
- What is Fuzzy Logic? Fuzzy Logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth, i.e. truth values between "completely true" and "completely false".

Definitions
- Universe of Discourse: the range of all possible values for an input to a fuzzy system.
- Fuzzy Set: any set that allows its members to have different grades of membership (given by a membership function) in the interval [0, 1].
- Support: the support of a fuzzy set F is the crisp set of all points in the Universe of Discourse U at which the membership function of F is non-zero.
- Crossover point: the element of U at which the membership function of a fuzzy set is 0.5.
- Fuzzy Singleton: a fuzzy set whose support is a single point in U with a membership value of one.

Fuzzy Logic
- How does Fuzzy Logic work?
  - In Fuzzy Logic, unlike standard conditional logic, the truth of any statement is a matter of degree. (e.g. How cold is it? How high shall we set the heat?)
  - The degree to which any fuzzy statement is true is denoted by a value between 0 and 1.
  - Fuzzy Logic needs to be able to manipulate degrees of "maybe" in addition to true and false.
- Example (U: universe of discourse, i.e. the set of people; TALL: a fuzzy subset of U):
  tall(x) = 0                            if height(x) < 5 ft
  tall(x) = (height(x) - 5 ft) / 2 ft    if 5 ft <= height(x) <= 7 ft
  tall(x) = 1                            if height(x) > 7 ft
  (Figure: the membership function rises linearly from 0 at 5 ft to 1 at 7 ft, crossing 0.5 at 6 ft.)
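
As an illustration (not part of the original slides), the TALL membership function above can be written directly in Python; the 5 ft and 7 ft breakpoints are the ones given on the slide:

    def tall(height_ft: float) -> float:
        """Degree of membership in the fuzzy set TALL for a height in feet."""
        if height_ft < 5.0:
            return 0.0
        if height_ft <= 7.0:
            return (height_ft - 5.0) / 2.0   # rises linearly from 0 at 5 ft to 1 at 7 ft
        return 1.0

    # Example: Drew is 5'9" = 5.75 ft
    print(round(tall(5.75), 2))   # 0.38, matching the table on the next slide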

Fuzzy Logic (contd.)
- Given the above definition, here are some example values:

  Person   Height   Degree of tallness
  Billy    3'2"     0.00
  Yoke     5'5"     0.21
  Drew     5'9"     0.38
  Erik     5'10"    0.42
  Mark     6'1"     0.54
  Kareem   7'2"     1.00

- From this definition we can say that the degree of truth of the statement "Drew is TALL" is 0.38.

Fuzzy Sets
- In classical mathematics we are familiar with what we call crisp sets. Here the characteristic function assigns a number 1 or 0 to each element, depending on whether the element is in the subset A or not.
- This concept is sufficient for many areas of application, but it lacks flexibility for some applications, such as the classification of remotely sensed data.
- The membership function is a graphical representation of the magnitude of participation of each input. It associates a weighting with each of the inputs that are processed.
  (Figures: the characteristic function of a crisp set A taking only the values 0 and 1, alongside a fuzzy membership function taking intermediate values such as 0.4, 0.5, and 0.8.)

Fuzzy Sets (contd.)
- Membership functions representing three fuzzy sets for the variable "height".

Fuzzy Sets (contd.)
- Standard membership functions:
  - Single-valued (singleton)
  - Triangular
  - Trapezoidal
  - S-function (sigmoid function)
  (Figure: different types of membership functions.)

Operations on Fuzzy Sets
- Fuzzy AND:

Operations on Fuzzy Sets (contd.)
- Fuzzy OR:

Operations on Fuzzy Sets (contd.)
- Fuzzy NOT:
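
The formulas for these three operations are not reproduced above. Assuming the standard (Zadeh) operators normally presented here (minimum for AND, maximum for OR, complement for NOT), a minimal Python sketch is:

    # Standard (Zadeh) fuzzy set operations on membership degrees in [0, 1]
    def fuzzy_and(mu_a: float, mu_b: float) -> float:
        return min(mu_a, mu_b)      # intersection: minimum of the memberships

    def fuzzy_or(mu_a: float, mu_b: float) -> float:
        return max(mu_a, mu_b)      # union: maximum of the memberships

    def fuzzy_not(mu_a: float) -> float:
        return 1.0 - mu_a           # complement

    # Example: "x is TALL AND x is HEAVY" with tall(x) = 0.38, heavy(x) = 0.7 gives 0.38
    print(fuzzy_and(0.38, 0.7), fuzzy_or(0.38, 0.7), fuzzy_not(0.38))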

Properties
The following rules, which are common in classical set theory, also apply to Fuzzy Logic:
- De Morgan's Law: ¬(A ∩ B) = ¬A ∪ ¬B and ¬(A ∪ B) = ¬A ∩ ¬B
- Associativity: (A ∩ B) ∩ C = A ∩ (B ∩ C) and (A ∪ B) ∪ C = A ∪ (B ∪ C)
- Commutativity: A ∩ B = B ∩ A and A ∪ B = B ∪ A
- Distributivity: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) and A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)

Probability vs Fuzzy Logic

  Probability Measure                  Membership Function
  Before an event happens              After it has happened
  Measure theory                       Set theory
  Domain is 2^U (a Boolean algebra)    Domain is [0, 1]^U (cannot be a Boolean algebra)

Fuzzy Systems
  Inputs -> Fuzzification -> Fuzzy Inference -> Defuzzification -> Outputs

Fuzzy Classification
- Fuzzy classifiers are one application of fuzzy theory.
- Expert knowledge is used and can be expressed in a very natural way using linguistic variables, which are described by fuzzy sets.
- For example, consider two variables, entropy H and the α-angle. These variables can be modeled with fuzzy sets such as Very Low, Low, Medium, and High over H, and Low, Medium, and High over α.
  (Figure: membership functions for H and α.)

Fuzzy Classification (contd.)
- In fuzzy classification, a sample can have membership in many different classes to different degrees. Typically, the membership values are constrained so that all of the membership values for a particular sample sum to 1.
- Now the expert knowledge for these variables can be formulated as a rule such as:
  IF Entropy is high AND α is high THEN Class = class 4
- The rules can be combined in a table, called a rule base.

Fuzzy Classification (contd.)

  Entropy    α        Class
  Very low   Low      Class 1
  Low        Medium   Class 2
  Medium     High     Class 3
  High       High     Class 4

  Example of a fuzzy rule base
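
As a sketch of how such a rule base can be evaluated (the membership degrees of the sample below are invented for illustration; the rules are the ones from the table), each rule's firing strength is the fuzzy AND, i.e. the minimum, of its antecedent memberships:

    # Hypothetical membership degrees of one sample in the fuzzy sets over H and α
    entropy_membership = {"very low": 0.0, "low": 0.1, "medium": 0.6, "high": 0.3}
    alpha_membership   = {"low": 0.2, "medium": 0.7, "high": 0.1}

    # Rule base from the table: (entropy term, alpha term) -> class
    rules = [
        ("very low", "low",    "class 1"),
        ("low",      "medium", "class 2"),
        ("medium",   "high",   "class 3"),
        ("high",     "high",   "class 4"),
    ]

    # Firing strength of each rule = min of the antecedent memberships (fuzzy AND);
    # rules pointing to the same class are combined with max (fuzzy OR).
    strengths = {}
    for h_term, a_term, cls in rules:
        s = min(entropy_membership[h_term], alpha_membership[a_term])
        strengths[cls] = max(strengths.get(cls, 0.0), s)

    print(strengths)   # {'class 1': 0.0, 'class 2': 0.1, 'class 3': 0.1, 'class 4': 0.1}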

Fuzzy Classification (contd.)
- Linguistic rules describing the control system consist of two parts: an antecedent block (between the IF and THEN) and a consequent block (following THEN).
- Depending on the system, it may not be necessary to evaluate every possible input combination, since some may rarely or never occur.
- Optimum evaluation is usually done by experienced operators.
- The inputs are combined logically using the AND operator to produce output response values for all expected inputs. The active conclusions are then combined into a logical sum for each membership function.
- Finally, everything that remains is combined in a defuzzification process to produce the crisp output.

Fuzzy Classification (contd.)
- To obtain a crisp decision from this fuzzy output, we have to defuzzify the fuzzy set, i.e. choose one representative value.
- There are several methods of defuzzification; one of them is to take the center of gravity of the fuzzy set. This is a widely used method for fuzzy sets.
  (Figure: defuzzification using the center of gravity approach, yielding the final output.)
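
A minimal sketch of the center-of-gravity (centroid) defuzzification just described; the sample grid and membership values below are illustrative, not taken from the slide:

    def centroid_defuzzify(xs, mus):
        """Center of gravity of a fuzzy set sampled at points xs with memberships mus."""
        num = sum(x * mu for x, mu in zip(xs, mus))
        den = sum(mus)
        return num / den if den > 0 else 0.0

    # Illustrative aggregated output fuzzy set sampled on a grid
    xs  = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    mus = [0, 0.1, 0.3, 0.6, 0.8, 0.8, 0.6, 0.3, 0.1, 0, 0]
    print(centroid_defuzzify(xs, mus))   # crisp output near the peak, 4.5 here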

Fuzzy Classification (contd.)
- Another example.

Neural Networks
Yan Ke, China Jiliang University

Outline
- Introduction
- Background
- How the human brain works
- A Neuron Model
- A Simple Neuron
- Pattern Recognition example
- A Complicated Perceptron

Outline Continued
- Different types of Neural Networks
- Network Layers and Structure
- Training a Neural Network
- Learning process
- Neural Networks in use
- Use of Neural Networks in C.A.I.S. project
- Conclusion

Introduction
- What are Neural Networks?
  - Neural networks are a new method of programming computers.
  - They are exceptionally good at performing pattern recognition and other tasks that are very difficult to program using conventional techniques.
  - Programs that employ neural nets are also capable of learning on their own and adapting to changing conditions.

Background
- An Artificial Neural Network (ANN) is an information processing paradigm inspired by biological nervous systems, such as the human brain's information processing mechanism.
- The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
- NNs, like people, learn by example. An NN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of NNs as well.

How the Human Brain Learns
- In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.
- The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.
- At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.

A Neuron Model
- When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon.
- Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
- We construct artificial neural networks by first trying to deduce the essential features of neurons and their interconnections. We then typically program a computer to simulate these features.

A Simple Neuron
- An artificial neuron is a device with many inputs and one output.
- The neuron has two modes of operation: the training mode and the using mode.

A Simple Neuron (cont.)
- In the training mode, the neuron can be trained to fire (or not) for particular input patterns.
- In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.
- The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern; it relates to all the input patterns, not only the ones on which the node was previously trained.

Pattern Recognition
- An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward neural network that has been trained accordingly.
- During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern.
- The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to the taught input pattern that is least different from the given pattern.

Pattern Recognition (cont.)
- Suppose a network is trained to recognize the patterns T and H. The associated output patterns are all black and all white, respectively, as shown in the accompanying figure.

Pattern Recognition (cont.)
- Since the input pattern looks more like a 'T', when the network classifies it, it sees the input as closely resembling 'T' and outputs the pattern that represents a 'T'.

Pattern Recognition (cont.)
- The input pattern here closely resembles 'H', with a slight difference. The network in this case classifies it as an 'H' and outputs the pattern representing an 'H'.

Pattern Recognition (cont.)
- Here the top row is 2 errors away from a 'T' and 3 errors away from an 'H', so the top output is black. The middle row is 1 error away from both 'T' and 'H', so the output is random. The bottom row is 1 error away from 'T' and 2 away from 'H', therefore the output is black.
- Since the input resembles a 'T' more than an 'H', the output of the network is in favor of a 'T'.

A Complicated Perceptron
- A more sophisticated neuron is known as the McCulloch and Pitts model (MCP).
- The difference is that in the MCP model, the inputs are weighted, and the effect that each input has on decision making depends on the weight of the particular input.
- The weight of an input is a number which is multiplied with the input to give the weighted input.

A Complicated Perceptron (cont.)
- The weighted inputs are then added together, and if they exceed a pre-set threshold value, the perceptron/neuron fires. Otherwise it will not fire, and the inputs tied to that perceptron will not have any effect on the decision making.
- In mathematical terms, the neuron fires if and only if:
  X1*W1 + X2*W2 + X3*W3 + ... > T
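
A minimal sketch of the firing condition just stated (the weights, inputs, and threshold below are illustrative, not from the slides):

    def mcp_fires(inputs, weights, threshold):
        """McCulloch-Pitts style neuron: fire iff the weighted input sum exceeds the threshold."""
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return weighted_sum > threshold

    # Illustrative values: 0.7*1 + 0.4*0 + 0.2*1 = 0.9 > 0.5, so the neuron fires
    print(mcp_fires(inputs=[1, 0, 1], weights=[0.7, 0.4, 0.2], threshold=0.5))   # True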

A Complicated Perceptron
- The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold.
- Various algorithms exist that cause the neuron to 'adapt'; the most widely used are the delta rule and back-error propagation.

Different Types of Neural Networks
- Feed-forward networks
  - Feed-forward NNs allow signals to travel one way only, from input to output. There is no feedback (loops), i.e. the output of any layer does not affect that same layer.
  - Feed-forward NNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition.
  - This type of organization is also referred to as bottom-up or top-down.

Continued
- Feedback networks
  - Feedback networks can have signals traveling in both directions by introducing loops in the network.
  - Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.
  - Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organizations.

Diagram of an NN
(Fig: A simple neural network)

Network Layers
- Input Layer: the activity of the input units represents the raw information that is fed into the network.
- Hidden Layer: the activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
- Output Layer: the behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

Continued
- This simple type of network is interesting because the hidden units are free to construct their own representations of the input.
- The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.

Network Structure
- The number of layers and the number of neurons depend on the specific task. In practice this issue is solved by trial and error.
- Two types of adaptive algorithms can be used:
  - start from a large network and successively remove neurons and links until network performance degrades;
  - begin with a small network and introduce new neurons until performance is satisfactory.

Network Parameters
- How are the weights initialized?
- How many hidden layers and how many neurons?
- How many examples in the training set?

Weights
- In general, initial weights are randomly chosen, with typical values between -1.0 and 1.0 or -0.5 and 0.5.
- There are two types of NNs:
  - Fixed networks, where the weights are fixed;
  - Adaptive networks, where the weights are changed to reduce prediction error.
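
A small sketch of this initialization (using the -0.5 to 0.5 range mentioned above; the number of weights is illustrative):

    import random

    def init_weights(n_weights, low=-0.5, high=0.5):
        # Randomly choose each initial weight in [low, high]
        return [random.uniform(low, high) for _ in range(n_weights)]

    weights = init_weights(10)   # e.g. 10 weights for a small network
    print(weights)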

Size of Training Data
- Rule of thumb: the number of training examples should be at least five to ten times the number of weights in the network.
- Another rule, often given as N >= |W| / (1 - a), where:
  |W| = number of weights
  a = expected accuracy on the test set

Training Basics
- The most basic method of training a neural network is trial and error.
- If the network isn't behaving the way it should, change the weighting of a random link by a random amount. If the accuracy of the network declines, undo the change and make a different one.
- It takes time, but the trial and error method does produce results.

Training: Backprop Algorithm
- The backprop algorithm searches for weight values that minimize the total error of the network over the set of training examples (the training set).
- Backprop consists of the repeated application of the following two passes:
  - Forward pass: the network is activated on one example and the error of (each neuron of) the output layer is computed.
  - Backward pass: the network error is used for updating the weights. Starting at the output layer, the error is propagated backwards through the network, layer by layer. This is done by recursively computing the local gradient of each neuron.
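
A compact, self-contained sketch of the two passes described above for a small network with sigmoid activations. The architecture (2 inputs, 2 hidden neurons, 1 output, all with bias weights), the learning rate, and the training data (logical OR) are illustrative choices, not taken from the slides:

    import math
    import random

    def f(e):
        # Sigmoid activation: y = f(e)
        return 1.0 / (1.0 + math.exp(-e))

    random.seed(1)
    # Weights: per hidden neuron [w1, w2, bias]; output neuron [v1, v2, bias]
    w_h = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
    w_o = [random.uniform(-0.5, 0.5) for _ in range(3)]
    eta = 0.5   # learning coefficient

    # Tiny illustrative training set (logical OR): (x1, x2) -> target z
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

    for _ in range(2000):
        for (x1, x2), z in data:
            # Forward pass: activate the network on one example
            h = [f(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            y = f(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
            # Backward pass: compute the error signals and update the weights
            d_out = (z - y) * y * (1 - y)                                  # output error times df(e)/de
            d_hid = [w_o[n] * d_out * h[n] * (1 - h[n]) for n in range(2)]
            w_o = [w_o[0] + eta * d_out * h[0],
                   w_o[1] + eta * d_out * h[1],
                   w_o[2] + eta * d_out]
            for n in range(2):
                w_h[n][0] += eta * d_hid[n] * x1
                w_h[n][1] += eta * d_hid[n] * x2
                w_h[n][2] += eta * d_hid[n]

    # After training, the outputs should move toward the targets 0, 1, 1, 1
    for (x1, x2), z in data:
        h = [f(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        print((x1, x2), "->", round(f(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2]), 2))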

Back Propagation
- Back-propagation training algorithm: network activation (forward step), then error propagation (backward step).
- Backprop adjusts the weights of the NN in order to minimize the network's total mean squared error.

Architecture: Feedforward Network
A single-layer network of S logsig neurons having R inputs is shown in the accompanying figure, in full detail on the left and with a layer diagram on the right.

Architecture: Feedforward Network
Feedforward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear and linear relationships between input and output vectors. The linear output layer lets the network produce values outside the range -1 to +1. On the other hand, if you want to constrain the outputs of a network (such as between 0 and 1), then the output layer should use a sigmoid transfer function (such as logsig).

Learning Algorithm: Backpropagation
The following slides describe the training ("teaching") process of a multi-layer neural network employing the backpropagation algorithm. To illustrate this process, a three-layer neural network with two inputs and one output, shown in the original picture, is used.

Learning Algorithm: Backpropagation
Each neuron is composed of two units. The first unit adds the products of the weight coefficients and the input signals. The second unit realizes a nonlinear function, called the neuron transfer (activation) function. Signal e is the adder output signal, and y = f(e) is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.
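
Stated as formulas (a direct restatement of the sentence above, written in the standard form rather than copied from the slide image):

    e = \sum_i w_i x_i, \qquad y = f(e)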

Learning Algorithm: Backpropagation
To train the neural network we need a training data set. The training data set consists of input signals (x1 and x2) assigned corresponding targets (desired outputs) z. The network training is an iterative process: in each iteration the weight coefficients of the nodes are modified using new data from the training data set. The modification is calculated using the algorithm described below. Each teaching step starts with applying both input signals from the training set; after this stage we can determine the output signal values for each neuron in each network layer.

Learning Algorithm: Backpropagation
The pictures on the original slides illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of connections between network input xm and neuron n in the input layer. Symbols yn represent the output signal of neuron n.
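
With that notation, the first-layer activations that the omitted pictures presumably show take the standard form for each input-layer neuron n (a reconstruction, not a copy of the slides):

    y_n = f\big( w_{(x_1)n}\, x_1 + w_{(x_2)n}\, x_2 \big)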

Learning Algorithm: Backpropagation
Propagation of signals through the hidden layer. Symbols wmn represent the weights of connections between the output of neuron m and the input of neuron n in the next layer.
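
Correspondingly, for a neuron n in the hidden layer the propagated signal has the standard form (again a reconstruction, not copied from the slide images):

    y_n = f\Big( \sum_m w_{mn}\, y_m \Big)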

Learning Algorithm: Backpropagation
Propagation of signals through the output layer.

Learning Algorithm: Backpropagation
In the next algorithm step, the output signal of the network y is compared with the desired output value (the target) z, which is found in the training data set. The difference is called the error signal d of the output-layer neuron.
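
In symbols, restating the sentence above:

    d = z - y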

Learning Algorithm: Backpropagation
The idea is to propagate the error signal d (computed in a single teaching step) back to all neurons whose output signals were inputs to the neuron under discussion.

Learning Algorithm: Backpropagation
The weight coefficients wmn used to propagate the errors back are equal to those used when computing the output value. Only the direction of data flow is changed (signals are propagated from outputs to inputs, one layer after the other). This technique is used for all network layers. If the propagated errors come from several neurons, they are added, as illustrated in the original figure.
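
So for a neuron m that feeds neurons n of the following layer, the back-propagated error signal takes the standard form (reconstructed from the description above):

    d_m = \sum_n w_{mn}\, d_n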

Learning Algorithm: Backpropagation
When the error signal for each neuron has been computed, the weight coefficients of each neuron's input connections may be modified. In the formulas on the original slides, df(e)/de represents the derivative of the activation function of the neuron whose weights are being modified.
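
The weight update the slides describe then has the standard form (with η the learning coefficient and y_m the signal on the connection's input, i.e. x_m for the first layer; a reconstruction, not copied from the slide images):

    w'_{mn} = w_{mn} + \eta\, d_n\, \frac{d f_n(e)}{d e}\, y_m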
