Financial Informatics XIV: Basic Principles
Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin 2, IRELAND
November 19th, 2008. https://www.cs.tcd.ie/Khurshid.Ahmad/Teaching.html
Neural Networks / Artificial Neural Networks
The basic premise of this course, Neural Networks, is to introduce our students to an alternative paradigm for building information systems.
Artificial Neural Networks
An ANN system can be characterised by:
• its ability to learn;
• its dynamic capability; and
• its interconnectivity.
Artificial Neural Networks: An Operational View
[Figure: model of a neuron x_k. Input signals x_1, x_2, x_3, x_4 arrive over links carrying synaptic weights w_k1, w_k2, w_k3, w_k4; a summing junction Σ combines them with the bias b_k; an activation function then produces the output signal y_k.]
Artificial Neural Networks: An Operational View
A neuron is an information-processing unit forming the key ingredient of a neural network; the diagram above is a model of a biological neuron. The model of the neuron labelled x_k, which is connected to the (rest of the) neurons in the network labelled x_1, x_2, x_3, …, x_j, has three key ingredients. The first is a set of links, the equivalent of biological synapses, which the kth neuron has with the (rest of the) neurons in the network. Each link has a WEIGHT, denoted w_k1, w_k2, …, w_kj, where the first subscript (k in this case) denotes the recipient neuron and the second subscript (1, 2, 3, …, j) denotes the neuron transmitting to it. A synaptic weight w_kj may lie in a range that includes negative (inhibitory) as well as positive (excitatory) values. (From Haykin 1999: 10-12)
Artificial Neural Networks: An Operational View
The second ingredient is the summing junction, or adder, denoted Σ, at which the kth neuron adds up the inputs from all the transmitting neurons. The adder acts as a linear combiner and generates a weighted sum, usually denoted u_k:
u_k = w_k1*x_1 + w_k2*x_2 + w_k3*x_3 + … + w_kj*x_j.
The bias b_k has the effect of increasing or decreasing the net input to the activation function, depending on the sign of the bias. (From Haykin 1999: 10-12)
ANNs: an Operational View
Finally, the linear combination, denoted v_k = u_k + b_k, is passed through the activation function, which engenders the non-linear behaviour seen in biological neurons: the inputs to and outputs from a given neuron show a complex, often non-linear, relationship. For example, if the output of the adder is positive or zero the neuron emits a signal, y_k = 1 if v_k ≥ 0; if the output of the adder is negative there is no output, y_k = 0 if v_k < 0. There are other models of the activation function, as we will see later. (From Haykin 1999: 10-12)
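The summing-and-thresholding behaviour just described can be sketched in a few lines of Python (a minimal illustration of the model above; the function name is ours, not from the slides):

```python
def neuron_output(weights, inputs, bias):
    """One artificial neuron: linear combiner plus hard threshold.

    Computes u_k = sum_j w_kj * x_j, then v_k = u_k + b_k, and emits
    y_k = 1 if v_k >= 0, else 0 (the step activation described above).
    """
    u = sum(w * x for w, x in zip(weights, inputs))  # summing junction
    v = u + bias                                     # add the bias b_k
    return 1 if v >= 0 else 0                        # step activation
```

For example, with weights [1, 1], inputs [1, 1] and bias −3, the net input v = −1 is negative, so the neuron stays silent.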
ANNs: an Operational View
Discontinuous Output
[Figure: the neuron model with a step activation function f(net): no output below the threshold θ, and a (normalised) output (e.g. 1) once the net input reaches θ.]
ANNs: an Operational View
[Figure: the neuron model, as above.]
The notion of a discontinuous activation function captures the fundamental observation that biological neurons usually fire only if there is ‘enough’ stimulus available in the environment. But a strictly discontinuous output is biologically implausible, so there must be some degree of continuity in the output if an artificial neuron is to retain a degree of biological plausibility.
ANNs: an Operational View
Pseudo-Continuous Output
[Figure: the neuron model with a pseudo-continuous activation function f(net): output α below the threshold θ, rising to the output β at and beyond the saturation threshold θ′.]
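One way to realise such a pseudo-continuous output is a piecewise-linear (ramp) function. The sketch below is our own illustration; the parameter names follow the figure's labels (α, β, θ, θ′):

```python
def ramp_activation(net, theta, theta_sat, alpha=0.0, beta=1.0):
    """Pseudo-continuous activation: alpha below the threshold theta,
    beta above the saturation threshold theta_sat, linear in between,
    so small changes in net input give small changes in output."""
    if net <= theta:
        return alpha
    if net >= theta_sat:
        return beta
    # linear interpolation between the two thresholds
    return alpha + (beta - alpha) * (net - theta) / (theta_sat - theta)
```

Unlike the step function, this activation is continuous everywhere, yet it still saturates, keeping the all-or-none flavour of the biological neuron.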
ANNs: an Operational View
[Figure: a schematic for an 'electronic' neuron: input signals x_1…x_4 with weights w_k1…w_k4 and bias b_k feed the summing junction Σ and activation function of neuron x_k, producing the output signal y_k.]
ANNs: an Operational View
Neural Nets as directed graphs
A directed graph is a geometrical object consisting of a set of points (called nodes) along with a set of directed line segments (called links) between them. A neural network is a parallel, distributed information-processing structure in the form of a directed graph.
ANNs: an Operational View
[Figure: a processing unit with multiple input connections and a single output connection that fans out to other units.]
ANNs: an Operational View
A neural network comprises:
• a set of processing units;
• a state of activation;
• an output function for each unit;
• a pattern of connectivity among units;
• a propagation rule for propagating patterns of activity through the network;
• an activation rule for combining the inputs impinging on a unit with the current state of that unit to produce a new level of activation for the unit;
• a learning rule whereby patterns of connectivity are modified by experience;
• an environment within which the system must operate.
The McCulloch-Pitts Network
McCulloch and Pitts demonstrated that any logical function can be duplicated by some network of all-or-none neurons, referred to as an artificial neural network (ANN). Thus, an artificial neuron can be embedded into a network in such a manner as to fire selectively in response to any given spatio-temporal array of firings of other neurons in the ANN.
Artificial Neural Networks for Real Neuroscientists: Khurshid Ahmad, Trinity College, 28 Nov 2006
The McCulloch-Pitts Network
Demonstrates that any logical function can be implemented by some network of neurons.
• There are rules governing the excitatory and inhibitory pathways.
• All computations are carried out in discrete time intervals.
• Each neuron obeys a simple form of a linear threshold law: a neuron fires whenever at least a given (threshold) number of the excitatory pathways impinging on it, and no inhibitory pathways, were active in the previous time period.
• If a neuron receives a single inhibitory signal from an active neuron, it does not fire.
• The connections do not change as a function of experience: the network deals with performance, not learning.
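The firing rule in the bullets above can be written directly as code (a sketch of the rule; the function name is ours):

```python
def mp_fires(n_active_excitatory, n_active_inhibitory, threshold):
    """McCulloch-Pitts firing rule: the cell fires iff at least
    `threshold` excitatory pathways were active in the previous time
    period and no inhibitory pathway was active (absolute veto)."""
    if n_active_inhibitory > 0:
        return False            # a single inhibitory signal blocks firing
    return n_active_excitatory >= threshold
```

Note the asymmetry: excitation is counted against a threshold, while a single active inhibitory pathway vetoes firing outright.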
The McCulloch-Pitts Network
Computations in a McCulloch-Pitts network:
• ‘Each cell is a finite-state machine and accordingly operates in discrete time instants, which are assumed synchronous among all cells. At each moment, a cell is either firing or quiet, the two possible states of the cell’; the firing state produces a pulse and the quiet state has no pulse. (Bose and Liang 1996: 21)
• ‘Each neural network built from McCulloch-Pitts cells is a finite-state machine, and every finite-state machine is equivalent to, and can be simulated by, some neural network.’ (ibid: 23)
• ‘The importance of the McCulloch-Pitts model is its applicability in the construction of sequential machines to perform logical operations of any degree of complexity. The model focused on logical and macroscopic cognitive operations, not detailed physiological modelling of the electrical activity of the nervous system. In fact, this deterministic model with its discretization of time and summation rules does not reveal the manner in which biological neurons integrate their inputs.’ (ibid: 25)
The McCulloch-Pitts Network
Consider a McCulloch-Pitts network which can act as a minimal model of the sensation of heat felt when a cold object is held to the skin and then removed, versus being left on permanently. Each cell has a threshold of TWO, and hence fires whenever it receives two excitatory (+) and no inhibitory (−) signals from other cells at the previous time step.
The McCulloch-Pitts Network
Heat-Sensing Network
[Figure: heat receptor (1) and cold receptor (2) feed, via excitatory (+) links, the hidden cells a and b, which in turn feed the output cells ‘hot’ (3) and ‘cold’ (4).]
The McCulloch-Pitts Network
Heat-Sensing Network
Truth table of the firing neurons when the cold object contacts the skin and is then removed:

Time | Cell 1 | Cell 2 | Cell a | Cell b | Cell 3 | Cell 4
     | INPUT  | INPUT  | HIDDEN | HIDDEN | OUTPUT | OUTPUT
  1  |   No   |  Yes   |   No   |   No   |   No   |   No
  2  |   No   |   No   |  Yes   |   No   |   No   |   No
  3  |   No   |   No   |   No   |  Yes   |   No   |   No
  4  |   No   |   No   |   No   |   No   |  Yes   |   No
The McCulloch-Pitts Network
Heat-Sensing Network
The ‘feel hot’/‘feel cold’ neurons show how to create an OUTPUT UNIT RESPONSE to a given INPUT that depends ONLY on the previous input values: the absence or presence of a stimulus in the PREVIOUS time cycle plays the major role here. This is known as TEMPORAL CONTRAST ENHANCEMENT. The McCulloch-Pitts network demonstrates how this ENHANCEMENT can be simulated using an ALL-OR-NONE network.
The McCulloch-Pitts Network
Heat-Sensing Network
Truth table of the firing neurons for the case when the cold object is left in contact with the skin, a simulation of temporal contrast enhancement:

Time | Cell 1 | Cell 2 | Cell a | Cell b | Cell 3 | Cell 4
     | INPUT  | INPUT  | HIDDEN | HIDDEN | OUTPUT | OUTPUT
  1  |   No   |  Yes   |   No   |   No   |   No   |   No
  2  |   No   |  Yes   |  Yes   |   No   |   No   |   No
  3  |   No   |  Yes   |  Yes   |   No   |   No   |  Yes
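Both truth tables can be reproduced by a synchronous simulation. The wiring below is our reconstruction: the link multiplicities, and an inhibitory link from cell 2 to cell b, are inferred so that every cell has threshold two and the tables come out as shown; they are assumptions, not taken verbatim from the slides.

```python
def heat_network(cold_stimulus):
    """Synchronous simulation of the heat-illusion network (reconstructed wiring).

    Cells: 1 heat receptor, 2 cold receptor, a/b hidden, 3 'feel hot',
    4 'feel cold'. Every cell has threshold 2; inhibition is an absolute veto.
    Returns the sorted list of firing cells at each time step.
    """
    prev = {c: 0 for c in "12ab34"}
    history = []
    for cold in cold_stimulus:
        new = {
            "1": 0,                          # no heat stimulus in these runs
            "2": int(cold),                  # cold receptor mirrors the stimulus
            "a": int(2 * prev["2"] >= 2),    # two excitatory links from cell 2
            "b": int(2 * prev["a"] >= 2 and not prev["2"]),  # from a; veto by 2
            "3": int(2 * prev["1"] + 2 * prev["b"] >= 2),    # 'feel hot'
            "4": int(prev["2"] + prev["a"] >= 2),            # 'feel cold'
        }
        history.append(sorted(c for c in new if new[c]))
        prev = new
    return history
```

Briefly applied cold (one time step) ends with cell 3 ('feel hot') firing, while sustained cold makes cell 4 ('feel cold') fire, matching the two tables.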
The McCulloch-Pitts Network
Memory Models
[Figure: two small networks: a ‘three stimulus’ model (input neuron 1, intermediate neurons A and B, output neuron 2, all links excitatory) and a ‘permanent memory’ model (input neuron 1 feeding output neuron 2).]
The McCulloch-Pitts Network
Memory Models
In the permanent memory model, the output neuron has threshold 1; neuron 2 fires if the light has ever been on at any time in the past. Levine, D. S. (1991: 16)
[Figure: the permanent memory model: neuron 1 excites neuron 2.]
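The 'has the light ever been on' behaviour requires neuron 2 to keep itself firing once triggered, i.e. a self-excitatory loop on neuron 2 (our reading of the model; the sketch below is an illustration):

```python
def permanent_memory(light_sequence):
    """Neuron 1 senses the light; neuron 2 (threshold 1) receives
    neuron 1's output and its own previous output, so a single
    firing latches neuron 2 on forever."""
    n1 = n2 = 0
    out = []
    for light in light_sequence:
        n2 = int(n1 + n2 >= 1)   # uses previous-step values of n1 and n2
        n1 = int(light)          # neuron 1 mirrors the light
        out.append(n2)
    return out
```

Once the light has been on for one step, neuron 2 fires on every subsequent step regardless of the input.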
The McCulloch-Pitts Network
Memory Models
Consider the three-stimulus all-or-none neural network. In this network, neuron 1 responds to a light being on, and each of the neurons has threshold 3. In the three-stimulus model, neuron 2 fires after the light has been on for three time units in a row. All connections are unit positive.

Time | Cell 1 | Cell A | Cell B | Cell 2
  1  |  Yes   |   No   |   No   |   No
  2  |  Yes   |  Yes   |   No   |   No
  3  |  Yes   |  Yes   |  Yes   |   No
  4  |   No   |  Yes   |  Yes   |  Yes
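The table can be reproduced with thresholds of 3 throughout, provided the link multiplicities are chosen suitably. The multiplicities below are our assumption, picked so that the simulation matches the table above:

```python
def three_stimulus(light_sequence):
    """Cell 2 fires only after the light has been on three steps in a row.

    All thresholds are 3. Assumed wiring: three links 1->A; one link 1->B
    plus two links A->B; one link each from 1, A and B to cell 2.
    Returns (cell1, cellA, cellB, cell2) firing states per time step.
    """
    c1 = A = B = c2 = 0
    rows = []
    for light in light_sequence:
        nA = int(3 * c1 >= 3)            # A fires one step after the light
        nB = int(c1 + 2 * A >= 3)        # B needs 1 and A both active
        n2 = int(c1 + A + B >= 3)        # 2 needs 1, A and B all active
        c1 = int(light)                  # cell 1 mirrors the light
        A, B, c2 = nA, nB, n2
        rows.append((c1, A, B, c2))
    return rows
```

Three steps of light make cell 2 fire on the fourth step; two steps in a row never do.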
The McCulloch-Pitts Network
Why is a McCulloch-Pitts network an FSM? A finite state machine (FSM) is an AUTOMATON. An input string is read from left to right, the machine looking at each symbol in turn. At any time the FSM is in one of finitely many internal states. The state changes after each input symbol is read: the NEW STATE depends (only) on the symbol just read and on the current state.
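A toy FSM makes the definition concrete (an illustrative example of ours, not from the slides):

```python
def run_fsm(transitions, start, symbols):
    """Read symbols left to right; the new state depends only on the
    current state and the symbol just read."""
    state = start
    for s in symbols:
        state = transitions[(state, s)]
    return state

# Example: a two-state machine tracking the parity of '1's seen so far.
PARITY = {("even", "0"): "even", ("even", "1"): "odd",
          ("odd", "0"): "odd",  ("odd", "1"): "even"}
```

A McCulloch-Pitts network fits this mould because its cells' firing pattern at time t (the state) and the external inputs at time t together determine the firing pattern at time t+1.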
The McCulloch-Pitts Network
• ‘The McCulloch-Pitts model, though it uses an oversimplified formulation of neural activity patterns, presages some issues that are still important in current cognitive models. […] [Some] modern connectionist networks contain three types of units or nodes: input units, output units, and hidden units. The input units react to particular data features from the environment […]. The output units generate particular organismic responses […]. The hidden units are neither input nor output units themselves but, via network connections, influence output units to respond to prescribed patterns of input unit firings or activities. […] [This] input-output-hidden trilogy can be seen as analogous to the distinction between sensory neurons, motor neurons, and all other neurons (interneurons) in the brain.’
• Levine, Daniel S. (1991: 14-15)
The McCulloch-Pitts Network
Linear neuron: the output is the weighted sum of all the inputs. McCulloch-Pitts neuron: the output is the thresholded value of the weighted sum.
v_j = Σ_i w_ji x_i; y = f(v), where f is the activation function:
• Linear neuron: y = v.
• McCulloch-Pitts neuron: y = 0 if v ≤ 0; y = 1 if v > 0.
• Sigmoid activation function: f(v) = 1/(1 + exp(−v)).
Input vector: x = (1, −20, 4, −2). Weight vector: (w_j1, w_j2, w_j3, w_j4) = (0.8, 0.2, −1, −0.9); bias w_j0 = 0, x_0 = 0.
[Figure: inputs x_1…x_4 with weights w_j1…w_j4 and a 0th (bias) input feeding neuron j, which emits the output y_j.]
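Working through the slide's numbers (a quick check in code, ours not the slides'):

```python
import math

x = [1, -20, 4, -2]        # input vector from the slide
w = [0.8, 0.2, -1, -0.9]   # weight vector from the slide; bias w_j0 = 0

v = sum(wi * xi for wi, xi in zip(w, x))   # 0.8 - 4.0 - 4.0 + 1.8 = -5.4
linear_y = v                               # linear neuron: y = v
mp_y = 1 if v > 0 else 0                   # McCulloch-Pitts: hard threshold
sigmoid_y = 1 / (1 + math.exp(-v))         # sigmoid activation
```

So for this input the linear neuron outputs −5.4, the McCulloch-Pitts neuron outputs 0, and the sigmoid neuron outputs roughly 0.0045.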
The McCulloch-Pitts Network
Under what circumstances will a neuron with a sigmoidal activation function act like a McCulloch-Pitts neuron? Large synaptic weights.
Under what circumstances will a neuron with a sigmoidal activation function act like a linear neuron? Small synaptic weights.
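Both answers can be checked numerically: scaling the weights scales the net input v, and the sigmoid saturates for large |v| while being nearly linear, sigmoid(v) ≈ 0.5 + v/4, near v = 0 (our illustration):

```python
import math

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

# Large weights -> large |v|: the output saturates at 0 or 1,
# so the unit behaves like a McCulloch-Pitts threshold neuron.
big_v = 100 * 0.3
# Small weights -> tiny |v|: sigmoid(v) ~= 0.5 + v/4,
# so the unit behaves like a (shifted, scaled) linear neuron.
small_v = 0.01 * 0.3
```

The first-order Taylor expansion of the sigmoid at 0 has slope 1/4, which is where the linear approximation 0.5 + v/4 comes from.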
The McCulloch-Pitts Network
The key outcome of early research in artificial neural networks was a clear demonstration of both the theoretical importance (brain-like behaviour with a logical basis) and the extensive utility (e.g. regime-switching models) of threshold behaviour. This behaviour was emulated through the use of squashing functions and is the basis of many a simulation.