What Does Representation Explain? David Papineau, King’s College London and the Graduate Center, City University of New York. The Future of Teleosemantics, Bielefeld, 6–8 September 2018

Plan 1 Liberality 2 Swampman 3 Determinacy

Liberality Some argue that standard teleosemantics is overly liberal, because it counts as “representational” explanations of behaviour that amount to nothing but simple causal stories. In response, Schulte (following Burge) argues that genuinely representational explanations are robust, showing how representational states and subsequent behaviour are invariantly prompted by distal causes across a range of circumstances. But the liberality charge misses its target. Standard consumer-based teleosemantics doesn’t see representation as offering novel explanations of behaviour, but of success.

Liberality Consumer-based teleosemantics: Vehicle R represents C when a “consumer” responds to state R with some behaviour B that will achieve its biological purpose S if C obtains, and this consumer is served by a “producer” that relatedly has the biological purpose of generating R when condition C obtains. (In effect the consumer “interprets” R as standing proxy for C, in the sense that its behavioural response to R is biologically appropriate if C.)

Liberality In line with this, the attribution of content C to R allows us to explain not just (1) how R prompts B, but also (2) how R in conjunction with C leads to S. The essential point of attributions of content is to allow us to keep track of when an organism’s behaviour will and won’t lead to distal success. (If all we were interested in was predicting behaviour, we’d only need the causal role of vehicles R, and could ignore their contents C.)
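A minimal toy sketch of this producer-consumer set-up (purely illustrative; the function names and stimuli are made up, and the frog-style detector is idealised): behaviour B is predictable from the causal role of the vehicle R alone, while success S also depends on whether the content condition C obtains.

# Illustrative toy model: a frog-style producer-consumer system.
# The producer tokens vehicle R for any small dark moving thing, so R can be
# caused by a fly (C obtains) or by a BB pellet (C fails to obtain).

def producer(stimulus: str) -> bool:
    """Producer: tokens the vehicle R when it detects a small dark moving thing."""
    return stimulus in ("fly", "bb pellet")

def consumer(r: bool) -> str:
    """Consumer: responds to R with behaviour B (a tongue snap)."""
    return "snap" if r else "wait"

def succeeds(behaviour: str, stimulus: str) -> bool:
    """Biological success S: the snap yields food only if C (a fly) obtains."""
    return behaviour == "snap" and stimulus == "fly"

# Behaviour B is fixed by R's causal role alone; success S also needs C.
for stimulus in ("fly", "bb pellet", "leaf shadow"):
    r = producer(stimulus)
    b = consumer(r)
    print(f"{stimulus:11s} R={r!s:5s} B={b:4s} S={succeeds(b, stimulus)}")

The fly and BB-pellet cases yield the same R and the same B, so predicting the behaviour never required C; only the success column separates them, and that is the work the content attribution does.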

Liberality Does this really deal with the charge that teleosemantics counts merely causal explanations as “representational”? Isn’t explaining success in terms of R and C just another causal explanation? But this ignores a pervasive general pattern: many different organisms have many different Rs and Bs that have been designed to respond to some C to achieve some success S; we often invoke this general pattern in ignorance of specific Cs. (“Don’t worry, he always knows where beer can be found.”)

Liberality True, this teleosemantics will be pretty liberal. Magnetosomes might be out, for lack of sufficient producer-consumer structure, but hormones like vasopressin will be in. I have no big objection to adding more requirements, such as robustness (Schulte, Burge, Shea) or learning (Dretske) to forestall intuitive objections (as robustness and learning will accompany nearly all cases of teleosemantic representation). But the distinctive explanatory contribution of representation still lies in the truth-success link; after all, robustness and learning are characteristic of many different processes that have nothing to do with representation.

Swampman I once argued that swampman is no more an objection to teleosemantics than XYZ is an objection to water = H2O: as an a posteriori reduction (of representational properties to certain selectional properties) it is immune to merely conceivable (but impossible) counterexamples. Schulte has objected to the analogy with water = H2O: actual coextensiveness does not suffice for reduction; after all, water = the odourless & colourless & tasteless stuff, but we don’t identify water with the r.h.s.; so, even if representational properties are actually coextensive with certain selectional properties, that doesn’t suffice to identify them. It’s only because the chemical structure explains the common properties of water that we accept water = H2O. However, not all kinds are eternal kinds like water, with the shared properties of the instances being explained by some intrinsic property (H2O) that the instances share.

Swampman A kind is any category whose instances share a plurality of properties (e.g. water, horse, but not red). These multiple correlations between the instances’ properties need a common cause. With eternal kinds this will be a common intrinsic property. With historical kinds (Catholic masses, horses) this will be a common origin. And with functional kinds (e.g. aerial insectivores) this will be a common selective pressure. In all these cases, we regard the “super-explanatory” common feature of the instances as the essence of the kind and so identify the kind with it. Water = H2O; horse = being descended from the original horses; aerial insectivore = being selected for catching insects.

Swampman Representation is a functional kind. Instances of representational systems (tend to) share many features: a consumer that treats some internal R as proxy for some C that fixes the success of resulting behaviour B; a producer that gears R to the presence of C, and moreover does this robustly across different circumstances; and learning that fine-tunes the sensitivity of producer and consumer. What explains the shared features of such systems is their common selective provenance. They have all been designed to optimally enable organisms to gear their behaviour to distal circumstances C. So this common cause is the essence of representation. Represents that C = was selected to gear behaviour to C. And, as with all such a posteriori identities, merely conceivable counterexamples like swampman carry no weight.

Determinacy Does the frog represent moving black dot, or fly? (Does the magnetosome represent north, or oxygen-free water? …) Millikan says the content is the condition required for biological success – so fly (or food or some such), not moving black dot. Neander argues that even if we stick to what the frog was selected to respond to, we still have a choice between moving black dot/fly/food/nutrition giver/reproduction enhancer. She says this is a special case of the “concertina” of functions which attaches to any selected trait. (The functions of an antelope gene can be to alter hemoglobin structure, and so increase oxygen uptake, and so ensure high-altitude survival, and so enhance reproduction.)

Determinacy Neander appeals to Cummins’ “functional analysis” and argues that the function specific to a trait T is its most immediate effect at the lowest level of analysis where T is an unanalyzed component. (It’s not T’s fault if more distal functions are unfulfilled due to other traits not fulfilling their specific functions.) Neander infers from this that the frog’s detection device has the function of detecting moving black dot.

Determinacy I agree with the appeal to Cummins, but not with the way Neander uses it. Take the signal that mediates between the frog’s eye and the tongue snapping. The standard Millikanian view is that we should look at the condition required for success. Neander objects that this could be any of fly/food/nutrition giver/reproduction enhancer.

Determinacy True enough. But now let’s apply Neander’s own functional analysis, but (unlike her) let consumption-success conditions fix content. There’s the tongue-snapping system (which, along with the throat, stomach, etc., is part of the nutrition system). What is its job? I say to catch flies/food. (It’s not always its fault if nutrition or reproduction don’t ensue, but it is when that’s because a BB pellet is caught.) Now decompose the tongue-snapping system into producer, signal, consumer. What’s the signal’s job? To prompt snapping when a fly/food is present. (Note how I am here assuming that a system can malfunction, even though healthy, simply because of unhelpful environments.)

Determinacy Cf. Millikan: “the least detailed Normal explanation of the specific type of behaviour prompted by R”. Cf. Shea: the correlation that best explains satisfaction of task functions, and more particularly that illuminates the specific role played by different representations. The “specific” in their accounts might work like my appeal to Cummins/Neander and to “specific function”.

Determinacy Possible objection (Cao, Shea): if we apply my line to subpersonal representation, we’ll end up with representational sub-systems whose states will represent other brain states, and not worldly conditions. I agree we get this consequence, and don’t see anything wrong with it. Take the dorsal visual system. We can regard this as a producer of representations that are consumed by the motor system in directing reachings and graspings. A classic teleosemantic case: the Rs represent the shape and location of external 3-D objects required for the success of the reachings and graspings.

Determinacy But now analyse the dorsal system. It has subsystems that (a) respond to V1 luminance and chromatic discontinuities and produce edge-relevant representations, and (b) respond to stereopsis and produce distance-relevant representations… A standard view is that these states represent external edges and distances. But that’s not how it comes out on my account. For me, the outputs of these subsystems represent facts about primary visual cortex activation. The subsystems have done their job if they have those internal facts right. It’s not necessarily their fault if their consumers (still within the larger dorsal system) end up wrong about real edges at certain distances (say because of some trick illusion or because the light was funny).

Determinacy That seems fine to me. Can teleosemantics allow that consumers have the function of producing representations? Sure – provided that their outputs are in turn consumed… by a consumer with a non-representational function (e.g. the motor system in the case at hand). Can the internal sub-representations misrepresent primary visual cortex activity? Sure – consider, say, cases where the “edge-detectors” fire due to random neural activity, or fail to fire because of sensory adaptation, thus misleading their consumers about V1 activity.

Classic teleosemantics is in perfectly fine shape. THE END