Lomonosov Moscow State University Cognitive Seminar, 6/10/2004: Dynamic attention and predictive tracking

Dynamic attention and predictive tracking
Todd S. Horowitz
Visual Attention Laboratory, Brigham & Women’s Hospital / Harvard Medical School

Lab photo: Sarah Klieger, Jennifer DiMase, George Alvarez, David Fencsik, Randy Birnkrant, Jeremy Wolfe, Helga Arsenio, Linda Tran (not pictured)

Multi-element visual tracking task (MVT)
• Devised by Pylyshyn & Storm (1988)
• A method for studying attention to dynamic objects

Multi-element visual tracking task (MVT)
• Present several (8-10) identical objects
• Cue a subset (4-5) as targets
• All objects move independently for several seconds
• Observers are asked to indicate which objects were cued

Demo (mvt 4)

Interesting facts about MVT
• Observers can track 4-5 objects (Pylyshyn & Storm, 1988)
• Tracking survives occlusion (Scholl & Pylyshyn, 1999)
• Involves parietal cortex (Culham et al., 1998)
• “Clues to objecthood” (Scholl)

Accounts of MVT performance
• FINSTs (Pylyshyn, 1989)
• Virtual polygons (Yantis, 1992)
• Object files (Kahneman & Treisman, 1984)
• “Object-based attention”

These are all (partially) wrong
• FINSTs (Pylyshyn, 1989)
• Virtual polygons (Yantis, 1992)
• Object files (Kahneman & Treisman, 1984)
• “Object-based attention”

Common assumptions
• A low-level (1st-order) motion system updates a higher-level representation
  – FINST
  – Object file
  – Virtual polygon
• Continuous computation in the present

Overview
• MVT and attention
• Tracking across the gap
• Tracking trajectories

MVT and attention
• Clearly a limited-capacity resource
• Attentional priority to tracked items (Sears & Pylyshyn)
• Hypothesis: MVT is mutually exclusive with other attentional tasks
(with George Alvarez, Helga Arsenio, Jennifer DiMase, Jeremy Wolfe)

MVT and attention
• Clearly a limited-capacity resource
• Attentional priority to tracked items (Sears & Pylyshyn)
• Hypothesis: MVT is mutually exclusive with visual search

MVT and attention
• Clearly a limited-capacity resource
• Attentional priority to tracked items (Sears & Pylyshyn)
• Hypothesis: MVT is mutually exclusive with visual search
• Method: Attentional Operating Characteristic (AOC)

AOC Theory

General methods: normalization
• Single-task performance = 100
• Chance = 0
• Dual-task performance scaled to the distance between single-task performance and chance (see the sketch below)
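
The slide gives only the endpoints of the scale; below is a minimal sketch of the implied normalization, assuming a linear mapping between chance and single-task performance (the function and variable names are illustrative, not from the talk).

```python
def normalize_score(dual, single, chance):
    """Scale dual-task accuracy so single-task performance maps to 100
    and chance maps to 0 (assumed linear scaling between the two endpoints)."""
    return 100.0 * (dual - chance) / (single - chance)

# Example: 2AFC search (chance = 50%), single-task accuracy 90%, dual-task 82%
print(normalize_score(82.0, 90.0, 50.0))  # 80.0
```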

General methods: staircases
• Up step (following an error) = 2 × down step
• Asymptote = 66.7% accuracy
• Staircase runs until 20 reversals
• Asymptote computed on the last 10 reversals (see the sketch below)
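
A minimal sketch of this weighted up/down staircase, assuming the staircased quantity is a stimulus level such as a duration (step sizes, starting level, and the `respond()` callback are placeholders, not values from the talk). With an up step twice the down step, the procedure converges where 2·p(error) = p(correct), i.e. about 66.7% correct.

```python
def run_staircase(respond, start_level, down_step, n_reversals=20):
    """Weighted up/down staircase: the step up (after an error) is twice the
    step down (after a correct response). `respond(level)` returns True on a
    correct response. Runs until `n_reversals` reversals; the asymptote is
    the mean level over the last 10 reversals."""
    level = start_level
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        correct = respond(level)
        direction = -1 if correct else +1      # harder (shorter) after a correct response
        if last_direction is not None and direction != last_direction:
            reversals.append(level)            # direction changed: record a reversal
        last_direction = direction
        level += direction * (down_step if correct else 2 * down_step)
    return sum(reversals[-10:]) / 10.0         # asymptote from the last 10 reversals
```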

General methods: tracking
• 10 disks
• 5 disks cued
• Speed = 9°/s

AOC Theory

AOC reality
• Tasks can interfere at multiple levels
• Interference can occur even when the resource of interest (here, visual attention) is not shared
• How “independent” are two attention-demanding tasks that do not share visual attention resources?

Gold standard: tracking vs. tone detection

Gold standard method
• Tracking
  – Duration = 6 s
• Tone task
  – 10 600-Hz tones
  – Onset t = 1 s
  – ITI = 400 ms
  – Distractor duration = 200 ms
  – Task: is the target tone longer or shorter?
  – Target duration staircased (31 ms)
• Dual-task priority varied
• N = 10

Gold standard AOC

Tracking + search method
• Tracking
  – Duration = 5 s
• Search
  – 2AFC: “E” vs. “N”
  – Distractors = rest of the alphabet
  – Set size = 5
  – Duration staircased (mean = 156 ms)
  – Onset = 2 s
• N = 9

Tracking + search method

Tracking + search AOC

Tracking + search AOC

Does tracked status matter? [example search display: a T among Ls]

Method
• Tracking
  – Duration = 3 s
• Search
  – 2AFC: left- or right-pointing T
  – Distractors = rotated Ls
  – Set size = 5
  – Duration staircased (mean = 218 ms)
  – Onset = 1 s
• N = 9

Search inside the tracked set [example display]

Search outside the tracked set [example display]

Design: search inside the tracked set vs. search outside the tracked set, run mixed or blocked

inside vs. outside AOC

Does spatial separation matter? [example display: letters P, E, H, F, V]

Method
• Tracking
  – Duration = 5 s
• Search
  – 2AFC: “E” vs. “N”
  – Distractors = rest of the alphabet
  – Set size = 5
  – Duration = 200 ms
  – Onset = 2 s
• N = 9

spatial separation AOC

Search vs. tracking summary

MVT and search
• Clearly not mutually exclusive
• Not pure independence
• Close to the gold standard
• Do MVT and search use independent resources?

Two explanations
• Separate attention mechanisms
• Time sharing

Predictions of the time-sharing hypothesis
• Should be able to leave the tracking task for significant periods with no loss of performance
• Should be able to do something in that interval

Track across the gap method

Track across the gap method
• Track 4 of 8 disks
• Speed = 6°/s
• Blank interval onset = 1, 2, or 3 s
• Trajectory variability: 0°, 15°, 30°, or 45° every 20 ms (see the sketch below)
• Blank interval duration staircased (dependent variable)
• N = 11
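
The slide does not state the exact perturbation rule; below is a hedged reconstruction of the trajectory-variability manipulation, assuming the heading is perturbed every 20 ms by a random angle drawn uniformly from ±(0°, 15°, 30°, or 45°). Wall bouncing and inter-item repulsion are omitted for brevity, and all names are illustrative.

```python
import math
import random

def jittered_path(x, y, speed, jitter_deg, dt=0.02, steps=150):
    """Generate one disk's path: every 20 ms (dt) the heading changes by a
    random angle in +/- jitter_deg; positions are in degrees of visual angle,
    speed in deg/s. Returns the list of (x, y) positions."""
    heading = random.uniform(0.0, 2.0 * math.pi)
    path = [(x, y)]
    for _ in range(steps):
        heading += math.radians(random.uniform(-jitter_deg, jitter_deg))
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        path.append((x, y))
    return path

# Example: 3 s of motion at 6 deg/s with 30 deg of trajectory variability
print(len(jittered_path(0.0, 0.0, speed=6.0, jitter_deg=30.0)))  # 151 samples
```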

track across the gap asymptotes

Predictions of the time-sharing hypothesis
• Should be able to leave the tracking task for significant periods with no loss of performance (see also Yin & Thornton, 1999): confirmed
• Should be able to do something (e.g. search) in that interval

Search during the gap: method
• AOC method
• Tracking task same as before
• Search task in the blank interval
  – Target = rotated T
  – Distractors = rotated Ls
  – Set size = 8
  – 4AFC: report the orientation of the T
• Duration of the search task staircased (326 ms)

search during gap AOC

Predictions of the time-sharing hypothesis
• Should be able to leave the tracking task for significant periods with no loss of performance (see also Yin & Thornton, 1999): confirmed
• Should be able to do something (e.g. search) in that interval: confirmed

Summary
• MVT and visual search can be performed independently in the same trial
• May support independent “visual attention” mechanisms
• May support time sharing

Summary
• Tracking across the gap data support time sharing
• Tracking across the gap data raise new questions

What is the mechanism?
• Not a continuous computation in the present
• Not first-order motion mechanisms
• Not apparent motion
(with Randall Birnkrant, Jennifer DiMase, Sarah Klieger, Linda Tran, Jeremy Wolfe)

None of these theories fit
• FINSTs (Pylyshyn, 1989)
• Virtual polygons (Yantis, 1992)
• Object files (Kahneman & Treisman, 1984)

What is the mechanism?
• Some sort of amodal perception? (e.g. tracking behind occluders, Scholl & Pylyshyn, 1999)
• ... but there are no occlusion cues!

Scholl & Pylyshyn, 1999

Maybe the gap is just an impoverished occlusion stimulus
• No occlusion/disocclusion cues
• Synchronous disappearance

Predictions of the impoverished-occlusion hypothesis
• Occlusion cues will improve performance
• Asynchronous disappearance will improve performance

Method
• Track for 5 s
• Speed = 12°/s
• Track 4 of 10 disks
• Independent variables (blocked)
  – Gap duration: 107 ms, 307 ms, 507 ms
  – Occlusion cues: absent, present
  – Disappearances: synchronous, asynchronous
• N = 15

Synchronous disappearance: items become invisible but continue to move; all items reappear simultaneously

Synchronous disappearance + occlusion [animation: disocclusion begins]

Occlusion/Disocclusion

Asynchronous disappearance: one item at a time disappears but continues to move, then reappears

Asynchronous disappearance + occlusion: one item at a time begins to be occluded... moves while invisible... then disoccludes

comparing cue types

Occlusion hypothesis fails
• Occlusion cues don’t help
• Asynchronous disappearance doesn’t help

Method
• Track for 5 s
• Speed = 12°/s
• Synchronous condition only
• Independent variables (blocked)
  – Gap duration: 107 ms, 307 ms, 507 ms
  – Occlusion cues: absent, present
  – Track 4, 5, or 6 of 10 disks
• N = 11

comparing cue types

Occlusion hypothesis fails
• Occlusion cues don’t help
• Occlusion cues can actually harm performance
• Asynchronous disappearance doesn’t help

What is the mechanism?
• Not a continuous computation in the present
• Not first-order motion mechanisms
• Not apparent motion
• Not amodal perception (occlusion)

How do we reacquire targets?
• Remember the last location (backward)
• Store the trajectory (forward)
(with David Fencsik, Sarah Klieger, Jeremy Wolfe)

Location-matching account: the pre-gap target location is memorized; in the first post-gap frame, the item nearest the memorized location is identified as the target

Trajectory-matching account: the pre-gap target trajectory is memorized; in the first post-gap frame, the item on the target’s trajectory is identified as the target (a sketch contrasting the two accounts follows)
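
The talk does not spell out an algorithm for either account; below is a minimal sketch, assuming each account is implemented as a nearest-neighbor match in the first post-gap frame. Positions are (x, y) tuples in degrees, velocity is in deg/s, and all names are illustrative.

```python
def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_by_location(last_seen, post_gap_items):
    """Location-matching: pick the post-gap item nearest the target's
    last visible (pre-gap) location."""
    return min(post_gap_items, key=lambda p: dist(p, last_seen))

def match_by_trajectory(last_seen, velocity, gap_s, post_gap_items):
    """Trajectory-matching: extrapolate the memorized trajectory across the
    gap and pick the post-gap item nearest the predicted location."""
    predicted = (last_seen[0] + velocity[0] * gap_s,
                 last_seen[1] + velocity[1] * gap_s)
    return min(post_gap_items, key=lambda p: dist(p, predicted))
```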

Shifting the post-gap location: after the gap, an item can reappear at the expected post-gap location (+1), at its last visible pre-gap location (0), or at the opposite of the expected location along the stimulus trajectory (-1); see the sketch below
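
A small sketch of the three reappearance conditions, assuming the shift is one expected gap displacement along the pre-gap trajectory (condition = +1, 0, or -1; the function name and units are illustrative).

```python
def reappearance_position(last_seen, velocity, gap_s, condition):
    """Where a target reappears after the gap:
       +1 -> expected post-gap location (extrapolated along the trajectory)
        0 -> last visible pre-gap location
       -1 -> opposite of the expected location (shifted backward along the trajectory)."""
    dx, dy = velocity[0] * gap_s, velocity[1] * gap_s
    return (last_seen[0] + condition * dx, last_seen[1] + condition * dy)

# Example: target last seen at (3.0, 1.0) deg, moving at (8.0, 0.0) deg/s, 300 ms gap
print(reappearance_position((3.0, 1.0), (8.0, 0.0), 0.3, +1))  # (5.4, 1.0)
print(reappearance_position((3.0, 1.0), (8.0, 0.0), 0.3, -1))  # (0.6, 1.0)
```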

shifting post-gap location predictions

Shifting post-gap location: methods
• Track for 5 s
• Speed = 8°/s
• Track 5 of 10 disks
• Gap duration = 300 ms
• Post-gap location condition blocked
• Stimuli continue to move after the gap

shifting post-gap location

Location vs. trajectory-matching
• Support for location-matching (see also Keane & Pylyshyn, 2003, 2004)
• But the advantage for -1 is suspicious

Location vs. trajectory-matching [diagram: display frames labeled +1.0, +1.5, and +2.0 over time]

Shift & stop: methods
• Track for 4-6 s
• Speed = 9°/s
• Track 2 or 5 of 10 disks
• Gap duration = 300 ms
• Post-gap location condition blocked
• Stimuli stop after the gap

moving vs. static after gap

moving vs. static after gap

2 vs. 5 targets

Location vs. trajectory-matching
• Support for location-matching
• However...
  – Conditions are blocked
  – Observers might see their task not as tracking across the gap, but as learning which condition they’re in
  – Might not tell us about normal target recovery

Location vs. trajectory-matching
• Can subjects use trajectory information?
• Always have items move during the gap
• Vary whether trajectory information is available or not

Moving condition [animation: invisible motion during the gap]

Static condition [animation: invisible motion during the gap]

Manipulating pre-gap information: methods
• Track for 4 s
• Speed = 9°/s
• Track 1 to 4 of 10 disks
• Gap duration = 300 ms

manipulate pre-gap information

manipulate pre-gap information

Location vs. trajectory-matching
• Observers can use trajectory information
• Unlimited (or at least > 4) capacity for locations
• Smaller (1 or 2 item) capacity for trajectories

Conclusions
• A flexible attention system allows rapid switching between MVT and other attention-demanding tasks
• Some representation allows recovery of tracked targets after 300-400 ms gaps
• This representation includes location and trajectory information

Speculation
• MVT reveals two mechanisms, rather than just one
• A frequently (but perhaps not continuously) updated location store
• Attention to trajectories