Distribution A. Approved for Public Release, ASC Case No. ASC 03-2048, 8/1/03

Adaptive SAR ATR Problem Set (AdaptSAPS) Ver. 1.0

Tim Ross, AFRL/SNAR
Angela Wise, AFRL COMPASE Center, JE Sverdrup
Donna Fitzgerald, AFRL SDMS, Veridian
Outline
• Acknowledgements
• Objective
• Background
• Key Organizations / Personnel
• Configuration Control
• Testing
• Data
• Tools
• Reporting
Acknowledgements
• Steve Welby, DARPA – Provided impetus for this problem set in comments during a SPIE '03 panel discussion, particularly that static problems have become less relevant to the problems faced by today's military
• Ed Zelnio, AFRL/SNA – Originated the idea of wrapping static problem sets with procedures that create dynamic problem sets
• Lannie Hudson, AFRL COMPASE Center, JE Sverdrup – Developed code for clutter chip generation, display, and analysis
• Mike Bryant, AFRL/SNA – Principally responsible for the original MSTAR data public release, suggested the exploitation emulation component, and allowed us to use chip database/server code he developed at Wright State University
• Ron Dilsavor, AFRL/SNA – Provided guidance on methods for SAR image manipulation, inclusion of synthetic effects, and constructive criticism of MOPs
• Mark Minardi, AFRL/SNA – Contributed the area-under-ROC-curve algorithm and MOP guidance generally
• ATRWG / DUSD – Developed guidelines for problem set definition, which were considered here
• Capt. Dave Parker, AFIT – Improvements in code and documentation based on AdaptSAPS beta testing
Objective
• AdaptSAPS 1.0
  – Foster basic research in adaptive algorithms for target detection in SAR imagery
  – Encourage consideration of self-assessed confidence
  – Support OC-conscious development and testing
• Future AdaptSAPS
  – Provide milestones for progress in adaptive system technology as applied to SAR exploitation
  – Provide a standard benchmarking problem set for comparing adaptive systems from different developers

Please provide feedback on how we can better meet these objectives.
Background • Problem / Data Sets – ATRWG Standard Problem Sets • Not Public Released • July 1994 – FLIR – http: //www. atrwg. vdl. afrl. af. mil/committees/database/standard_data_sets. html • Recent – SAR and Fusion – https: //restricted. atrwg. vdl. afrl. af. mil/problemset/ – MSTAR 1997+ Public Data Set • • SAR SDMS Public Released 150+ papers using MSTAR public data – Other Data Sets • 3 D Challenge Problem (Aerosense 2003) • SDMS - - https: //www. mbvlab. wpafb. af. mil/public/sdms • David Aha’s “Data Repository” for Machine Learning list – http: //www. aic. nrl. navy. mil/~aha/research/machine-learning. html 5
Background
• Technical Need
  – SPIE '03 Panel / Mr. Steve Welby, DARPA: "We no longer face static problems."
  – Varied nature of the problem
    • Well-demonstrated SAR ATR Operating Condition (OC) sensitivities
  – Difficulties of obtaining training data for all conditions of interest
Background
• Desired attributes of adaptive systems
  – Performs some function initially
    • AdaptSAPS 1.0: Target detection in SAR imagery
  – Performs better with experience
  – Knows how well it's doing (accurate self-assessed confidence)
  – May or may not have an initial batch training set
    • AdaptSAPS 1.0: Initial batch training set provided
  – Experience may or may not have supervision
    • AdaptSAPS 1.0: Supervision provided
Background
• Examples of things a system may want to adapt to:
  – greater resolution in aspect
  – target variability (versions and types)
  – type and difficulty of clutter and confuser images
  – prior probabilities of targets and nontargets
  – ...
Key Organizations / Personnel
• Coordination – AFRL/SNA
  – Tim Ross, AFRL/SNAR – timothy.ross@wpafb.af.mil
• Problem Set Definition – AFRL COMPASE Center
  – Angela Wise, JE Sverdrup – Angela.Wise@wpafb.af.mil
• Problem Set Distribution – AFRL SDMS
  – Donna Fitzgerald, Veridian – sdms_help@mbvlab.wpafb.af.mil
• Feedback / recommendations welcome
Configuration Control
• Security
  – All elements of this problem set are approved for public release
• Configuration control plan
  – The Problem Set will be managed by AFRL, based on inputs provided at the Algorithms for Synthetic Aperture Radar Imagery Conference of the SPIE International Symposium on Defense and Security (formerly AeroSense)
    • 1.0 – completely unsequestered
    • Future – sequestered data and OC dimensions
  – The Problem Set will be distributed by AFRL SDMS
Testing
• Methodology
• Measures of Performance (MOPs)
Methodology – Key Concepts
• Image chips (of targets and non-targets)
• Missions (series of image chips of a common character)
• SUT – System Under Test (your adaptive target detector)
• The AdaptSAPS main program calls
  – SUT Initialization
  – then loops through Missions
    • then loops through Image Chips, calling
      – SUT Exploit (passing a single test image chip to the SUT)
      – SUT Adapt (passing target truth for the test chip to the SUT)
    • then computes performance measures

Figure: AdaptSAPS flow for Mission i – test chip selection from the image chips and image chip truth, the SUT's Target? report, and the performance measure computation.
Methodology
• Initial batch training set is provided
• System Under Test (SUT) is taken to a "deployment"
  – AdaptSAPS initializes the SUT
  – AdaptSAPS "flies" a sequence of Missions
    • For each Mission, while images remain:
      » AdaptSAPS makes a test image (without target truth) available to the SUT
      » SUT analyzes the image
      » SUT reports the probability that it contains a target (ProbTgt)
      » AdaptSAPS then provides the SUT the target "truth" for the previous image, simulating the results of human exploitation
      » SUT adapts
  – AdaptSAPS reports Measures of Performance
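A minimal MATLAB sketch of this test/adapt loop is shown below. The tool names (egSutInit, egSutExploit, egSutAdapt, sarOracle, getMOPs) are taken from the tool list later in this briefing, but the argument lists shown here, and the chipTruth helper, are illustrative assumptions rather than the actual AdaptSAPS interfaces.

    % Illustrative sketch of the AdaptSAPS mission loop (assumed signatures).
    sutState  = egSutInit();                 % SUT initialization
    nMissions = 10;  nChips = 120;           % the basic missions
    probTgt = zeros(nMissions, nChips);
    truth   = zeros(nMissions, nChips);
    for m = 1:nMissions
        for k = 1:nChips
            chipFile = sarOracle(m, k);      % next chip written to ...test_image.000, truth withheld
            probTgt(m, k) = egSutExploit(sutState, m, chipFile);        % SUT reports ProbTgt
            truth(m, k)   = chipTruth(m, k); % hypothetical helper: Target/Nontarget revealed afterward
            sutState = egSutAdapt(sutState, m, chipFile, truth(m, k));  % SUT adapts
        end
    end
    mops = getMOPs(probTgt, truth);          % performance measures per mission / quartile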
Methodology
• The SUT may use for exploitation and adaptation whatever information is provided to it when called by the main AdaptSAPS program (run_missions.m), i.e.,
  – Mission Number
    • This does not include the information related to the definition of missions (e.g., prior probabilities)
    • The SUT is only being informed via the Mission Number that the mission has changed
  – Filename for the test image
    • Which will be ...test_image.000 throughout
    • This does include the information in the header of the test image. Note that all target, object, etc. fields have been removed from the test image header
    • The SUT should NOT use prior knowledge about the MSTAR data collection to then use site, time, lat, long, etc. from the header to inform exploitation or adaptation
  – True Target/Nontarget
    • This is provided at the Target/Nontarget level only; i.e., type, serial number, aspect, ..., are NOT provided
    • This is provided to the SUT only after the SUT has made its estimate of Target/Nontarget for that chip
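As a deliberately trivial illustration of an SUT that respects these rules, the sketch below maps a chip's peak-to-mean contrast to a ProbTgt and then nudges its threshold when the truth is revealed. This is NOT the example SUT shipped with AdaptSAPS; the function names, the chipMag magnitude matrix (decoding the MSTAR chip file is not shown), and the update rule are all assumptions for illustration only.

    % Toy SUT sketch (illustration only). state has fields .threshold and .rate.
    function probTgt = toyExploit(state, chipMag)
        % peak-to-mean contrast mapped through a logistic to a crude ProbTgt
        contrast = max(chipMag(:)) / (mean(chipMag(:)) + eps);
        probTgt  = 1 / (1 + exp(-(contrast - state.threshold)));
    end

    function state = toyAdapt(state, chipMag, isTarget)
        contrast = max(chipMag(:)) / (mean(chipMag(:)) + eps);
        probTgt  = 1 / (1 + exp(-(contrast - state.threshold)));
        % cross-entropy gradient step on the threshold: future reports on
        % similar chips move toward the revealed Target/Nontarget truth
        state.threshold = state.threshold - state.rate * (isTarget - probTgt);
    end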
Methodology (con't)

Figure: block diagram – Mission Definition and MSTAR Data feed AdaptSAPS, which exchanges data with the SUT and produces MOPs.

• AdaptSAPS Inputs / SUT Outputs
  – SUT Result (ProbTgt)
• AdaptSAPS Outputs / SUT Inputs
  – Initial Batch Training Set (offline)
  – Test Image
  – Truth (Target/Nontarget)
MOPs
• Scoring methods
  – Truth from headers, reports from the SUT
  – All testing at the chip level – no issues concerning location accuracy or truth-to-report "association" problems
  – Un-weighted averaging is used, since we already control the population mix in each Mission
• Generally interested in
  – adaptation efficiency
    • learning with fewer sequential data points,
    • taking fewer CPU cycles to perform each update,
    • limiting growth of required memory, ...
  – robustness
    • adapting to more and more extreme OCs
  – post-adaptation accuracy
    • Pd / FAR / Pid and
    • self-confidence accuracy
MOPs
• The following measure is proposed as something that encourages the desired behavior, but does so imperfectly. We encourage suggestions for better or simpler measures.

Please provide feedback on how we can improve MOPs.
MOPs
• From one perspective, a given set of test data and a given SUT produce two distributions on the reported ProbTgt – one for target test data and one for nontarget test data
• As is usual, we desire that the two distributions be well separated
  – This might be measured as
    • a probabilistic distance measure (e.g., Bhattacharyya distance)
    • Pfa at a fixed Pd
    • Pfa or (1-Pd) when they are equal
    • area under the ROC curve (as we do here)
• We also desire that the reported ProbTgt be accurate
  – i.e., of all the reports with confidence ProbTgt, the fraction that are actually targets should be about ProbTgt
  – This might be measured as
    • difference between actual and reported probabilities (as we do here)
    • mutual information between reported probabilities and correctness of decisions (see references in notes)
MOP-Adaptation (MOPA)
• Reported for
  – the overall experiment,
  – each Mission, and
  – each quartile of each Mission
• Objective is to encourage the SUT to
  – have accurate self-assessed confidence
  – differentiate targets from nontargets
MOPA (con't)
• MOPA = (E + (1 - D)) / 2
• Error (E)
  – An equal number of test instances is placed in each of 5 bins
  – E is the RMS, across the 5 bins, of the difference between the average reported ProbTgt in a bin and the actual target fraction in that bin
• Discrimination (D)
  – Area under the Pd-vs-Pfa ROC curve
MOPA (con't)

Figure: "worse" and "better" examples – the Error (E) panels plot true ProbTgt vs. reported ProbTgt against the 1:1 line; the Discrimination (D) panels plot Pd vs. Pfa ROC curves.
MOPA (con't)
• MOPA
  – Smaller is better
  – Should always be in [0, 1]
  – If a score set does not include both target and nontarget entries, then D is undefined and therefore MOPA is undefined
  – If a score set does not have at least one entry per bin, then E is undefined and therefore MOPA is undefined
  – The Error term includes a sampling bias, so it will vary at small sample sizes. Comparisons should only be made between similar sample sizes.
  – Note that the current MOPs depend solely on the SUT-reported score (estTargetProb) and do not use the SUT's Target/Nontarget decision (estTgtNontgt)
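The MATLAB sketch below shows one way MOPA might be computed from a vector of reported ProbTgt scores and the corresponding truth labels. getMOPs.m is the authoritative implementation; the binning and ROC details here (5 equal-count bins, RMS across bins, trapezoidal area under the Pd–Pfa curve) are assumptions consistent with the description on the preceding slides.

    % Sketch of MOPA = (E + (1 - D)) / 2 from reported scores and truth labels.
    function [mopa, E, D] = sketchMOPA(probTgt, isTarget)
        probTgt = probTgt(:);  isTarget = isTarget(:);
        % Error term E: 5 equal-count bins over the sorted scores; RMS difference
        % between average reported ProbTgt and actual target fraction per bin.
        [sorted, order] = sort(probTgt);
        truthSorted = isTarget(order);
        nBins = 5;
        edges = round(linspace(0, numel(sorted), nBins + 1));
        diffs = zeros(1, nBins);
        for b = 1:nBins
            idx = edges(b)+1 : edges(b+1);
            diffs(b) = mean(sorted(idx)) - mean(truthSorted(idx));
        end
        E = sqrt(mean(diffs.^2));
        % Discrimination term D: area under the Pd-vs-Pfa ROC curve.
        thresholds = [Inf; sort(unique(probTgt), 'descend')];
        Pd  = arrayfun(@(t) mean(probTgt(isTarget == 1) >= t), thresholds);
        Pfa = arrayfun(@(t) mean(probTgt(isTarget == 0) >= t), thresholds);
        D = trapz(Pfa, Pd);
        mopa = (E + (1 - D)) / 2;            % smaller is better
    end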
Data – Outline
• Data Characterization
  – References
  – Operating Condition (OC) Dimensions
• Initial Batch Training Data
• Adaptive Test / Train Data – "Missions"
  – Menu Options
  – Specific Missions
Data
• References
  – SDMS: https://www.mbvlab.wpafb.af.mil/public/sdms/datasets/mstar/overview.htm
  – Related publications for the MSTAR public data (in the notes section below)

Please provide additional citations. Please provide further insights on data characterization.
Target OCs – Candidate Dimensions
• Target
  – SN
  – Version
  – Articulation
  – Configuration
  – Type
  – Class
  – Dimensions
  – Prior Probabilities
• Sensing
  – Synthetic Noise
  – Depression
• Environment
  – Synthetic Shadow
  – Collection

See readme.txt for actual database fields
Target Data Characterization
This is a count of the number of target instances across the three public MSTAR target CDs (Targets, Mixed Targets, and T-72 Variants).
Target Data Characterization
This is a summary of the OCs present on the three public MSTAR target CDs (Targets, Mixed Targets, T-72 Variants).
N = Nominal
A = Articulation (t = turret, g = gun, h = hatch, f = firing rack, s = sight port, d = dish)
C = Configuration (f = fuel barrels, r = reactive armor)
V = Version Variant
T-72 Version Summary
• Version 3 – A32
• Version 2 – A62, A63, A64, S7
• Version 1 – 132, 812, A04, A05, A06, A07, A10
Target Sets
• T72 Nominal
  – T72s of the same version and configuration as SN 132
• T72 EOC
  – all T72s
  – equal priors across versions
• Tracked Types
  – all tracked types except Bulldozer (D7)
    • T62, T72, BMP2, 2S1, ZSU
  – equal priors across types
• Combat Types
  – Tracked Types plus all wheeled types except Truck (ZIL)
    • T62, T72, BMP2, 2S1, ZSU, BTRs, and BRDMs
  – equal priors across types
Confuser Sets
• None
• Slicy and Truck (ZIL131)
• Slicy, Truck, and Bulldozer (D7)
Clutter Candidate OC Dimensions
• Imaging geometry (depression, squint, ...)
  – Treated the same as Target OCs
• Clutter Features
  – See Clutter Characterization
• Confusers
  – Candidates include Slicy, D7, ZIL Truck

See readme.txt for actual database fields
Clutter Characterization
• 1160 chips from the MSTAR Public Release clutter, each 128 x 128 pixels
• Identifiers:
  – FS image, Row, Column, Chip name
• Features
  – Clutter type
  – Score
    • as assigned by a nominal ATR prescreener
  – Mean
  – Variance
  – Standard Deviation
  – RMS
  – Skewness
  – Kurtosis
  – Maximum
  – Total Integral – sum of pixel magnitudes across the entire chip
  – Zero-Valued Points
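The statistical features above are standard moments of the chip's pixel magnitudes. The MATLAB sketch below shows one plausible way to compute them for a 128 x 128 magnitude chip; the exact definitions used to build Clutter.xls may differ in detail, and chipMag here is only a placeholder for a loaded chip.

    % Plausible per-chip clutter feature computation (assumptions, not Clutter.xls source).
    chipMag = rand(128, 128);               % placeholder for a loaded 128x128 clutter chip
    x = chipMag(:);
    m = mean(x);
    s = std(x, 1);                          % population standard deviation
    feat.Mean             = m;
    feat.Variance         = s^2;
    feat.StdDev           = s;
    feat.RMS              = sqrt(mean(x.^2));
    feat.Skewness         = mean((x - m).^3) / s^3;
    feat.Kurtosis         = mean((x - m).^4) / s^4;
    feat.Maximum          = max(x);
    feat.TotalIntegral    = sum(x);         % sum of pixel magnitudes over the chip
    feat.ZeroValuedPoints = sum(x == 0);    % count of zero-valued pixels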
Clutter Type
• Features
  – Natural or Cultural
  – Isolated, Edge / Corner, or Homogeneous Surround
• Clutter type (C1 - C6)
  – C1 = Cultural Isolated Object FA (small building, vehicle, ...); 345 chips
  – C2 = Natural Isolated Object FA (tree, rock, ...); 310 chips
  – C3 = Cultural Edge / Corner FA (things from fences, roads, ...); 189 chips
  – C4 = Natural Edge / Corner FA (things from tree lines, streams, ...); 73 chips
  – C5 = Cultural Homogeneous Area FA (on a large building, parking lot, ...); 122 chips
  – C6 = Natural Homogeneous Area FA (on a grass field, forest canopy, ...); 120 chips
Clutter Type

Table: chip counts and score averages by clutter type.
Clutter Chip Examples
• C1 – Cultural Isolated Object FA: hb06188_814_483
• C2 – Natural Isolated Object FA: hb06204_729_817
• C3 – Cultural Edge/Corner FA: hb06188_631_445
• C4 – Natural Edge/Corner FA: hb06270_124_1503
• C5 – Cultural Homogeneous FA: hb06264_722_360
• C6 – Natural Homogeneous FA: hb06183_328_269
Clutter Chip Examples (ordered from most target-like to least target-like by prescreener score)
• Score = 2.67: hb06242_1257_905
• Score = 2.63: hb06258_537_1266
• Score = 0.46: hb06159_1122_251
• Score = 0.46: hb06197_183_1374
• Score = -0.47: hb06252_486_832
• Score = -0.47: hb06161_945_1169
• Score = -2.50: hb06204_729_817
• Score = -2.58: hb06188_631_445
Clutter Sets
• A – Type C6 clutter; 120 chips
• B – Type C3, C4, and C5 clutter; 384 chips
• C – Type C2 clutter; 310 chips
• D – Type C1 clutter; 345 chips
Initial Batch Training Data
• Target:
  – T72, SN 132,
  – 17 deg depression,
  – 72 chips, randomly selected
  – Defined by a list of image numbers
• Clutter:
  – Set A clutter chips,
  – 17 deg depression,
  – 72 chips, randomly selected
  – Defined by a list of image numbers with row/column of chip center

See readme.txt for actual image lists
Mission Menu Options
• Mission Definition
  – Target Set
  – Clutter Set (nontargets)
  – Confuser Set
  – Prior probabilities (Tgt, Confuser, Clutter)
  – Total number of images in the mission
Target / Nontarget
• AdaptSAPS Version 1.0 encourages consideration of one particular definition of "target"
  – i.e., the Target / Nontarget assignment is coded in create_DB.m; future versions may make this easier to change
• The following are only used as Targets
  – 2s1_gun, bmp2_tank, brdm2_truck, btr60_transport, btr70_transport, t62_tank, t72_tank, zsu23-4_gun
• The following are only used as Nontargets
  – d7_bulldozer, clutter, slicey, zil131_truck
Mission Menu Options (con't)
• Prior Probabilities
  – The numbers are the probabilities of targets, confusers, and clutter chips
  – e.g., 0.4, 0.1, 0.5
  – Prior probabilities are:
    • Target – first number (e.g., 0.4)
    • Nontarget – sum of the second and third numbers (e.g., 0.6)
• Total Number of Images per Mission
  – Since quartiles are scored, multiples of 4 are convenient
  – Basic missions all have 120 images, but work with larger numbers of images (even thousands) is also of interest
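As an illustration of how the priors translate into a mission's image stream, the sketch below draws chip categories for a 120-image mission with priors 0.4 / 0.1 / 0.5. The Mission Definition script shipped with AdaptSAPS builds its image lists from its own parameter tables, so this is only a stand-in showing how the numbers are used.

    % Illustrative draw of chip categories for one mission (not the actual
    % Mission Definition script). Priors: target, confuser, clutter.
    priors  = [0.4, 0.1, 0.5];
    labels  = {'target', 'confuser', 'clutter'};
    nImages = 120;                          % multiple of 4, so quartiles score cleanly
    cdf     = cumsum(priors);
    category = cell(1, nImages);
    for k = 1:nImages
        category{k} = labels{find(rand <= cdf, 1)};   % sample a category per chip
    end
    % For scoring, targets count as Target and confusers/clutter as Nontarget,
    % so Pr(Target) = 0.4 and Pr(Nontarget) = 0.1 + 0.5 = 0.6 in this mission.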
Basic Missions

Table: parameters of the pre-defined Missions 1-10.
Missions
• Notes for all Missions
  – We include 15-17 deg. depression and exclude >17 deg. throughout
  – We don't have articulation variants at the included depression angles
  – We're assuming that the Collection is not a significant OC
  – The offline training data is not considered to be a "mission", but Mission 1 (with similar OCs) is a Mission. Adaptation is desired on Mission 1.
  – The pre-defined Missions (1-10) are of interest, but a given user's approach may suggest other, more appropriate, missions; those are of interest also.
    • e.g., a particular approach may focus on version variants only, or use many more images per mission, or ...
Methodology – Tools

Figure: block diagram – Mission Definition and MSTAR Data feed AdaptSAPS, which exchanges data with the SUT and produces MOPs.

See readme.txt for tool installation, set-up, and execution
AdaptSAPS Consists of ...
• This briefing
• The MSTAR Public Release data
  – Available from SDMS at https://www.mbvlab.wpafb.af.mil/public/sdms/datasets/mstar/overview.htm
  – Includes MSTAR Clutter, MSTAR Targets, MSTAR/IU T-72 Variants, and MSTAR/IU Mixed Targets
• Batch Training Set
  – Defined in readme.txt; lists target and clutter chip identifiers
• Tools
  – Documentation in readme.txt
  – Installation and setup
    • Clutter chip generation from full-scene clutter images
    • Database generation for target, confuser, and clutter Operating Conditions
    • Mission Definition – Matlab script for generating image lists from the parameters for enumerated missions
    • Spreadsheet with Clutter Characterization information – Clutter.xls
  – Execution, including
    • Main (run_missions.m)
    • Example SUT (egSutInit.m, egSutExploit.m, egSutAdapt.m)
    • Server of test images and truth (sarOracle.m)
    • Performance Measures (getMOPs.m)
Reporting
• Publications utilizing the AdaptSAPS challenge problem are encouraged to:
  – Acknowledge the AdaptSAPS SDMS web site
  – Include the missions defined here as examples, in the sequence order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
  – Include other user-defined missions and orders of missions
  – Include MOPA, E, and D (as reported by AdaptSAPS) for each mission
  – Describe the methods used and the extent to which development was done outside the AdaptSAPS set-up
• Because the data is not sequestered, the AdaptSAPS process attempts to force adaptation by controlling the presentation of information, but the lack of sequestration remains a concern. The legitimacy of adaptive performance claims may be best supported by a description of the approach with sufficient detail to allow duplication of results.
Future Version Considerations
• Exploitation Model Implementation
  – Including information about priors
• Synthetic Effects
  – Note: synthetic effects apply to Target, Confuser, and Clutter data
  – Noise Level
  – Shadow
• Confidence Intervals for MOPA and its components
• Methodology
  – May not provide an initial batch training set
  – May provide more detailed truth (e.g., target type, aspect, ...)
  – May score more detailed reports (e.g., target type, aspect, ...)
  – May not provide any truth (i.e., unsupervised)
  – May provide imagery / truth on a predetermined schedule rather than on demand
  – May provide imagery in sets rather than as individual images

Please provide suggestions for Version 2.0