POSE ESTIMATION FOR NON-COOPERATIVE SPACECRAFT RENDEZVOUS USING CNN


POSE ESTIMATION FOR NON-COOPERATIVE SPACECRAFT RENDEZVOUS USING CNN
Presented by Ryan McKennon-Kelly
Sharma, Sumant, Connor Beierle, and Simone D'Amico. "Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Convolutional Neural Networks," September 19, 2018. https://arxiv.org/abs/1809.07238


POSE ESTIMATION VIA MONOCULAR VISION

Motivation:
• On-board determination of the pose (i.e. relative position and attitude) of a target spacecraft is highly enabling for future on-orbit servicing missions
• Monocular systems (i.e. a single camera) are inexpensive and often of low SWaP (size, weight, and power), and therefore are easily integrated into spacecraft
• Current approaches depend on classical image-processing techniques to identify features, which often must be 'hand engineered'

Critical issues:
1) Robustness to adverse illumination conditions
2) Scarcity of the datasets required for training and benchmarking
3) Computational complexity: traditional algorithms evaluate a large number of pose hypotheses, which may be infeasible on-board

GEORGE MASON UNIVERSITY
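Pose accuracy in this literature is typically scored separately in translation and rotation. As a rough illustration (not the authors' code; the (w, x, y, z) quaternion convention and function names are assumptions for the sketch), the two standard error metrics can be computed as:

```python
import math

def translation_error(t_est, t_true):
    """Euclidean distance between estimated and true relative position vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_true)))

def rotation_error_deg(q_est, q_true):
    """Angular distance in degrees between two unit quaternions (w, x, y, z).

    The absolute value handles the q / -q double cover of rotations."""
    dot = abs(sum(a * b for a, b in zip(q_est, q_true)))
    dot = min(1.0, dot)  # guard against floating-point overshoot of acos' domain
    return math.degrees(2.0 * math.acos(dot))
```

For example, a quaternion representing a 90-degree rotation about one axis scores a 90-degree rotation error against the identity quaternion.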


METHODS

• Problem: need to determine the attitude and position of the camera reference frame C with respect to the body frame B of the target
• Proposed solution: train a CNN for this problem
• Challenge: manual collection and labeling of large amounts of space imagery is extremely difficult
• Approach:
  1. Create a pipeline for automated generation and labeling of synthetic space imagery
  2. Leverage transfer learning, which pre-trains the CNN on an existing dataset (ImageNet)
  3. Develop and explore five separate networks against seven separate datasets
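Framing pose estimation for a CNN classifier implies discretizing the continuous pose space into a finite set of labels. A minimal sketch of one common way to generate roughly uniform viewpoint classes (a Fibonacci lattice on the sphere; this is an illustrative discretization, not necessarily the paper's actual label grid):

```python
import math

def fibonacci_sphere(n):
    """Return n roughly uniformly spaced points on the unit sphere.

    Each point can serve as one discrete viewing-direction class for a
    classification-style pose network (illustrative discretization only)."""
    points = []
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))  # ~2.3999 rad
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # evenly spaced heights in (-1, 1)
        r = math.sqrt(1.0 - z * z)             # radius of the circle at height z
        theta = golden_angle * i               # spiral around the axis
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points
```

Every synthetic image can then be labeled with the index of the nearest lattice point to its true viewing direction.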


SYNTHETIC IMAGES GENERATED W/ KNOWN POSE LABELS
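Synthetic rendering makes labeling free: because the generator chooses the target's pose relative to the camera, every rendered image's label is known exactly. A toy pinhole-projection sketch of that idea (the camera parameters and function are hypothetical, not the pipeline's actual renderer):

```python
def project_point(p_body, R, t, f, cx, cy):
    """Project a target body-frame point into the image plane.

    R is a 3x3 body-to-camera rotation matrix (rows of 3-tuples), t the
    camera-frame translation, f the focal length in pixels, (cx, cy) the
    principal point. Given a known pose (R, t), the pixel location of any
    target feature follows directly, so labels come for free."""
    # Transform the point into the camera frame: p_cam = R @ p_body + t
    pc = [sum(R[i][j] * p_body[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection onto the image plane
    u = f * pc[0] / pc[2] + cx
    v = f * pc[1] / pc[2] + cy
    return u, v
```

For example, with an identity rotation, a target point one meter off-axis at ten meters range lands a predictable number of pixels from the principal point.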


NETWORKS DEVELOPED AND EXPERIMENTAL RESULTS



"NOTHING'S PERFECT" – COMPARISON OF HIGH VS. LOW CONFIDENCE SOLUTIONS

(Figure: example high-confidence solutions alongside low-confidence solutions)
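One simple way to picture the high- vs. low-confidence split is thresholding the classifier's softmax output: a solution is "high confidence" when the winning class clearly dominates. The threshold value and helper names below are illustrative assumptions, not the paper's actual criterion:

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify_with_confidence(logits, threshold=0.5):
    """Return (best_class, probability, is_high_confidence).

    A prediction counts as high confidence when the top class's softmax
    probability exceeds the (assumed) threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best], probs[best] >= threshold
```

A sharply peaked score vector yields a high-confidence call, while nearly uniform scores fall below the threshold.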


CONCLUSIONS

• Size of the training set correlates with accuracy
  • Warrants generation of larger synthetic datasets by varying the target location and orientation
• Classification networks trained with some noise did better on test images, even with high Gaussian "white noise"
  • Proves that as long as the sensor noise can be modeled or removed via post-processing, the CNN has good potential for success in real-world applications
• All networks were trained using transfer learning, which only required training the last few layers
  • Proves several low-level features of spaceborne imagery are also present in terrestrial objects
  • Also proves there is no need to train a network completely from scratch
• Accuracy of the network trained on the largest dataset (75k images with 3000 pose labels) was higher than that of classical feature detection algorithms (3× as many "high confidence" solutions)
  • Shows the promise of the methods

Several caveats represent potential for future development:
• Networks need to be tested with on-orbit imagery
• A larger dataset is required for a comprehensive comparison of CNN-based pose determination vs. conventional techniques
• Performance with other spacecraft or orbit regimes is untested
• Ties of pose estimation to navigation performance need to be explored

GEORGE MASON UNIVERSITY
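The noise-robustness finding above rests on augmenting training images with Gaussian "white noise". A toy stand-in for that augmentation (the flat pixel-list representation, clamping, and sigma are illustrative assumptions, not the paper's actual procedure):

```python
import random

def add_gaussian_noise(pixels, sigma, seed=None):
    """Add zero-mean Gaussian noise to a flat list of pixel intensities.

    Intensities are assumed to lie in [0, 255] and are clamped back into
    that range after perturbation. A seed makes the augmentation
    reproducible for debugging."""
    rng = random.Random(seed)
    return [min(255.0, max(0.0, p + rng.gauss(0.0, sigma)))
            for p in pixels]
```

Training on images perturbed this way, at varying sigma, is one simple way to expose a network to the kind of sensor noise it would meet on-orbit.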