Optical imagery provides a powerful means of autonomous navigation for spacecraft in the vicinity of Solar System bodies, where the communication delay may be much longer than the dynamical timescales. Cameras are now routinely included among the suite of sensors used to guide exploratory spacecraft, with information extracted from the images fused with that from other sensors to estimate the spacecraft state and plan orbital maneuvers. The information content of a typical image is orders of magnitude larger than that supplied by other sensors, and sophisticated processing algorithms are required to extract navigation information and reduce the raw pixel data to a manageable size. It is common to offload this task to a separate image processing system rather than integrate it directly with the GNC system. We have developed an image processing algorithm that identifies and tracks surface features through a sequence of images, in a manner robust to changes in viewing angle, illumination, and scale such as might occur during descent and landing onto a planetary surface or orbit around a small body. It does not rely on the presence of any particular type of morphological feature, and may be used for absolute navigation provided a suitable map of the terrain is available. We will describe the operation of this algorithm, present results based on simulated image data generated by the PANGU tool, and discuss further ways in which imaging data can be used to support navigation.
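The abstract's core idea, tracking the same surface features across consecutive frames despite small appearance changes, can be illustrated with a minimal sketch. The paper's actual algorithm is not described here, so everything below is an assumption: a generic nearest-neighbour matching of feature descriptor vectors between two frames, standing in for whatever descriptor and matching scheme the authors use.

```python
# Illustrative sketch only: greedy nearest-neighbour matching of feature
# descriptors between two frames. The descriptors, threshold, and matching
# strategy are hypothetical, not the paper's method.
import math

def match_features(prev, curr, max_dist=0.5):
    """Match each descriptor in `prev` to its nearest unused one in `curr`."""
    matches = []
    used = set()
    for i, d1 in enumerate(prev):
        best_j, best = None, max_dist
        for j, d2 in enumerate(curr):
            if j in used:
                continue
            dist = math.dist(d1, d2)  # Euclidean distance in descriptor space
            if dist < best:
                best, best_j = dist, j
        if best_j is not None:
            used.add(best_j)
            matches.append((i, best_j))
    return matches

# Two "frames" of toy 3-D descriptors; the second frame is a reordered,
# slightly perturbed copy, mimicking small illumination/viewpoint changes.
frame1 = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
frame2 = [(0.05, 0.95, 0.0), (0.0, 0.02, 0.98), (0.97, 0.0, 0.01)]
print(match_features(frame1, frame2))  # → [(0, 1), (1, 2), (2, 0)]
```

A real system would use descriptors designed for invariance to scale and illumination and would reject ambiguous matches; this toy version only shows the tracking-by-matching structure.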
|Title of host publication||8th International ESA Conference on Guidance, Navigation & Control Systems|
|Subtitle of host publication||GNC 2011|
|Publication status||Published - 2011|
|Event||8th International ESA Conference on Guidance, Navigation & Control Systems - Karlovy Vary, Czech Republic|
Duration: 5 Jun 2011 → 10 Jun 2011
|Conference||8th International ESA Conference on Guidance, Navigation & Control Systems|
|Period||5/06/11 → 10/06/11|