sUAS Remote Sensing Course
Principal Investigator(s)
Carl Salvaggio
Research Team Members
Timothy Bauch, Lucy Falcon, Joseph Klesczewski, Sophia Kourian, David Lewis, Baabak Mamaghani, Nina Raqueño, Donald Shultz
Project Description
In its second offering, this year-long course exposed the students to the real-world considerations that must be addressed in the design of a new, or the use of an existing, imaging system as part of a small unmanned aircraft system (sUAS). This year the students took an in-depth look at the MicaSense RedEdge sensor used on board the SIRA/DIRS MX-1 platform. The students performed an end-to-end radiometric characterization/calibration and error analysis, as well as a geometric characterization, of the five individual sensors that compose this multi-camera system.
The radiometric characterization included:
- Radiometric calibration from normalized digital count to radiance (allowing the camera to be operated in auto-exposure mode, adjusting both its integration time and gain)
- Dark noise characterization over the full operational temperature range of the camera (32° to 100°F) at each sensor gain setting
- Determination of a vignette correction for flat fielding each camera (across all integration times and gains); a minimal sketch of the dark and vignette corrections appears after this list
- Determination of the standard errors associated with the integration time, the gain setting, the radiometric calibration coefficients, and normalized digital count
- Formulation of a partial derivative-based error computation to determine the radiance error expected for each pixel in each multispectral band image (illustrated in the sketch following the next paragraph)
- Determination of the operational lighting levels at which this camera could be expected to perform properly (without exhibiting significant degradation in operation)
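As an illustration of the dark-noise and vignette items above, the following is a minimal sketch of how a raw frame might be corrected before calibration. It is not the students' code or MicaSense's processing chain; the function and array names are hypothetical, and it assumes the dark frame matches the camera's gain and temperature condition and that the flat field comes from a uniform source.

    import numpy as np

    def dark_and_vignette_correct(raw, dark, flat):
        """Dark-frame subtraction followed by a flat-field (vignette) correction.

        raw  : raw digital counts from one RedEdge band
        dark : dark frame acquired at the same gain and sensor temperature
        flat : image of a uniform source (e.g., an integrating sphere)
        """
        # Remove the dark offset characterized for this gain/temperature
        signal = raw.astype(float) - dark.astype(float)

        # Build a vignette map normalized to its brightest pixel, so dividing
        # by it flattens the radial falloff toward the image corners
        vignette = flat.astype(float) - dark.astype(float)
        vignette /= vignette.max()

        return signal / vignette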
The students developed their own normalized digital count to radiance methodology and computed the expected errors associated with their method and the calibration method provided by the manufacturer.
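To give a generic sense of such a conversion and its first-order error propagation, a minimal sketch follows. The linear model L = c * dn_norm / (gain * t_int) is an assumption for illustration only, not the students' methodology or the manufacturer's calibration, and the coefficient and standard-error names are hypothetical.

    import numpy as np

    def radiance_with_error(dn_norm, gain, t_int, c, sigma):
        """Convert normalized digital count to radiance and propagate errors.

        Assumes the illustrative model L = c * dn_norm / (gain * t_int).
        sigma is a dict of standard errors: {'dn', 'c', 'gain', 't_int'}.
        """
        L = c * dn_norm / (gain * t_int)

        # Partial derivatives of L with respect to each quantity
        dL_ddn = c / (gain * t_int)
        dL_dc = dn_norm / (gain * t_int)
        dL_dg = -c * dn_norm / (gain**2 * t_int)
        dL_dt = -c * dn_norm / (gain * t_int**2)

        # First-order propagation assuming uncorrelated errors, per pixel
        var_L = ((dL_ddn * sigma['dn'])**2 + (dL_dc * sigma['c'])**2 +
                 (dL_dg * sigma['gain'])**2 + (dL_dt * sigma['t_int'])**2)
        return L, np.sqrt(var_L)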
The geometric characterization included:
- The measurement of the system modulation transfer function (MTF) of each camera. MTF was determined using both laboratory and in-field methods. The laboratory approach imaged a knife-edge target through a collimator and applied the standard slant-edge MTF computation. The students implemented their own versions of the slant-edge method (a simplified sketch follows this list) and compared their results to the standard implementation available from Peter Burns. The field approach utilized the sensor on board the SIRA/DIRS MX-1 platform in multiple scenarios: hovering in a stationary position at 100, 200, and 400 feet, as well as in a standard flying configuration at normal operational flight speeds. MTF was computed in the along- and across-track directions using large slant-edge targets deployed on the ground.
- The determination of the sensor’s interior orientation and geometric distortion was carried out for each camera in the array. Two approaches were used: a checkerboard-based calibration and the Australis photogrammetric approach. The students were responsible for implementing the checkerboard analysis code for determining the interior orientation and distortion parameters (a minimal checkerboard calibration sketch also follows this list). The distortion corrections from both methodologies were applied to real-world imagery, and the efficacy of the results was compared using large in-scene straight edges.
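For reference, the following is a highly simplified sketch of the slant-edge idea (edge spread function to line spread function to MTF). It omits the refinements of the standard Burns implementation that the students compared against, such as iterative edge-angle estimation and careful windowing, and all names are illustrative.

    import numpy as np

    def slant_edge_mtf(roi, oversample=4):
        """Estimate MTF from a region of interest containing a slanted edge."""
        rows, cols = roi.shape
        roi = roi.astype(float)

        # Locate the edge in each row (centroid of the row derivative),
        # then fit a line to recover the edge slope
        deriv = np.abs(np.diff(roi, axis=1))
        centers = (deriv * np.arange(cols - 1)).sum(axis=1) / deriv.sum(axis=1)
        slope, intercept = np.polyfit(np.arange(rows), centers, 1)

        # Project every pixel onto its distance from the fitted edge and bin
        # the values into an oversampled edge spread function (ESF)
        yy, xx = np.mgrid[0:rows, 0:cols]
        dist = xx - (slope * yy + intercept)
        bins = np.round(dist * oversample).astype(int)
        bins -= bins.min()
        counts = np.bincount(bins.ravel())
        esf = np.bincount(bins.ravel(), weights=roi.ravel()) / np.maximum(counts, 1)

        # Differentiate to the line spread function (LSF), window, and transform
        lsf = np.diff(esf) * np.hanning(len(esf) - 1)
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]

        # Spatial frequency in cycles per pixel of the original sampling
        freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
        return freq, mtf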
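The checkerboard approach is commonly built on OpenCV's camera-calibration routines; the sketch below outlines that general workflow rather than reproducing the students' implementation or the Australis processing. The image folder, file pattern, and the 9 x 6 interior corner count are assumptions, and refinements such as cv2.cornerSubPix are omitted for brevity.

    import glob
    import cv2
    import numpy as np

    # Hypothetical inputs: a folder of checkerboard images from one RedEdge
    # camera and the interior corner count of the printed target
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("checkerboard/*.tif"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Interior orientation (camera matrix) and radial/tangential distortion
    # terms; assumes at least one checkerboard image was found above
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    print("Camera matrix:\n", K)
    print("Distortion coefficients:", dist.ravel())

    # Apply the correction to a frame for comparison against straight edges
    undistorted = cv2.undistort(gray, K, dist)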
Project Status
The Fall and Spring offerings of the class concluded in May 2019. The methods and results obtained from the class are currently being summarized in an article to be submitted to MDPI Sensors and, of course, shared with the manufacturer of the camera system studied, MicaSense.