If you have been following our work for a while, you have probably noticed that so far we have not provided any exact numbers on the accuracy and errors of our measurements. However, a method providing quantitative data without any quantification of its precision is somewhat dubious. Thus it should come as no surprise that we have been working on these matters zealously behind the scenes.
So, what do we have now? We have created a mathematical model of the whole process that generates our data, and we are currently working on a set of measurements to validate it. The model covers the physical reality at the target, the complete optical system, and the digital processing. The following error sources were considered:
- landmark location errors (in meters)
- landmark pixel uncertainty (in pixels)
- camera intrinsic parameters (in pixels)
- target pixel uncertainty (in pixels)
- air turbulence (in pixels – included in target/landmark pixel uncertainty)
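To give an intuition for how independent error sources of this kind can be combined, here is a minimal Monte Carlo sketch in Python. It covers only a subset of the sources listed above, and all the numbers in it (standard deviations, ground sampling distance) are made up for the example; they are not the actual model parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo samples

# Illustrative standard deviations, not the actual model parameters.
sigma_landmark_m = 0.05          # landmark location error (meters)
sigma_landmark_px = 0.5          # landmark pixel uncertainty (pixels)
sigma_target_px = 0.5            # target pixel uncertainty (pixels)
ground_sampling_m_per_px = 0.06  # hypothetical ground sampling distance

# Draw independent error samples; pixel errors are projected to meters
# via the ground sampling distance.
e_landmark = rng.normal(0, sigma_landmark_m, N)
e_landmark_px = rng.normal(0, sigma_landmark_px, N) * ground_sampling_m_per_px
e_target_px = rng.normal(0, sigma_target_px, N) * ground_sampling_m_per_px

total = e_landmark + e_landmark_px + e_target_px
print(f"combined sigma ~ {total.std():.3f} m")
```

For independent Gaussian sources like these, the sampled standard deviation converges to the analytic root-sum-of-squares of the individual sigmas; the Monte Carlo form simply generalizes to the nonlinear parts of the real pipeline.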
Using this model, we were able to relate many variables of the setup to one another – achieved accuracy, distance, covered area, incidence angle, slant range… The results were encouraging. The following picture shows the area covered when using a 4k camera, depending on incidence angle and slant range, assuming a maximal error of 0.5 meters:
What can one read from the chart? As you can see, an incidence angle of about 40° is a reasonable cutoff value. A slant range of 140 meters at 0° (i.e. directly overhead) gives the best value. For an HD camera, the area covered is a quarter of that for 4k, and the optimal altitude at zenith is halved – 70 meters.
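The trends in the chart can be reasoned about with simple geometry: the ground footprint of one pixel grows linearly with slant range, and is stretched by foreshortening at oblique incidence angles. A rough sketch of that relationship – the focal length in pixels is a made-up placeholder here, and the real camera model is considerably more involved:

```python
import math

def ground_error_m(slant_range_m, incidence_deg, pixel_error_px,
                   focal_px=2000.0):
    """Rough per-pixel ground error: ground sampling distance scaled
    by foreshortening. focal_px is a hypothetical focal length in
    pixels, not the actual camera calibration."""
    gsd = slant_range_m / focal_px  # meters per pixel when overhead
    return pixel_error_px * gsd / math.cos(math.radians(incidence_deg))

# Error grows with slant range and blows up at steep incidence angles:
print(ground_error_m(140, 0, 1.0))   # directly overhead
print(ground_error_m(140, 40, 1.0))  # near the 40° cutoff
```

This also explains the HD vs. 4k comparison: halving the resolution doubles the ground sampling distance, so the same metric error budget is reached at half the slant range, and the covered area shrinks to a quarter.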
We can also overlay the model’s predicted accuracy onto real pictures – that is, display the achieved accuracy along with the footage. We hope to eventually incorporate this functionality into DataFromSky Viewer, so that you can check for yourself. For now, we have this picture from the Randers video (in HD). The numbers are errors in meters, with the respective isolines displayed. A 4k video would yield half the error.
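Drawing such isolines is straightforward once an error value is available for every image position. The following sketch contours a purely invented error surface with matplotlib; the actual overlay renders the model output over the video frame instead.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical error surface over an HD frame: error grows with
# distance from the image centre (purely illustrative).
w, h = 1920, 1080
x, y = np.meshgrid(np.linspace(0, w, 200), np.linspace(0, h, 120))
error_m = 0.1 + 0.4 * np.hypot(x - w / 2, y - h / 2) / (w / 2)

fig, ax = plt.subplots()
cs = ax.contour(x, y, error_m, levels=[0.2, 0.3, 0.4, 0.5])
ax.clabel(cs, fmt="%.1f m")  # label each isoline with its error
fig.savefig("isolines.png")
```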
In order to validate the model, we made a set of measurements at a suitable place near Popice, a small Southern Moravian village known for the vineyards in the area.
We placed a regular grid of 64 landmarks in an 8×8 square pattern, so that each side of the square was exactly 100 meters. The landmarks were positioned using a professional GPS receiver in differential mode, achieving a placement accuracy of about 5 cm.
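For intuition, the landmark layout can be generated in a couple of lines. The coordinates below are in a local metric frame, not the actual surveyed GPS positions:

```python
import numpy as np

side_m = 100.0
n = 8                                  # 8×8 = 64 landmarks
coords = np.linspace(0.0, side_m, n)   # ~14.29 m between neighbours

xs, ys = np.meshgrid(coords, coords)
grid = np.column_stack([xs.ravel(), ys.ravel()])
print(grid.shape)  # (64, 2)
```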
Then we set up a UAV to fly around and take a video – in 4k, of course. Here is the trajectory projected onto the ground, looking at the area from the west.
We simply imported the recorded video into DataFromSky and added the landmarks as tracked objects. You can’t see them in the picture because the red ID label “pin heads” are larger than an A4 sheet at that resolution, but they are there.
We are still working on processing the results. So far, the agreement between the model and the measurements is very good, and the model output suggests better accuracy than we had hoped for!
We will publish the results in an academic journal paper. Hopefully, the paper will be finished in a few days and we will be able to share more!
Since this text is about accuracy, we can hint that there is more to come: we also measured the vehicle position using a vehicle-mounted dGPS, so there will be another set of data to work on.