Posts Tagged: accuracy


What is the accuracy of small UAVs in traffic measurement?

A new research paper, “How accurate are small drones for measuring microscopic traffic parameters?”, published in the journal Transportation Letters, can provide an answer. Adam Babinec, one of the main IT developers behind the DataFromSky solution and a co-author of the article, which was conducted by the University of Athens and the Brno University of Technology, says:
“With this paper, we wanted to examine the potential of using sUAVs as part of the ITS infrastructure as a way of extracting naturalistic trajectory data from aerial video footage of a low-volume intersection and a pedestrian passage. Moreover, we have examined the accuracy of speed data collected from a drone compared to data collected from an On-Board Diagnostics II (OBD-II) device.”
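Comparing two speed traces like this typically comes down to aligning the samples in time and computing error statistics. The following sketch is purely illustrative (it is not the paper's method, and all numbers are made up) but shows the kind of comparison involved:

```python
# Illustrative sketch: comparing a drone-derived speed trace against
# OBD-II readings sampled at matching timestamps. All values are
# invented for demonstration; this is not the paper's actual data.
import math

def rmse(a, b):
    """Root-mean-square error between two equally long speed series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def mean_bias(a, b):
    """Mean signed difference (first series minus second), in km/h."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

# Hypothetical speed samples in km/h at matching timestamps.
drone_speed = [30.2, 31.0, 29.5, 28.8, 30.1]
obd_speed   = [30.0, 30.6, 29.9, 29.0, 30.0]

print(f"RMSE: {rmse(drone_speed, obd_speed):.2f} km/h")
print(f"Bias: {mean_bias(drone_speed, obd_speed):.2f} km/h")
```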

The results show that the distortion-correction and calibration phase of the video processing is of crucial importance for a high level of accuracy, in terms of stabilizing and geo-referencing the video. In order to extract accurate data from drone video footage, each aerial video must be stabilized to compensate for camera movement, since sudden vibrations or small wind gusts can lead to large errors on the ground.
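To see why small camera movements matter so much, consider a minimal sketch of geo-referencing: a 3×3 homography maps pixel coordinates onto the ground plane, and stabilisation amounts to keeping that mapping consistent frame to frame. The homography below is a made-up toy example (roughly 0.05 m per pixel, no rotation), not DataFromSky's actual calibration:

```python
# Toy geo-referencing sketch: a 3x3 homography H maps pixel coordinates
# (u, v) to ground-plane coordinates in metres. The matrix is invented
# for illustration: about 0.05 m per pixel, no rotation or tilt.
def pixel_to_ground(H, u, v):
    """Apply homography H (3x3 nested list) to pixel (u, v)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # ground coordinates in metres

H = [[0.05, 0.0, 0.0],
     [0.0, 0.05, 0.0],
     [0.0, 0.0, 1.0]]

gx, gy = pixel_to_ground(H, 200, 100)
print(gx, gy)  # -> 10.0 5.0

# An uncompensated 5-pixel jitter (e.g. a wind gust) shifts the
# estimated ground position by a quarter of a metre:
gx2, _ = pixel_to_ground(H, 205, 100)
print(round(gx2 - gx, 2))  # -> 0.25
```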


Picture: a lightweight hexacopter (left), study area – low volume intersection in University of Athens campus (right)

The research paper is accessible on this page.

The accuracy of the DataFromSky system itself was examined in another research paper, created in collaboration with the Faculty of Information Technology and the Faculty of Civil Engineering at the Brno University of Technology and published in the International Journal of Transportation Science and Technology; for more information, click here.


Can DataFromSky be accurate enough for your application?

Our new scientific article answers… Since the inception of DataFromSky, we have been actively collaborating with academics to analyse traffic and design safer transportation systems for a better future. Besides developing the processes to extract and analyse vehicle trajectories from aerial videos, we have also aimed to analyse the accuracy of our approach itself.

In collaboration with the Faculty of Information Technology and the Faculty of Civil Engineering at the Brno University of Technology, we are working to analyse the accuracy of object position estimation and the accuracy of extracted trajectories and their properties when estimated from low-flying UAVs (see our news from a year ago).

Last year, we developed a tool to assess the accuracy of an object position estimation algorithm similar to the one used in DataFromSky. We used this tool to estimate the accuracy of object position estimation from aerial imagery captured by a general-purpose drone in various scenes and compared the results with spatial data collected with an industrial-grade GPS sensor. A part of this research and its results has recently been published in a special issue on Unmanned Aerial Vehicles of the peer-reviewed scientific journal International Journal of Transportation Science and Technology and is already available as a pre-press preview at the following link: http://dx.doi.org/10.1016/j.ijtst.2017.02.002

The article provides insight into the nature of the accuracy of position estimation and the properties of uncertainty propagation through the algorithm with respect to various aspects of the camera, the scene, and their setup. An additional contribution of the article is a guiding tool for properly choosing and setting the drone pose and camera to achieve the desired accuracy of position estimation of objects in the traffic scene, prior to capturing the scene itself.
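A basic quantity behind this kind of pre-flight planning is the ground sampling distance (GSD): how many metres of ground one pixel covers, which bounds the achievable position accuracy at a given altitude. The sketch below uses the standard photogrammetric GSD formula with invented camera parameters, not values from the paper:

```python
# Back-of-the-envelope planning sketch: ground sampling distance (GSD)
# for a nadir-looking camera. Camera parameters below are hypothetical.
def gsd_metres(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Metres of ground covered by one pixel, camera looking straight down."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# Hypothetical 4k camera: 13.2 mm sensor width, 8.8 mm lens, 3840 px wide.
for altitude in (50, 100, 150):
    g = gsd_metres(13.2, 8.8, altitude, 3840)
    print(f"{altitude} m altitude -> {g * 100:.1f} cm/pixel")
```

The useful direction is the inverse: given a required accuracy, the formula tells you the maximum altitude at which the camera can still resolve it.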

Picture: “Spatial visualisation of the resulting position estimation error in metres caused by non-linear deformation, across the camera field of view. The 4 red crosses represent the images of the landmarks. The camera is situated at position (0,0,100)[m] looking directly down.”


Accuracy and error – model and measurements

If you have been following our work for a while, you have probably noticed that so far we have not provided any exact numbers on the accuracy and errors of the measurements. However, a method providing quantitative data without any quantification of its precision is somewhat dubious. Thus it should come as no surprise that we have been working on these matters zealously behind the scenes.

So, what do we have now? We have created a model of the whole process which generates our data, and we are currently working on a set of measurements to validate it.

Model

We have created a mathematical model of the whole process that generates our data. This includes the physical reality at the target, the complete optical system, and the digital processing. The following error sources were considered:

  • landmark location errors (in meters)
  • landmark pixel uncertainty (in pixels)
  • camera intrinsic parameters (in pixels)
  • target pixel uncertainty (in pixels)
  • air turbulence (in pixels – included in target/landmark pixel uncertainty)
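One common way to propagate error sources like these through a model is a Monte Carlo simulation: perturb the pixel measurements, map each perturbed sample to the ground plane, and look at the spread of the results. The sketch below is a deliberately simplified stand-in for our model, with a made-up ground scale (0.05 m/px) and a made-up combined pixel uncertainty (1.5 px):

```python
# Simplified Monte Carlo sketch of pixel-to-ground error propagation.
# The ground scale and pixel noise below are assumed example values,
# not parameters of the actual DataFromSky model.
import random
import statistics

random.seed(0)

METRES_PER_PIXEL = 0.05   # assumed nadir-view ground scale
PIXEL_SIGMA = 1.5         # assumed combined pixel uncertainty (std. dev.)

def ground_error_samples(n=10_000):
    """Distances in metres between true and perturbed ground positions."""
    out = []
    for _ in range(n):
        du = random.gauss(0.0, PIXEL_SIGMA) * METRES_PER_PIXEL
        dv = random.gauss(0.0, PIXEL_SIGMA) * METRES_PER_PIXEL
        out.append((du ** 2 + dv ** 2) ** 0.5)
    return out

errors = ground_error_samples()
print(f"mean ground error: {statistics.mean(errors):.3f} m")
```

The full model additionally has to account for camera intrinsics, landmark placement error, and the viewing geometry, which is what makes the incidence angle and slant range appear in the results below.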

Using this model, we were able to relate many variables of the setup: achieved accuracy, distance, covered area, incidence angle, slant range… The results were encouraging. The following picture shows the area covered when using a 4k camera, depending on the incidence angle and slant range, assuming a maximal error of 0.5 meters:

Accuracy for a 4k camera, depending on incidence angle and slant range

What can one read from the chart? As you can see, an incidence angle of about 40° is a reasonable cutoff value. A slant range of 140 meters at 0° (i.e., directly overhead) gives the best value. For an HD camera, the area covered is a quarter of that for 4k, and the optimal altitude at zenith is halved to 70 meters.
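The 4k-versus-HD scaling can be sanity-checked with simple arithmetic: for a fixed ground-error budget, the usable slant range scales with the horizontal pixel count, and the covered ground area scales with the square of that range. This is of course a simplification of the full model:

```python
# Quick sanity check of the 4k-vs-HD scaling for a fixed 0.5 m error
# budget: usable range scales with horizontal pixels, area with range².
RANGE_4K_M = 140            # max slant range at nadir for 4k (from the text)
PX_4K, PX_HD = 3840, 1920   # horizontal resolutions

range_hd = RANGE_4K_M * PX_HD / PX_4K
area_ratio = (range_hd / RANGE_4K_M) ** 2

print(f"HD max range at nadir: {range_hd:.0f} m")   # -> 70 m
print(f"HD area / 4k area: {area_ratio:.2f}")       # -> 0.25
```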

We can also overlay the model’s predicted accuracy onto real pictures – that is, display the achieved accuracy along with the footage. We hope to eventually incorporate this functionality into DataFromSky Viewer, so that you can check for yourself. For now, we have this picture from the Randers video (in HD). The numbers are errors in meters, with the respective isolines displayed. A 4k video would yield half the error.

Achieved accuracy example in a real scene

Validation measurements

In order to validate the model, we made a set of measurements at a suitable place near Popice, a small South Moravian village known for the vineyards in the area.

Popice area used for measurements

We placed a regular grid of 64 landmarks in an 8×8 square pattern, so that a side of the square was exactly 100 meters. The landmarks were positioned using a professional GPS receiver in differential mode, achieving a placement accuracy of about 5 cm.
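For an 8×8 grid whose outer square has a 100 m side, neighbouring landmarks are 100/7 ≈ 14.29 m apart. A small sketch of that layout, with coordinates in metres relative to one corner:

```python
# Landmark layout sketch: an 8x8 grid whose outer square has a 100 m
# side, so neighbouring landmarks are 100/7 ≈ 14.29 m apart.
SIDE_M = 100.0
N = 8
STEP = SIDE_M / (N - 1)   # ~14.29 m between neighbours

landmarks = [(round(col * STEP, 2), round(row * STEP, 2))
             for row in range(N) for col in range(N)]

print(len(landmarks))   # -> 64
print(landmarks[0])     # -> (0.0, 0.0)
print(landmarks[-1])    # -> (100.0, 100.0)
```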

Split picture: calibration landmark, Jiří Apeltauer holding the GPS receiver

Then, we set up a UAV to fly around and take a video – in 4k, of course. Here is the trajectory projected onto the ground, looking at the area from the west.

UAV trajectory projected onto ground

We simply imported the recorded video into DataFromSky and added the landmarks as tracked objects. You can’t see them in the picture because the red ID label “pin heads” are larger than an A4 sheet at that resolution, but they are there.

Landmarks added as tracked objects in DataFromSky

Results

We are still working on processing the results. So far, the agreement between the model and the measurements is very good, and the model output suggests better accuracy than we had hoped for!

We will publish the results in an academic journal paper. Hopefully, the paper will be finished in a few days and we will be able to share more!

Since this text is about accuracy, we can hint that there is more to come: we measured the vehicle position using a vehicle-mounted dGPS as well, so there is another set of data to work on.
