AirGon Happenings

I am pleased to announce that AirGon's request for an amendment to its Section 333 exemption for flying commercial small Unmanned Aerial Systems (sUAS) was approved in April.  The amendment adds all current and future 333-approved aircraft to our exemption; AirGon can now fly any sUAS that has ever been approved by the FAA, as well as all systems approved in the future.  This list currently contains 1,150 different sUAS (AirGon's own AV-900 is number 207 on the list).  This gives us a great deal of flexibility in working with clients; for example, in situations where a glider sUAS is more efficient than a rotorcraft.

The FAA has also recently streamlined the process of obtaining an N number for an sUAS.  Prior to the change, a paper process that required several months was the only option.  Now an online system is available, greatly simplifying the procedure.  Note that this is not the new online registration system for hobby drones but rather the system used for obtaining an N number for a manned aircraft (if you are confused, join the club!).  Combined with our new 333 amendment, this means we can now get a new aircraft legally operating within days.

We continue to do a lot of work to optimize the accuracy of point clouds derived from dense image matching (DIM).  DIM point clouds are the data of choice for sUAS mapping since they can be generated from low-cost prosumer cameras using standard application software such as Pix4D Mapper or PhotoScan.  The question always remains: how good are these data, really?

It has taken a lot of experimentation and analysis, but we think we have fleshed out a procedure for assuring good absolute vertical accuracy.  It involves the use of Real Time Kinematic (RTK) Global Navigation Satellite System (GNSS) positioning on the sUAS, a local base station that we tie into the national Continuously Operating Reference Station (CORS) network, and the National Geodetic Survey's Online Positioning User Service (OPUS) to "anchor" the project to the network.  We have also discovered that high vertical accuracy cannot be obtained without camera calibration; we typically use an in situ process for this.  We have flown many dozens of sites (primarily mining), giving us a rich set of test data.
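To make the anchoring idea concrete, here is a highly simplified sketch (not AirGon's production workflow): the base station's OPUS-adjusted coordinates define an offset from its autonomous position, and applying that offset ties all of the RTK-derived positions to the network.  All coordinate values below are hypothetical.

```python
# Highly simplified sketch of "anchoring" a project to the network (not
# AirGon's production workflow).  Positions are (E, N, Z) in meters in a
# projected coordinate system; all values are hypothetical.

def network_anchor_offset(base_autonomous, base_opus):
    """Offset from the base's autonomous position to its OPUS-adjusted one."""
    return tuple(o - a for a, o in zip(base_autonomous, base_opus))

def apply_offset(points, offset):
    """Shift every RTK-derived position by the base-station offset."""
    dE, dN, dZ = offset
    return [(e + dE, n + dN, z + dZ) for e, n, z in points]

base_auto = (352100.42, 3882215.87, 171.35)   # autonomous base position
base_opus = (352100.95, 3882216.11, 171.88)   # OPUS-adjusted base position
offset = network_anchor_offset(base_auto, base_opus)

camera_positions = [(352340.10, 3882410.55, 290.12)]
print(apply_offset(camera_positions, offset))
```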

I cannot overemphasize how critical network vertical accuracy is.  Most customers want elevation maps of their sites, usually delivered as contour vector files.  As we all know, a 1 foot contour requires a vertical accuracy of 1/3 of a foot.  This is a very tight requirement!  A three inch vertical bias error over an acre amounts to an error of about 400 cubic yards; this is significant.
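The arithmetic behind that figure is worth making explicit; the following quick check uses nothing beyond standard unit conversions:

```python
# A uniform vertical bias over an area translates directly into a
# volumetric error: bias (ft) x area (sq ft) / 27 (cu ft per cu yd).

SQFT_PER_ACRE = 43560.0
CUFT_PER_CUYD = 27.0

def bias_volume_error(bias_inches, acres=1.0):
    """Cubic yards of volumetric error from a uniform vertical bias."""
    bias_ft = bias_inches / 12.0
    return bias_ft * acres * SQFT_PER_ACRE / CUFT_PER_CUYD

print(round(bias_volume_error(3.0)))  # ~403 cubic yards on one acre
```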

We see a lot of drone companies processing site data with no control and no RTK/PPK.  With scale introduced into the model (many companies do not even do this), one might obtain reasonable difference computations (such as volumes), but the network accuracy is very poor (it comes from the airborne navigation grade GNSS only) and hence the data are of limited use.  We have also discovered that these techniques (where no control and/or RTK/PPK is used) can result in the vertical scale being incorrectly computed, meaning that even differential measurements are not accurate.  Why spend all of the money to collect these data if they are of unknown accuracy?
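To see why a vertical scale error poisons even differential measurements, note that scaling every height by some factor scales every volume computed from those heights by the same factor.  The 2% error and pile size below are hypothetical, for illustration only:

```python
# A vertical scale error multiplies every height, and therefore every
# volume computed from those heights, by the same factor.  The 2% error
# and pile size are hypothetical, for illustration only.

def volume_with_scale_error(true_volume_cuyd, vertical_scale):
    """Volume reported when all heights are scaled by 'vertical_scale'."""
    return true_volume_cuyd * vertical_scale

true_volume = 10000.0                       # cubic yards (hypothetical pile)
reported = volume_with_scale_error(true_volume, 1.02)
print(reported - true_volume)               # 200 cubic yards of pure scale error
```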

A more difficult area that we have studied over the past several years is what I refer to as "conformance": how well does the DIM actually fit the object being imaged?  DIM processing software (again, such as Pix4D and PhotoScan) does a miraculous job of correlating a 3D surface model from highly redundant imagery using the general class of algorithms called Structure from Motion (SfM).  In addition to the obvious areas where SfM fails (deep shadow, thin linear objects such as poles and wires), a lot of subtle errors occur due to the filtering performed by the SfM post-extraction algorithms.  These filtering algorithms are designed to remove noise from the surface model.  Unfortunately, any filtering will also remove true signal, distorting the surface model.

We are working with several of our mining customers to quantify these errors and, once they are characterized, to develop best practices to minimize them, or at least to recognize when and where they occur.  An example of such an analysis is shown in Figure 1.  Here we are analyzing a small pile (roughly outlined in orange) of very coarse aggregate with a volume of about 340 cubic yards.  This site was flown with a very high end manned aircraft LIDAR system and with AirGon's AV-900 equipped with our RTK system.  The DIM was created using Agisoft PhotoScan.  We obtained excellent accuracy as determined by a number of signalized control points (meaning ground targets visible in the imagery) and supplemental topo-only shots.  We used in situ calibration to calibrate the camera (a Sony NEX-5 with a 16 mm pancake lens).

As can be seen in Figure 1, we created a series of cross sections over the test pile.  These cross sections were generated using the Cross Section Point Cloud Task (PCT) in LP360/Topolyst.  This tool drapes cross sections at a user-specified interval, conflating the elevation value from a user-specified point cloud.  We ran the task twice, conflating Z first from the LIDAR point cloud and then from the DIM.  In Figure 1 we have drawn a profile over one of the cross sections, with the result visible in the profile view.  The red cross section is derived from the LIDAR and the green from the DIM.
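Conceptually, the conflation step works something like the sketch below (this is not the LP360 implementation; the point clouds and profile geometry are random stand-ins): each station along the profile line takes its Z from the nearest point in the chosen cloud.

```python
# Conceptual sketch of conflating Z onto cross-section stations from two
# point clouds (this is NOT the LP360 implementation).  The clouds and
# profile geometry below are random stand-ins.

import numpy as np
from scipy.spatial import cKDTree

def conflate_z(profile_xy, cloud):
    """Give each profile station the Z of the nearest cloud point in 2D."""
    tree = cKDTree(cloud[:, :2])
    _, idx = tree.query(profile_xy)
    return cloud[idx, 2]

stations = np.linspace(0.0, 50.0, 101)                 # every 0.5 units
profile_xy = np.column_stack([stations, np.full_like(stations, 10.0)])

lidar_cloud = np.random.rand(5000, 3) * [50.0, 20.0, 5.0]
dim_cloud = np.random.rand(5000, 3) * [50.0, 20.0, 5.0]

z_lidar = conflate_z(profile_xy, lidar_cloud)
z_dim = conflate_z(profile_xy, dim_cloud)
print(np.mean(np.abs(z_lidar - z_dim)))   # mean separation along the profile
```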

Figure 1: Comparing LIDAR (red) to DIM (green)

Note that the DIM cross section (green) is considerably smoother than the LIDAR cross section (red).  This is caused by several factors:

  • The aggregate of this particular pile is very coarse, with some rocks over 2 feet in diameter, leaving a very undulating surface.  The LIDAR follows this surface fairly faithfully, whereas the DIM averages over it.
  • The AV-900 flight was rather high and the data were collected with a 16 mm lens.  This gave a ground sample distance (GSD) a little larger than is typical for this type of project.
  • Due to the coarseness of the aggregate, significant pits appear between the rocks, creating deep shadows.  SfM algorithms tend to blur in these regions, rendering the elevation less accurate than in areas of low shadow and good texture.

The impact of lower conformance is a function of both the material and the size of the stockpile (if stockpiles are what you are measuring).  For small piles of very coarse material (as in this example), the volumetric difference between LIDAR and the DIM can be as great as 20%.  On larger piles with finer aggregates, the conformance is significantly better.  For example, in this same test project, we observed less than a 0.25% difference between the LIDAR and the DIM on a pile of #5 gravel containing about 30,000 cubic yards.

There still remains the question of which is more accurate: the volume as computed from the LIDAR or the volume as computed from the DIM?  I think that if the LIDAR data are collected with a post spacing of one-half the diameter of the average rock, the LIDAR will be the more accurate (assuming that the system is well calibrated and flown at very low altitude).  However, the DIM is certainly sufficiently accurate for the vast majority of aggregate volumetric work, so long as collection and processing best practices are strictly followed.  For most high accuracy volumetric projects, manned LIDAR flights are prohibitively expensive.

We continue to do many experiments with local and network accuracy as well as methods to improve and quantify conformance.  I’ll report our results here and in other articles as we continue to build our knowledge base.


Top Ten Considerations for Selecting a Drone Mapping Services Vendor

You recognize that significant benefits would be realized by transitioning mine site mapping/volumetrics to drones (more properly, small Unmanned Aerial Systems, sUAS). You have decided, at least for the immediate future, to use an outside service provider rather than internalize the process.

Since you have, at least for the present, decided to outsource drone-collected mapping and volumetrics, the task now is to select a qualified company to perform these services. A checklist for evaluating a potential service provider should include these questions:

  • Is the vendor authorized to fly by the appropriate regulatory body (e.g. in the USA, the Federal Aviation Administration requires a Section 333 exemption permitting commercial drone flights)?
  • Does the vendor have sUAS aircraft liability insurance?
  • Are the rights to the collected data clearly spelled out?
  • Do you feel confident that the vendor's methodology for rigorous network/local accuracy (surveying accuracy) will meet your requirements? For example, a 4 inch vertical error in a borrow pit computation amounts to about 538 cubic yards of volumetric error per acre (4/12 ft × 43,560 ft² per acre ÷ 27 ft³ per yd³ ≈ 538 yd³)!
  • For projects that require Network Accuracy (you will need it anytime you intend to extract information such as elevation models or contours, or to perform time series analysis), can your service provider tie their results to a reference network that can be independently verified?
  • Does the vendor have a plan for incorporating surveyed quality assurance check points that will be captured in the aerial flight?
  • Does the vendor understand how to incorporate design information such as “bottom” lines, reclaim tunnel models, complex a priori stockpile toes and so forth into the modeling process?
  • Does the vendor have a reasonable approach to allowing you to collaborate on resolving project boundaries, stockpile identification, stockpile toe definitions, occluded areas and so forth?
  • Have the proposed ground personnel worked on mine sites, and do they have safety awareness? For example, for USA mine site operations, do they have basic MSHA Part 46 training?
  • Can the vendor provide references?

You should engage in a pilot project with your candidate vendor. This will limit your initial investment and give you an opportunity to fully vet the proposed provider before committing to a long term relationship. You will want to have independent test data to validate the vendor’s solution.

An immediate red flag is a potential vendor who will not explain their methods in detail, hiding behind a veil of “well, that is our proprietary method that sets us apart from our competitors.” The plain English translation of this is “I have no clue!”

Accuracy, Precision and all that

I was recently at a Transportation Research Board (TRB) subcommittee meeting where we were discussing accuracy and precision (no one used the word "resolution"). After listening for a bit, I experienced a sense of déjà vu. It was a TRB meeting several years ago that inspired me to write an article for my column in LIDAR News ("Random Points") on this subject. I am repeating the essence of that article here since there is a follow-on discussion that requires this foundation.

There is a lot of argument out in the data acquisition community surrounding these topics. It nearly always comes up in debates about why a particular vendor's aerial data are better than that stuff on Google Earth®. There is also the old saw, "bad data is better than no data." Of course, depending on your use, you had better be able to quantify how bad and in what ways.

It’s no wonder that a lot of confusion exists over quantifying accuracy. Every time I have been in a room full of experts, we argue about the specific meaning of the terms. Since I have the floor here for the moment, we’ll go with my descriptions! A caveat however – this article is meant to provide a bit of insight. It is not a vetted technical article and thus you should use my descriptions and analogies with a lot of caution.

We will specifically look at geopositional accuracy as opposed to other accuracy issues such as attribute accuracy (e.g. is the 'color' attribute correct?). For a more detailed look at geopositional measurement, I think the Washington State DOT "Highway Surveying Manual" is an excellent read (easily found on the web). On the other hand, I find the FGDC standards very dry and light on explanation.

To discuss the geopositional quality of data, I think you need to fully understand the following terms:

  • Network Accuracy (often called “Absolute Accuracy”)
  • Local Accuracy
  • Precision
  • Resolution
  • Density

There is no other way to do this than to just jump in so here we go!

Pick up a ruler and look at it. The fineness (spacing) of the tick marks is the resolution. Similarly, if you have a digital voltmeter, the number of digits in the display determines the resolution (it's a bit more detailed than this, but this is close enough for our purposes). Note that this parameter has nothing to do with 'precision' or 'accuracy.'

Precision is a measure of the repeatability of a measurement under identical environmental circumstances (meaning, for example, that if you made repeated length measurements with a steel tape over a number of days where the temperature varied, you would violate the 'identical environmental conditions' restriction). It always speaks to repeating the same measurement multiple times. Since in LIDAR and imaging work we very seldom do repeated measurements, this is perhaps the most misrepresented term in our work.

Here is a simple experiment that illustrates precision. Take a tape measure and measure the height of a door. Now, using the exact same measurement spot, tape and technique, repeat this 9 more times. Write down your readings to the highest level of resolution supported by your tape (remember resolution?). The range of your readings gives you a measure of the precision, not only of your device (the tape) but of your system (where you place your eye each time, how closely you measure the same spot, how hard you pull on the tape and so on). The assumption here is that you are measuring some constant object so that variation is solely due to you and your device, not the object being measured. In reality, this may or may not be the case!

Now those of you who have studied basic statistics know that if you repeat these measurements enough times (say 30), a plot of the results will produce the ubiquitous Normal (Gaussian, bell, etc.) curve. Precision is statistically quantified as variance (or standard deviation). Notice that a tape made of steel and a tape with identical ticks made of rubber will have the same resolution but radically different precision.
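In code, the door experiment boils down to a mean, a standard deviation and a range; the ten readings below are made up for illustration:

```python
# The door experiment in numbers: precision is the spread of repeated
# readings.  The ten readings below are made up for illustration.

import statistics

readings_m = [2.000, 2.001, 1.999, 2.002, 2.000,
              1.998, 2.001, 2.000, 1.999, 2.000]

mean = statistics.mean(readings_m)
stdev = statistics.stdev(readings_m)        # sample standard deviation
spread = max(readings_m) - min(readings_m)  # range of the readings

print(f"mean = {mean:.4f} m, stdev = {stdev * 1000:.2f} mm, "
      f"range = {spread * 1000:.1f} mm")
```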

I hope that you notice that we still have not touched on accuracy. For example, suppose we did our experiment of making 30 repeated measurements of the height of a specific spot on a door with an uncalibrated (more on this later) tape having a resolution of 0.001 meters. Suppose we came up with an average measurement value of 2.000 m, a largest reading of 2.002 m and a smallest reading of 1.997 m (for you statisticians, let's say we have a standard deviation of 1.8 mm). What can we say at this point? Well, the resolution of our tape is simply a given (we will ignore fudging resolution by linear interpolation). Our measurement precision is quite "good," with a maximum deviation of only 3 times the resolution of our tape. However, we cannot say anything at all about accuracy!

Here’s the problem. I can just go into my workshop and whack off a piece of electrician’s steel fish tape. I can mark it with measurements (OK, this would be pretty tedious, I agree!) by just eyeballing, and voilà, I have a steel tape! It would be quite precise if I did not subject it to temperature variations during my sequence of 30 measurements. However, it would, no doubt, be quite inaccurate when compared to a known length. And this is key: you cannot make a judgment about accuracy without having a ‘standard’ to which you are making comparisons.

In geopositional work, we are concerned with two types of accuracy. Network Accuracy (which I usually call absolute accuracy, though this is a really loose term) describes how closely your measurements match a known external reference system (what we call a ‘datum’). Local Accuracy (also often called relative accuracy) deals with the accuracy of measured quantities such as lengths and areas with respect to a standard. By this we mean: if you measure a length or an area, how ‘close’ are you to the true value? Note that you can very accurately measure the distance between two fixed points yet be clueless as to the location of the points relative to some outside reference system (again, the ‘datum’). This is the case with our measurement of the door. If I calibrated my tape by comparing it to a ‘standard’ meter, I could then use it to very accurately and precisely (the precision coming from my construction of the tape, as verified in my repeated measurements experiment) measure the height of the door. Yet I would still have no idea of where in the ‘world’ the two end points of my measurements were located. This is an example of very good relative (local) accuracy yet very poor network (absolute) accuracy.
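A tiny numerical illustration of this distinction: shift every point in a survey by the same unknown offset, and all inter-point distances (local accuracy) are untouched while every coordinate (network accuracy) is wrong. The coordinates below are hypothetical:

```python
# Shifting every point by the same unknown offset: inter-point distances
# (local accuracy) survive; coordinates (network accuracy) do not.
# All coordinates are hypothetical, in meters.

import math

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

points = [(100.0, 200.0), (130.0, 240.0)]
offset = (5.0, -3.0)                        # an unknown datum shift
shifted = [(x + offset[0], y + offset[1]) for x, y in points]

print(distance(*points))    # 50.0 m: the true length
print(distance(*shifted))   # 50.0 m: local accuracy is preserved
print(shifted[0])           # each point is ~5.8 m off in the datum
```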

So finally we are left with the term density. This parameter is not related to accuracy, precision or resolution. In LIDAR work, it would be the number of points per unit area; in imagery work, the number of pixels per unit area. Note that this is often called ‘resolution.’ In imagery work, if using an array sensor, it may be roughly synonymous with resolution. When using scanning LIDAR systems, it is seldom synonymous with resolution. Am I splitting hairs here? No, not at all. If you followed the above discussion, you will realize that precision is tied to the quantum of measurement, and the size of that quantum is the resolution, not the density (or point spacing). In looking back over this paragraph, I have confused even myself! Basically what I am saying is that it is entirely possible to limit the scanning density of a LIDAR system to roughly 1 point per meter yet have an available horizontal resolution of a few centimeters.

Figure 1 provides a nice physical representation of these terms. Note that resolution would be the width of the target rings. Here we are imagining that the bull’s-eye represents a known location in our datum (maybe we placed some rings around a National Geodetic Survey monument and are taking potshots with the old Winchester!).

Figure 1: Accuracy, Precision, Resolution

A figure that I lifted directly from Wikipedia (Figure 2) provides a more statistical view of accuracy versus precision. Note here that the distance of the mean (average) of the repeated measurements from the reference value speaks to the accuracy, whereas the ‘spread’ of the measurements (variance) speaks to the precision.

Figure 2: A more statistical view of Accuracy and Precision

With this foundation in terminology, we will address how these factors play in to LIDAR and other point cloud data – stay tuned.